Deployment of a Subway Bearer Network Featuring High-Speed Self Recovery
Service Requirements and Solution Description
Service Requirements
With economic and social development, traveling by subway has become a major way to avoid traffic congestion in cities. A more diverse range of IP services and ever-increasing data traffic require a highly secure and reliable public subway transportation system. The legacy subway bearer network can no longer meet these requirements, and a digital subway system requires a more robust and reliable bearer network. A modernized subway bearer network needs to meet the following requirements:
- High reliability and security: As part of the public transportation system, the subway requires a reliable and secure bearer network.
- Sufficient data capacity: High passenger traffic and a growing number of data terminals require the subway bearer network to provide sufficient data capacity and data switching capacity.
- Support for a diverse range of service types: The subway system involves different service types, such as the control system, advertising media, and daily office services, requiring the subway bearer network to support a diverse range of service types.
The IP data communication network has become the mainstream data communication network: it supports various access modes and scales to large networks. Constructing an IP-based subway bearer network has therefore become a trend in future development.
Huawei offers the hierarchy of VPN (HoVPN)-based High-Speed Self Recovery (HSR) solution to implement secure and reliable subway system operation and support a diverse range of service types for the subway system. The HSR solution uses Huawei agile switches to construct a hierarchical network based on MPLS L3VPN technology, provides powerful service support and simple, flexible networking modes, and is suitable for large-scale subway bearer networks. The solution adopts multiple protection technologies, including bidirectional forwarding detection (BFD), TE hot standby (HSB), VPN fast reroute (FRR), and traffic forwarding on the Virtual Router Redundancy Protocol (VRRP) backup device, and provides protection switchovers within milliseconds so that end-to-end link switchovers complete without being noticed by users.
Solution Overview
The HoVPN-based HSR solution is designed to ensure network reliability, scalability, maintainability, and multi-service support, provide a hierarchical network structure, and reduce networking costs. Figure 2-98 shows the network topology of the HSR solution.
In Figure 2-98:
- Three S12700E switches at the core layer are fully meshed to form a core ring, and the data center site and two subway sites exchange data across the core ring.
- Two S6730-H switches are deployed as aggregation switches at each subway site and form a looped square topology with two S12700E switches on the core ring. Alternatively, the S6730-H switches at multiple sites can be connected in series and then form a looped square topology with two S12700E switches on the core ring. VRRP is configured on the S6730-H switches so that they function as the user gateways of each subway site. The data center site uses two S12700E switches as aggregation switches and deploys the same services on them as on the S6730-H switches.
- Layer 2 switches are deployed at the access layer of each site to form an access ring and are dual-homed to the two S6730-H switches at subway sites or to the two S12700E switches at the data center site.
- The network transmits all service traffic of the subway system, including traffic of daily office, advertising media, and train control management.
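The redundancy relationships described above (a fully meshed core ring plus a looped square at each subway site) can be summarized in a few lines of code. The following Python sketch is purely illustrative and is not part of the solution; the device names and links are assumptions taken from the description above, and only one subway site is modeled. It checks that each core switch has two core neighbors and that each aggregation switch has both a direct uplink and a path through its peer:

```python
# Illustrative sketch only: models the topology described above (core ring of three
# S12700E switches, one subway site with two S6730-H aggregation switches forming
# a looped square with two core switches) and checks the redundancy properties.

links = [
    ("Core_SPE1", "Core_SPE2"), ("Core_SPE2", "Core_SPE3"), ("Core_SPE3", "Core_SPE1"),   # core ring
    ("Site1_UPE1", "Core_SPE1"), ("Site1_UPE2", "Core_SPE2"), ("Site1_UPE1", "Site1_UPE2"),  # looped square
]

def neighbors(node):
    """Return the set of devices directly connected to node."""
    return {b for a, b in links if a == node} | {a for a, b in links if b == node}

# Each core switch sits on the ring, so it must have exactly two core neighbors.
for core in ("Core_SPE1", "Core_SPE2", "Core_SPE3"):
    assert len([n for n in neighbors(core) if n.startswith("Core")]) == 2

# Each aggregation switch keeps two independent paths toward the core:
# one direct uplink and one through its peer aggregation switch.
for upe in ("Site1_UPE1", "Site1_UPE2"):
    direct_uplinks = [n for n in neighbors(upe) if n.startswith("Core")]
    peer_links = [n for n in neighbors(upe) if n.startswith("Site")]
    assert len(direct_uplinks) == 1 and len(peer_links) == 1

print("Core ring and looped-square redundancy checks passed.")
```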
Service Deployment
| Item | Description |
|---|---|
| IGP | Use OSPF as the IGP and run OSPF between aggregation and core switches to ensure that there are reachable routes between these switches and to establish Multiprotocol Label Switching (MPLS) Label Distribution Protocol (LDP) and MPLS Traffic Engineering (TE) tunnels over the OSPF routes. |
| BGP | Deploy Multiprotocol Border Gateway Protocol (MP-BGP) to implement L3VPN. Establish Internal BGP (IBGP) peer relationships between aggregation and core switches, and between core switches, and advertise VPN routes. |
| Routing policies | Use routing policies to set the route preferred value and community attribute in order to filter, select, and back up routes. |
| MPLS LDP | Run LDP between aggregation and core switches so that L3VPN data is transmitted over label switched links. Configure BFD for label switched paths (LSPs) to implement fast link switchovers. |
| MPLS TE | Deploy MPLS TE tunnels to transmit L3VPN traffic. That is, establish primary and backup TE tunnels between each S6730-H switch and its directly connected S12700E switch, and between each S12700E switch and its directly connected S6730-H switch. Enable TE HSB and configure BFD for TE HSB so that traffic can be switched from a faulty primary TE tunnel to the backup TE tunnel within 50 ms. |
| L3VPN | Configure different VPNs for services such as daily office, advertising media, and train control management to isolate these services. In this scenario, one VPN is configured as an example. |
| BFD | Use BFD on each node to detect faults and implement fast traffic switchovers in case of faults. In this example, multiple BFD functions need to be deployed, including BFD for VRRP, BFD for LSP, and BFD for TE, to complete end-to-end switchovers within 50 ms. |
| TE HSB | Establish bidirectional TE tunnels between S6730-H aggregation switches and S12700E core switches, and deploy HSB for MPLS TE tunnels to provide primary and backup constraint-based routed label switched paths (CR-LSPs) for each TE tunnel. Configure BFD for CR-LSPs to quickly detect CR-LSP faults. If a fault occurs on the primary CR-LSP, L3VPN traffic can be quickly switched to the hot-standby CR-LSP, providing end-to-end (E2E) traffic protection. |
| Hybrid FRR | Enable IP + VPN hybrid FRR on S6730-H switches. If a fault occurs on the downlink access link, the connected interface on one S6730-H detects the fault and quickly switches traffic to the other S6730-H, which then forwards the traffic to access devices. |
| VRRP | Deploy VRRP between the two S6730-H switches to implement gateway backup for access users. Configure BFD for VRRP to speed up fault detection, VRRP convergence, and traffic switchovers. To prevent traffic loss caused by aggregation switch faults and shorten service interruptions, also configure the VRRP backup device to forward service traffic. |
Device Selection and Restrictions
| NE | Device Selection and Restrictions |
|---|---|
| Core nodes and data center aggregation nodes | Use S12700E switches as core nodes and data center aggregation nodes, and install MPUEs and X series LPUs on these switches. To ensure reliability, ensure that: |
| Aggregation nodes at subway sites | Use S6730-H switches as aggregation switches. |
Version Requirements
| Version | Matching Devices |
|---|---|
| V200R019C10 and later versions | Use S12700E switches as core devices and S6730-H switches as aggregation devices. |

NOTE: The following uses S series switches running V200R019C10 as an example to describe the configuration procedure.
Basic Configurations
Data Plan
Network Topology
Construct the network based on the topology shown in Figure 2-99, name the network devices, and configure IP addresses for the network devices as well as for their service and user interfaces.
Interface Data Plan
Table 2-144 and Table 2-145 list interfaces and their IP addresses on devices.
| NE Role | Interface Number | Member Interfaces |
|---|---|---|
| Core_SPE1 | Eth-Trunk 4 | XGigabitEthernet5/0/4, XGigabitEthernet5/0/5, XGigabitEthernet5/0/6, XGigabitEthernet5/0/7 |
| Core_SPE1 | Eth-Trunk 5 | XGigabitEthernet1/0/0, XGigabitEthernet1/0/1, XGigabitEthernet1/0/2, XGigabitEthernet1/0/3 |
| Core_SPE1 | Eth-Trunk 17 | XGigabitEthernet6/0/0, XGigabitEthernet6/0/1, XGigabitEthernet6/0/2, XGigabitEthernet6/0/3 |
| Core_SPE2 | Eth-Trunk 4 | XGigabitEthernet6/0/4, XGigabitEthernet6/0/5, XGigabitEthernet6/0/6, XGigabitEthernet6/0/7 |
| Core_SPE2 | Eth-Trunk 2 | XGigabitEthernet3/0/4, XGigabitEthernet3/0/5, XGigabitEthernet3/0/6, XGigabitEthernet3/0/7 |
| Core_SPE2 | Eth-Trunk 17 | XGigabitEthernet5/0/0, XGigabitEthernet5/0/1, XGigabitEthernet5/0/2, XGigabitEthernet5/0/3 |
| Core_SPE3 | Eth-Trunk 5 | XGigabitEthernet1/0/0, XGigabitEthernet1/0/1, XGigabitEthernet1/0/2, XGigabitEthernet1/0/3 |
| Core_SPE3 | Eth-Trunk 2 | XGigabitEthernet2/0/4, XGigabitEthernet2/0/5, XGigabitEthernet2/0/6, XGigabitEthernet2/0/7 |
| Site1_UPE1 | Eth-Trunk 17 | XGigabitEthernet1/0/0, XGigabitEthernet1/0/1, XGigabitEthernet1/0/2, XGigabitEthernet1/0/3 |
| Site1_UPE1 | Eth-Trunk 7 | XGigabitEthernet4/0/4, XGigabitEthernet4/0/5, XGigabitEthernet4/0/6, XGigabitEthernet4/0/7 |
| Site1_UPE2 | Eth-Trunk 17 | XGigabitEthernet6/0/0, XGigabitEthernet6/0/1, XGigabitEthernet6/0/2, XGigabitEthernet6/0/3 |
| Site1_UPE2 | Eth-Trunk 7 | XGigabitEthernet6/0/4, XGigabitEthernet6/0/5, XGigabitEthernet6/0/6, XGigabitEthernet6/0/7 |
| NE Role | Local Interface | IP Address | Description |
|---|---|---|---|
| Core_SPE1 | Loopback 1 | 172.16.0.5/32 | - |
| Core_SPE1 | Eth-Trunk 4 | 172.17.4.8/31 | Core_SPE1 to Core_SPE2 |
| Core_SPE1 | Eth-Trunk 5 | 172.17.4.2/31 | Core_SPE1 to Core_SPE3 |
| Core_SPE1 | Eth-Trunk 17 | 172.17.4.10/31 | Core_SPE1 to Site1_UPE1 |
| Core_SPE1 | XGigabitEthernet6/0/4 | 172.17.10.2/31 | Core_SPE1 to Site3_UPE6 |
| Core_SPE2 | Loopback 1 | 172.16.0.3/32 | - |
| Core_SPE2 | Eth-Trunk 4 | 172.17.4.9/31 | Core_SPE2 to Core_SPE1 |
| Core_SPE2 | Eth-Trunk 2 | 172.17.4.0/31 | Core_SPE2 to Core_SPE3 |
| Core_SPE2 | Eth-Trunk 17 | 172.17.4.12/31 | Core_SPE2 to Site1_UPE2 |
| Core_SPE2 | XGigabitEthernet5/0/5 | 172.16.8.178/31 | Core_SPE2 to Site2_UPE3 |
| Core_SPE3 | Loopback 1 | 172.16.0.4/32 | - |
| Core_SPE3 | Eth-Trunk 5 | 172.17.4.3/31 | Core_SPE3 to Core_SPE1 |
| Core_SPE3 | Eth-Trunk 2 | 172.17.4.1/31 | Core_SPE3 to Core_SPE2 |
| Core_SPE3 | XGigabitEthernet6/0/1 | 172.16.8.213/31 | Core_SPE3 to Site3_UPE5 |
| Core_SPE3 | XGigabitEthernet6/0/3 | 172.16.8.183/31 | Core_SPE3 to Site2_UPE4 |
| Site1_UPE1 | Loopback 1 | 172.16.2.51/32 | - |
| Site1_UPE1 | Eth-Trunk 17 | 172.17.4.11/31 | Site1_UPE1 to Core_SPE1 |
| Site1_UPE1 | Eth-Trunk 7 | 172.17.4.14/31 | Site1_UPE1 to Site1_UPE2 |
| Site1_UPE1 | XGigabitEthernet1/0/4.200 | 172.18.200.66/26 | Site1_UPE1 to CE1 |
| Site1_UPE2 | Loopback 1 | 172.16.2.50/32 | - |
| Site1_UPE2 | Eth-Trunk 17 | 172.17.4.13/31 | Site1_UPE2 to Core_SPE2 |
| Site1_UPE2 | Eth-Trunk 7 | 172.17.4.15/31 | Site1_UPE2 to Site1_UPE1 |
| Site1_UPE2 | XGigabitEthernet1/0/4.200 | 172.18.200.67/26 | Site1_UPE2 to CE1 |
| Site2_UPE3 | Loopback 1 | 172.16.2.75/32 | - |
| Site2_UPE3 | XGigabitEthernet0/0/1 | 172.16.8.179/31 | Site2_UPE3 to Core_SPE2 |
| Site2_UPE3 | XGigabitEthernet0/0/4 | 172.16.8.180/31 | Site2_UPE3 to Site2_UPE4 |
| Site2_UPE3 | XGigabitEthernet0/0/2.150 | 172.18.150.2/26 | Site2_UPE3 to CE2 |
| Site2_UPE4 | Loopback 1 | 172.16.2.76/32 | - |
| Site2_UPE4 | XGigabitEthernet0/0/1 | 172.16.8.182/31 | Site2_UPE4 to Core_SPE3 |
| Site2_UPE4 | XGigabitEthernet0/0/4 | 172.16.8.181/31 | Site2_UPE4 to Site2_UPE3 |
| Site2_UPE4 | XGigabitEthernet0/0/2.150 | 172.18.150.3/26 | Site2_UPE4 to CE2 |
| Site3_UPE5 | Loopback 1 | 172.16.2.87/32 | - |
| Site3_UPE5 | XGigabitEthernet0/0/4 | 172.16.8.212/31 | Site3_UPE5 to Core_SPE3 |
| Site3_UPE5 | XGigabitEthernet0/0/1 | 172.17.10.0/31 | Site3_UPE5 to Site3_UPE6 |
| Site3_UPE5 | XGigabitEthernet0/0/2.100 | 172.18.100.2/26 | Site3_UPE5 to CE3 |
| Site3_UPE6 | Loopback 1 | 172.16.2.86/32 | - |
| Site3_UPE6 | XGigabitEthernet0/0/4 | 172.17.10.3/31 | Site3_UPE6 to Core_SPE1 |
| Site3_UPE6 | XGigabitEthernet0/0/1 | 172.17.10.1/31 | Site3_UPE6 to Site3_UPE5 |
| Site3_UPE6 | XGigabitEthernet0/0/2.100 | 172.18.100.3/26 | Site3_UPE6 to CE3 |
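All interconnect links in the plan use 31-bit masks, so each point-to-point link consumes exactly two addresses. The following Python sketch is an illustrative sanity check of such a plan, not part of the solution; the link list is a small sample transcribed from the table above rather than read from the devices. It verifies that both ends of each link fall into the same /31:

```python
# Illustrative sketch: verify that both ends of each point-to-point link in the plan
# belong to the same /31 subnet (two-address point-to-point addressing).
import ipaddress

p2p_links = [
    ("Core_SPE1", "172.17.4.8/31",   "Core_SPE2",  "172.17.4.9/31"),
    ("Core_SPE1", "172.17.4.2/31",   "Core_SPE3",  "172.17.4.3/31"),
    ("Core_SPE1", "172.17.4.10/31",  "Site1_UPE1", "172.17.4.11/31"),
    ("Core_SPE1", "172.17.10.2/31",  "Site3_UPE6", "172.17.10.3/31"),
    ("Core_SPE2", "172.16.8.178/31", "Site2_UPE3", "172.16.8.179/31"),
    ("Core_SPE3", "172.16.8.213/31", "Site3_UPE5", "172.16.8.212/31"),
]

for dev_a, addr_a, dev_b, addr_b in p2p_links:
    net_a = ipaddress.ip_interface(addr_a).network
    net_b = ipaddress.ip_interface(addr_b).network
    # Both interface addresses must fall into the same two-address /31 subnet.
    assert net_a == net_b and net_a.prefixlen == 31, f"{dev_a}-{dev_b} addressing mismatch"
    print(f"{dev_a} <-> {dev_b}: {net_a} OK")
```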
Configuring Device Information
Data Plan
Set parameters based on network requirements (such as the network scale and topology). The following table lists the recommended values and precautions for reference only.
Configure device information on all devices based on the network topology.
Device information includes the site name, device role, and device number. Each device is named in the format of AA_BBX.
- AA: indicates the site name, such as Core and Site1.
- BB: indicates the device role, such as superstratum provider edge (SPE), user-end provider edge (UPE), and customer edge (CE).
- X: indicates the device number, starting from 1.
For example, Site1_UPE1 indicates a UPE numbered 1 at site 1. The following table describes the data plan.
| Parameter | Value | Description |
|---|---|---|
| sysname | Site1_UPE1 | Device name |
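Because every device name follows the AA_BBX convention described above, operations scripts can recover the site, role, and device number directly from a sysname. The following Python sketch is purely illustrative and is not part of the solution; it assumes only the roles (SPE, UPE, CE) listed above:

```python
# Illustrative sketch: parse a device name that follows the AA_BBX convention
# (AA = site name, BB = device role, X = device number) described above.
import re

NAME_PATTERN = re.compile(r"^(?P<site>[A-Za-z0-9]+)_(?P<role>SPE|UPE|CE)(?P<number>\d+)$")

def parse_device_name(name: str) -> dict:
    match = NAME_PATTERN.match(name)
    if match is None:
        raise ValueError(f"{name!r} does not follow the AA_BBX convention")
    return {
        "site": match.group("site"),           # e.g. Site1 or Core
        "role": match.group("role"),           # SPE, UPE, or CE
        "number": int(match.group("number")),  # device number, starting from 1
    }

print(parse_device_name("Site1_UPE1"))   # {'site': 'Site1', 'role': 'UPE', 'number': 1}
print(parse_device_name("Core_SPE2"))    # {'site': 'Core', 'role': 'SPE', 'number': 2}
```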
Configuring Interfaces
Procedure
- Add physical interfaces to Eth-Trunk interfaces.
The following uses the configuration of Core_SPE1 as an example. The configurations of other devices are similar to that of Core_SPE1.
#
interface XGigabitEthernet1/0/0
 eth-trunk 5
#
interface XGigabitEthernet1/0/1
 eth-trunk 5
#
interface XGigabitEthernet1/0/2
 eth-trunk 5
#
interface XGigabitEthernet1/0/3
 eth-trunk 5
#
interface XGigabitEthernet5/0/4
 eth-trunk 4
#
interface XGigabitEthernet5/0/5
 eth-trunk 4
#
interface XGigabitEthernet5/0/6
 eth-trunk 4
#
interface XGigabitEthernet5/0/7
 eth-trunk 4
#
interface XGigabitEthernet6/0/0
 eth-trunk 17
#
interface XGigabitEthernet6/0/1
 eth-trunk 17
#
interface XGigabitEthernet6/0/2
 eth-trunk 17
#
interface XGigabitEthernet6/0/3
 eth-trunk 17
#
- Configure descriptions and IP addresses for interfaces.
The following uses Core_SPE1 as an example to describe how to configure the interface description, IP address, and Eth-Trunk interface working mode. The configurations of other devices are similar to that of Core_SPE1.
#
interface Eth-Trunk4
 undo portswitch
 description Core_SPE1 to Core_SPE2
 ip address 172.17.4.8 255.255.255.254
 mode lacp
#
interface Eth-Trunk5
 undo portswitch
 description Core_SPE1 to Core_SPE3
 ip address 172.17.4.2 255.255.255.254
 mode lacp
#
interface Eth-Trunk17
 undo portswitch
 description Core_SPE1 to Site1_UPE1
 ip address 172.17.4.10 255.255.255.254
 mode lacp
#
interface XGigabitEthernet6/0/4
 undo portswitch
 description Core_SPE1 to Site3_UPE6
 ip address 172.17.10.2 255.255.255.254
#
interface LoopBack1
 description ** GRT Management Loopback **
 ip address 172.16.0.5 255.255.255.255
#
- Configure Eth-Trunk interfaces to function as 40GE interfaces.
Run the least active-linknumber 4 command on the Eth-Trunk interfaces of all switches so that each Eth-Trunk interface functions as a 40GE interface. With this setting, if any member interface of an Eth-Trunk interface goes Down, the entire Eth-Trunk interface goes Down. The following uses the configuration of Core_SPE1 as an example. The configurations of other devices are similar to that of Core_SPE1.
#
interface Eth-Trunk4
 least active-linknumber 4
#
interface Eth-Trunk5
 least active-linknumber 4
#
interface Eth-Trunk17
 least active-linknumber 4
#
- Create Eth-Trunk load balancing profiles and apply the profiles to Eth-Trunk interfaces.
Configure load balancing based on the source and destination port numbers. The following uses the configuration of Core_SPE1 as an example. The configurations of other devices are similar to that of Core_SPE1.
#
load-balance-profile CUSTOM
 ipv6 field l4-sport l4-dport
 ipv4 field l4-sport l4-dport
#
interface Eth-Trunk4
 load-balance enhanced profile CUSTOM
#
interface Eth-Trunk5
 load-balance enhanced profile CUSTOM
#
interface Eth-Trunk17
 load-balance enhanced profile CUSTOM
#
- Disable STP globally.
All devices on the entire network are connected through Layer 3 interfaces, and Layer 2 loop prevention protocols are not required. Therefore, disable STP globally. The following uses the configuration of Core_SPE1 as an example. The configurations of other devices are similar to that of Core_SPE1.
#
stp disable
#
Enabling BFD
Context
- The MPU must be MPUA, MPUD, or MPUE.
- For the S6730-H, the set service-mode command must be run to configure the switch to work in enhanced BFD mode.
Procedure
- Configure SPE devices.
The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar to that of Core_SPE1.
#
bfd
#
- Configure UPE devices.
The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar to that of Site1_UPE1.
#
bfd
#
Deploying OSPF
Deployment Roadmap
Configure OSPF as an IGP to ensure that there are reachable routes between devices on the entire network, and establish MPLS LDP and MPLS TE tunnels using OSPF routes. The configuration roadmap is as follows:
- Add all devices to Area 0 and advertise their directly connected network segments and loopback 1 addresses.
- Configure all interfaces that are not running OSPF as silent interfaces to prohibit these interfaces from receiving and sending OSPF packets. This configuration enhances OSPF networking adaptability and reduces system resource consumption.
- Set the OSPF network type to point-to-point (P2P) on the interconnected main interfaces using IP addresses with 31-bit subnet masks.
- Configure synchronization between LDP and OSPF to prevent traffic loss caused by a primary/backup LSP switchover.
Configuring OSPF
Context
Configuring OSPF ensures that there are reachable public network routes between UPE devices and SPE devices.
Procedure
- Configure SPE devices.
The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar to that of Core_SPE1.
router id 172.16.0.5    //Configure a router ID.
#
interface Eth-Trunk4
 ospf network-type p2p    //Set the OSPF network type to P2P on the interconnected main interfaces using IP addresses with 31-bit subnet masks.
#
interface Eth-Trunk5
 ospf network-type p2p
#
interface Eth-Trunk17
 ospf network-type p2p
#
interface XGigabitEthernet6/0/4
 ospf network-type p2p
#
ospf 1
 silent-interface all    //Disable all interfaces from sending and receiving OSPF packets.
 undo silent-interface Eth-Trunk4    //Enable the interface to send and receive OSPF packets.
 undo silent-interface Eth-Trunk5
 undo silent-interface Eth-Trunk17
 undo silent-interface XGigabitEthernet6/0/4
 spf-schedule-interval millisecond 10    //Set the route calculation interval to 10 ms to speed up route convergence.
 lsa-originate-interval 0    //Set the interval for updating LSAs to 0.
 lsa-arrival-interval 0    //Set the interval for receiving LSAs to 0 so that topology or route changes are detected immediately, speeding up route convergence.
 graceful-restart period 600    //Enable OSPF GR.
 flooding-control    //Restrict the flooding of updated LSAs to maintain the stability of OSPF neighbor relationships.
 area 0.0.0.0
  authentication-mode hmac-sha256 1 cipher %^%#NInJJ<oF9VXb:BS~~9+JT'suROXkVHNG@8+*3FyB%^%#    //Set the OSPF area authentication mode and password.
  network 172.16.0.5 0.0.0.0
  network 172.17.4.2 0.0.0.0
  network 172.17.4.8 0.0.0.0
  network 172.17.4.10 0.0.0.0
  network 172.17.10.2 0.0.0.0
#
- Configure UPE devices.
The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar to that of Site1_UPE1.
router id 172.16.2.51
#
interface Eth-Trunk7
 ospf network-type p2p
#
interface Eth-Trunk17
 ospf network-type p2p
#
ospf 1
 silent-interface all
 undo silent-interface Eth-Trunk7
 undo silent-interface Eth-Trunk17
 graceful-restart period 600
 bandwidth-reference 100000    //Set the bandwidth reference value for calculating interface costs.
 flooding-control
 area 0.0.0.0
  authentication-mode hmac-sha256 1 cipher %^%#nU!dUe#c'J!;/%*WtZxQ<gP:'zx_E2OQnML]q;s#%^%#
  network 172.16.2.51 0.0.0.0
  network 172.17.4.11 0.0.0.0
  network 172.17.4.14 0.0.0.0
#
Verifying the Deployment
Run the display ospf peer command to check OSPF neighbor information. The following uses the command output of Core_SPE1 as an example. If the State field displays Full, the OSPF neighbor relationship has been established.
[Core_SPE1] display ospf peer

          OSPF Process 1 with Router ID 172.16.0.5
                  Neighbors

 Area 0.0.0.0 interface 172.17.4.8(Eth-Trunk4)'s neighbors
 Router ID: 172.16.0.3        Address: 172.17.4.9      GR State: Normal
   State: Full  Mode:Nbr is Slave  Priority: 1
   DR: None   BDR: None   MTU: 0
   Dead timer due in 40  sec
   Retrans timer interval: 4
   Neighbor is up for 00:53:42
   Authentication Sequence: [ 0 ]

                  Neighbors

 Area 0.0.0.0 interface 172.17.4.2(Eth-Trunk5)'s neighbors
 Router ID: 172.16.0.4        Address: 172.17.4.3      GR State: Normal
   State: Full  Mode:Nbr is Master  Priority: 1
   DR: None   BDR: None   MTU: 0
   Dead timer due in 37  sec
   Retrans timer interval: 4
   Neighbor is up for 00:53:22
   Authentication Sequence: [ 0 ]

                  Neighbors

 Area 0.0.0.0 interface 172.17.4.10(Eth-Trunk17)'s neighbors
 Router ID: 172.16.2.51       Address: 172.17.4.11     GR State: Normal
   State: Full  Mode:Nbr is Slave  Priority: 1
   DR: None   BDR: None   MTU: 0
   Dead timer due in 31  sec
   Retrans timer interval: 4
   Neighbor is up for 00:53:34
   Authentication Sequence: [ 0 ]

                  Neighbors

 Area 0.0.0.0 interface 172.17.10.2(XGigabitEthernet6/0/4)'s neighbors
 Router ID: 172.16.2.86       Address: 172.17.10.3     GR State: Normal
   State: Full  Mode:Nbr is Master  Priority: 1
   DR: None   BDR: None   MTU: 0
   Dead timer due in 32  sec
   Retrans timer interval: 5
   Neighbor is up for 00:53:42
   Authentication Sequence: [ 0 ]
Deploying MPLS LDP
Deployment Roadmap
The deployment roadmap is as follows:
- Configure LSR IDs and enable MPLS LDP globally and on each interface.
- Configure synchronization between LDP and OSPF to prevent traffic loss caused by a primary/backup LSP switchover.
- Configure LDP GR to ensure uninterrupted traffic forwarding during a primary/backup switchover or protocol restart.
- Configure BFD for LSP to quickly detect LDP LSP faults on the core ring.
Data Plan
The data provided in this section is used as an example, which may vary depending on the network scale and topology.
Plan data before configuring MPLS LDP.
| Parameter | Value | Remarks |
|---|---|---|
| mpls lsr-id | Specify the IP address of Loopback 1 on an LSR as its LSR ID. | LSR IDs must be set before other MPLS commands are run. |
| label advertise | non-null | Penultimate hop popping (PHP) cannot be configured. Otherwise, the switchover performance is affected. |
| bfd bind ldp-lsp | discriminator local, discriminator remote, detect-multiplier, min-tx-interval, min-rx-interval, process-pst | This command configures static BFD for LDP LSPs. The local discriminator of the local end must be the remote discriminator of the remote end. The local BFD detection multiplier can be adjusted. The minimum interval at which BFD packets are sent and received must be set to 3.3 ms. To speed up a traffic switchover, associate the BFD session with the port state table (PST). |
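The BFD timers planned above translate directly into the failure detection time: the detection time is approximately the remote detection multiplier multiplied by the negotiated interval (the larger of the local minimum receive interval and the remote minimum transmit interval). The short Python sketch below is only a worked calculation using the values from this example (3.3 ms intervals and detection multiplier 8, as configured later in this section); it is not device code:

```python
# Illustrative sketch: estimate the BFD detection time from the planned timers.
# Detection time ~= remote detect multiplier x negotiated interval, where the
# negotiated interval is max(local min-rx-interval, remote min-tx-interval).

def bfd_detection_time_ms(local_min_rx_ms: float, remote_min_tx_ms: float,
                          remote_detect_multiplier: int) -> float:
    negotiated_interval = max(local_min_rx_ms, remote_min_tx_ms)
    return remote_detect_multiplier * negotiated_interval

# Values from this example: both ends send and receive at 3.3 ms, detect-multiplier 8.
detection_time = bfd_detection_time_ms(3.3, 3.3, 8)
print(f"Detection time: {detection_time:.1f} ms")   # 26.4 ms, well under the 50 ms switchover target
```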
Enabling MPLS LDP
Procedure
- Configure SPE devices.
The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar to that of Core_SPE1.
mpls lsr-id 172.16.0.5    //Set an MPLS LSR ID. Using a loopback interface address is recommended.
mpls    //Enable MPLS globally.
 label advertise non-null    //Disable PHP and enable the egress node to assign labels to the penultimate hop.
#
mpls ldp    //Enable MPLS LDP globally.
#
interface Eth-Trunk4
 mpls
 mpls ldp    //Enable MPLS LDP on the interface.
#
interface Eth-Trunk5
 mpls
 mpls ldp
#
interface Eth-Trunk17
 mpls
 mpls ldp
#
interface XGigabitEthernet6/0/4
 mpls
 mpls ldp
#
- Configure UPE devices.
The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar to that of Site1_UPE1.
mpls lsr-id 172.16.2.51    //Set an MPLS LSR ID. Using a loopback interface address is recommended.
mpls    //Enable MPLS globally.
 label advertise non-null    //Disable PHP and enable the egress node to assign labels to the penultimate hop.
#
mpls ldp    //Enable MPLS LDP globally.
#
interface Eth-Trunk7
 mpls
 mpls ldp    //Enable MPLS LDP on the interface.
#
interface Eth-Trunk17
 mpls
 mpls ldp
#
Verifying the Deployment
Run the display mpls ldp session all command to check the MPLS LDP session status. The following uses the command output of Core_SPE1 as an example. If the Status field displays Operational, the MPLS LDP session has been established.
[Core_SPE1] display mpls ldp session all

 LDP Session(s) in Public Network
 Codes: LAM(Label Advertisement Mode), SsnAge Unit(DDDD:HH:MM)
 A '*' before a session means the session is being deleted.
 ------------------------------------------------------------------------------
 PeerID             Status      LAM  SsnRole  SsnAge      KASent/Rcv
 ------------------------------------------------------------------------------
 172.16.0.3:0       Operational DU   Passive  0000:00:56  226/226
 172.16.0.4:0       Operational DU   Active   0000:00:56  226/226
 172.16.2.51:0      Operational DU   Passive  0000:00:55  223/223
 172.16.2.86:0      Operational DU   Passive  0000:00:55  223/223
 ------------------------------------------------------------------------------
 TOTAL: 4 session(s) Found.
Configuring Synchronization Between LDP and OSPF
Context
LDP LSRs set up LSPs based on OSPF routes. Configure synchronization between LDP and OSPF to prevent traffic loss during a primary/backup LSP switchover when the LDP session on the primary link fails for reasons other than a link failure, or when the primary link recovers from a failure.
Procedure
- Configure SPE devices.
The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar to that of Core_SPE1.
interface Eth-Trunk4
 ospf ldp-sync    //Enable synchronization between LDP and OSPF on the interface.
 ospf timer ldp-sync hold-down 20    //Set the interval during which the interface waits for an LDP session to be created before establishing an OSPF neighbor relationship.
#
interface Eth-Trunk5
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
#
interface Eth-Trunk17
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
#
interface XGigabitEthernet6/0/4
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
#
- Configure UPE devices.
The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar to that of Site1_UPE1.
interface Eth-Trunk7
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
#
interface Eth-Trunk17
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
#
Configuring LDP GR
Context
LDP graceful restart (GR) ensures uninterrupted traffic forwarding during a primary/backup switchover or protocol restart.
Procedure
- Configure SPE devices.
The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar to that of Core_SPE1.
mpls ldp
 graceful-restart    //Enable LDP GR.
#
- Configure UPE devices.
The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar to that of Site1_UPE1.
mpls ldp
 graceful-restart
#
Configuring BFD for LSPs
Context
To improve the reliability of LDP LSPs between SPE devices on the core ring, configure static BFD for LDP LSPs to rapidly detect faults of LDP LSPs.
Procedure
- Configure SPE devices.
The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar to that of Core_SPE1.
bfd SPE1toSPE2 bind ldp-lsp peer-ip 172.16.0.3 nexthop 172.17.4.9 interface Eth-Trunk4    //Enable static BFD to monitor the LDP LSP between Core_SPE1 and Core_SPE2.
 discriminator local 317    //Specify the local discriminator. The local discriminator on the local end must be the same as the remote discriminator on the remote end.
 discriminator remote 137    //Specify a remote discriminator.
 detect-multiplier 8    //Specify the local BFD detection multiplier.
 min-tx-interval 3    //Set the minimum interval at which the local device sends BFD packets to 3.3 ms.
 min-rx-interval 3    //Set the minimum interval at which the local device receives BFD packets to 3.3 ms.
 process-pst    //Enable the system to modify the port state table (PST) when the BFD session status changes, so as to speed up the switchover.
 commit    //Commit the BFD session configuration.
#
bfd SPE1toSPE3 bind ldp-lsp peer-ip 172.16.0.4 nexthop 172.17.4.3 interface Eth-Trunk5    //Enable static BFD to monitor the LDP LSP between Core_SPE1 and Core_SPE3.
 discriminator local 32
 discriminator remote 23
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
Verifying the Deployment
Run the display bfd session all for-lsp command to check the BFD for LSP session status. The following uses the command output of Core_SPE1 as an example. If the BFD session status is Up and the type is S_LDP_LSP on Core_SPE1, the BFD for LSP session has been successfully established.
[Core_SPE1] display bfd session all for-lsp
--------------------------------------------------------------------------------
Local  Remote  PeerIpAddr      State  Type        InterfaceName
--------------------------------------------------------------------------------
317    137     172.16.0.3      Up     S_LDP_LSP   Eth-Trunk4
32     23      172.16.0.4      Up     S_LDP_LSP   Eth-Trunk5
--------------------------------------------------------------------------------
     Total UP/DOWN Session Number : 2/0
Deploying MPLS TE
Deployment Roadmap
The deployment roadmap is as follows:
- Enable MPLS TE.
- Enable MPLS, MPLS TE, and MPLS TE Constrained Shortest Path First (CSPF) globally on each node along TE tunnels, and deploy MPLS and MPLS TE on the interfaces along the TE tunnels.
- Configure tunnel paths, enable each node to use primary and backup TE tunnels, and configure primary and hot-standby CR-LSPs using the affinity attribute.
- Create L3VPN service tunnels.
  - Create primary tunnels:
    - Establish a primary tunnel TE1 between Site2_UPE3 and Core_SPE2. Specify path 1 as the primary CR-LSP and path 2 as the hot-standby CR-LSP.
    - Establish a primary tunnel TE3 between Site2_UPE4 and Core_SPE3. Specify path 5 as the primary CR-LSP and path 6 as the hot-standby CR-LSP.
  - Create backup tunnels:
    - Establish a backup tunnel TE2 between Site2_UPE3 and Core_SPE3, which is the backup tunnel of the primary tunnel TE1. Specify path 3 as the primary CR-LSP and path 4 as the hot-standby CR-LSP.
    - Establish a backup tunnel TE4 between Site2_UPE4 and Core_SPE2, which is the backup tunnel of the primary tunnel TE3. Specify path 7 as the primary CR-LSP and path 8 as the hot-standby CR-LSP.
- Configure Resource Reservation Protocol (RSVP) GR. Enable RSVP GR on all devices to prevent network interruptions caused by an active/standby switchover of RSVP nodes and to restore dynamic CR-LSPs.
- Configure BFD for CR-LSPs. Configure static BFD for CR-LSPs on all devices to speed up the switchover between the primary and hot-standby CR-LSPs.
- Create a tunnel policy. Configure TE tunnels to be preferentially selected.
Data Plan
The data provided in this section is used as an example, which may vary depending on the network scale and topology.
| Parameter | Value | Remarks |
|---|---|---|
| mpls te | - | Enable MPLS TE. |
| mpls rsvp-te | - | Enable MPLS RSVP-TE. |
| mpls rsvp-te hello | - | Enable the RSVP Hello extension mechanism. |
| mpls rsvp-te hello full-gr | - | Enable the RSVP GR capability and RSVP GR Helper capability on a GR node. |
| mpls te cspf | - | Enable the MPLS TE CSPF algorithm. |
| Parameter | Value | Remarks |
|---|---|---|
| interface Tunnel | Number of a tunnel interface | To facilitate maintenance, it is recommended that tunnel IDs be associated with device names and that descriptions be added for tunnel interfaces. |
| ip address unnumbered | interface LoopBack1 | Configure the tunnel interface to borrow the IP address of Loopback 1. |
| tunnel-protocol | mpls te | Enable the TE tunnel function. |
| destination | Loopback 1 address of the remote device | Specify a destination IP address. |
| mpls te tunnel-id | Tunnel ID | Set a tunnel ID. |
| mpls te affinity property | Affinity attribute for the primary and hot-standby CR-LSPs, based on the administrative group attributes of links | - |
| mpls te backup | hot-standby | Set the backup mode of a tunnel to hot standby. |
| bfd bind mpls-te interface Tunnel te-lsp | discriminator local, discriminator remote, detect-multiplier, min-tx-interval, min-rx-interval, process-pst | Configure static BFD to detect the hot-standby CR-LSP of a TE tunnel. Set the local discriminator of the local end to be the same as the remote discriminator of the remote end, and adjust the local BFD detection multiplier. Set the minimum interval at which BFD packets are sent and received to 3.3 ms. Associate the BFD session with the PST to speed up a traffic switchover. |
| bfd bind mpls-te interface Tunnel | discriminator local, discriminator remote, detect-multiplier, min-tx-interval, min-rx-interval, process-pst | Configure static BFD to detect the primary CR-LSP of a TE tunnel. Set the local discriminator of the local end to be the same as the remote discriminator of the remote end, and adjust the local BFD detection multiplier. Set the minimum interval at which BFD packets are sent and received to 3.3 ms. Associate the BFD session with the PST to speed up a traffic switchover. |
| tunnel-policy | Tunnel policy TSel: tunnel select-seq cr-lsp lsp load-balance-number 1. Tunnel policy on a core device, TE: tunnel select-seq cr-lsp load-balance-number 1 | Configure tunnel policies for preferentially selecting CR-LSPs. |
| Tunnel | Tunnel Interface | Tunnel ID |
|---|---|---|
| Core_SPE1 to Site1_UPE1 / Site1_UPE1 to Core_SPE1 | Tunnel611 | 71 |
| Core_SPE1 to Site1_UPE2 / Site1_UPE2 to Core_SPE1 | Tunnel622 | 82 |
| Core_SPE1 to Site3_UPE5 / Site3_UPE5 to Core_SPE1 | Tunnel721 | 312 |
| Core_SPE1 to Site3_UPE6 / Site3_UPE6 to Core_SPE1 | Tunnel711 | 311 |
| Core_SPE2 to Site2_UPE3 / Site2_UPE3 to Core_SPE2 | Tunnel111 | 111 |
| Core_SPE2 to Site2_UPE4 / Site2_UPE4 to Core_SPE2 | Tunnel121 | 121 |
| Core_SPE2 to Site1_UPE1 / Site1_UPE1 to Core_SPE2 | Tunnel612 | 72 |
| Core_SPE2 to Site1_UPE2 / Site1_UPE2 to Core_SPE2 | Tunnel621 | 81 |
| Core_SPE3 to Site2_UPE3 / Site2_UPE3 to Core_SPE3 | Tunnel112 | 112 |
| Core_SPE3 to Site2_UPE4 / Site2_UPE4 to Core_SPE3 | Tunnel122 | 122 |
| Core_SPE3 to Site3_UPE5 / Site3_UPE5 to Core_SPE3 | Tunnel722 | 322 |
| Core_SPE3 to Site3_UPE6 / Site3_UPE6 to Core_SPE3 | Tunnel712 | 321 |
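In the configurations that follow, the primary and hot-standby CR-LSPs are steered onto different links by matching each tunnel's affinity and mask against the administrative group configured on the links (for example, the group value c sets bits 0x4 and 0x8). The following Python sketch is a simplified, illustrative model of that check, assuming the "share at least one masked bit" interpretation, which is sufficient here because every tunnel in this example uses an affinity equal to its mask; the exact vendor matching rule has additional cases and should be taken from the product documentation:

```python
# Illustrative sketch: a simplified model of how a tunnel's affinity/mask is matched
# against each link's administrative group. Because affinity equals mask in this
# example, a link qualifies if its administrative group shares at least one masked
# bit with the affinity. This is a simplification for illustration only.

# Administrative groups on Core_SPE1's TE links (hexadecimal values taken from the
# configuration in the next section).
link_admin_groups = {
    "Eth-Trunk4 (to Core_SPE2)": 0xC,
    "Eth-Trunk5 (to Core_SPE3)": 0x30,
    "Eth-Trunk17 (to Site1_UPE1)": 0x4,
    "XGigabitEthernet6/0/4 (to Site3_UPE6)": 0x20,
}

def links_matching(affinity: int, mask: int) -> list:
    """Return the links whose administrative group satisfies the affinity/mask."""
    return [name for name, group in link_admin_groups.items()
            if group & mask & affinity]

# Tunnel611 on Core_SPE1: the primary CR-LSP uses affinity 0x4, the hot-standby uses 0x8.
print("Primary CR-LSP candidate links:    ", links_matching(0x4, 0x4))
print("Hot-standby CR-LSP candidate links:", links_matching(0x8, 0x8))
```

With these values, the primary CR-LSP can use the direct link to Site1_UPE1 (Eth-Trunk17), while the hot-standby CR-LSP is restricted to the link toward Core_SPE2 (Eth-Trunk4), which is consistent with the hot-standby path via Core_SPE2 shown in the verification later in this section.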
Configuring MPLS TE Tunnels and Hot Standby
Procedure
- Configure SPE devices.
The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar to that of Core_SPE1.
mpls
 mpls te    //Enable MPLS TE globally.
 mpls rsvp-te    //Enable RSVP-TE.
 mpls te cspf    //Enable the CSPF algorithm.
#
interface Eth-Trunk4
 mpls te    //Enable MPLS TE on the interface.
 mpls te link administrative group c    //Configure an administrative group attribute for selecting the primary and backup paths of a TE tunnel.
 mpls rsvp-te    //Enable RSVP-TE on the interface.
#
interface Eth-Trunk5
 mpls te
 mpls te link administrative group 30
 mpls rsvp-te
#
interface Eth-Trunk17
 mpls te
 mpls te link administrative group 4
 mpls rsvp-te
#
interface XGigabitEthernet6/0/4
 mpls te
 mpls te link administrative group 20
 mpls rsvp-te
#
ospf 1
 opaque-capability enable    //Enable the Opaque LSA capability.
 area 0.0.0.0
  mpls-te enable    //Enable MPLS TE in the current OSPF area.
#
interface Tunnel611    //Specify the tunnel from Core_SPE1 to Site1_UPE1.
 description Core_SPE1 to Site1_UPE1    //Configure the interface description.
 ip address unnumbered interface LoopBack1    //Configure the tunnel interface to borrow the IP address of Loopback 1.
 tunnel-protocol mpls te    //Set the tunneling protocol to MPLS TE.
 destination 172.16.2.51    //Configure the IP address of Site1_UPE1 as the tunnel destination IP address.
 mpls te tunnel-id 71    //Set a tunnel ID, which must be valid and unique on the local device.
 mpls te record-route    //Configure the tunnel to record detailed route information for maintenance.
 mpls te affinity property 4 mask 4    //Configure the affinity attribute of the primary CR-LSP for selecting the optimal forwarding path.
 mpls te affinity property 8 mask 8 secondary    //Configure the affinity attribute of the hot-standby CR-LSP.
 mpls te backup hot-standby    //Set the backup mode of the tunnel to hot standby.
 mpls te commit    //Commit all the MPLS TE configuration of the tunnel for the configuration to take effect.
#
interface Tunnel622
 description Core_SPE1 to Site1_UPE2
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.2.50
 mpls te tunnel-id 82
 mpls te record-route
 mpls te affinity property 8 mask 8
 mpls te affinity property 4 mask 4 secondary
 mpls te backup hot-standby
 mpls te commit
#
interface Tunnel711
 description Core_SPE1 to Site3_UPE6
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.2.86
 mpls te tunnel-id 311
 mpls te record-route
 mpls te affinity property 20 mask 20
 mpls te affinity property 10 mask 10 secondary
 mpls te backup hot-standby
 mpls te commit
#
interface Tunnel721
 description Core_SPE1 to Site3_UPE5
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.2.87
 mpls te tunnel-id 312
 mpls te record-route
 mpls te affinity property 10 mask 10
 mpls te affinity property 20 mask 20 secondary
 mpls te backup hot-standby
 mpls te commit
#
tunnel-policy TSel    //Configure a tunnel policy.
 tunnel select-seq cr-lsp lsp load-balance-number 1    //Configure CR-LSPs to be preferentially selected.
#
tunnel-policy TE
 tunnel select-seq cr-lsp load-balance-number 1
#
- Configure UPE devices.
The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar to that of Site1_UPE1.
mpls
 mpls te    //Enable MPLS TE globally.
 mpls rsvp-te    //Enable RSVP-TE.
 mpls te cspf    //Enable the CSPF algorithm.
#
interface Eth-Trunk7
 mpls te    //Enable MPLS TE on the interface.
 mpls te link administrative group c    //Configure an administrative group attribute for selecting the primary and backup paths of a TE tunnel.
 mpls rsvp-te    //Enable RSVP-TE on the interface.
#
interface Eth-Trunk17
 mpls te
 mpls te link administrative group 4
 mpls rsvp-te
#
ospf 1
 opaque-capability enable    //Enable the Opaque LSA capability.
 area 0.0.0.0
  mpls-te enable    //Enable MPLS TE in the current OSPF area.
#
interface Tunnel611    //Specify the tunnel from Site1_UPE1 to Core_SPE1.
 description Site1_UPE1 to Core_SPE1    //Configure the interface description.
 ip address unnumbered interface LoopBack1    //Configure the tunnel interface to borrow the IP address of Loopback 1.
 tunnel-protocol mpls te    //Set the tunneling protocol to MPLS TE.
 destination 172.16.0.5    //Configure the IP address of Core_SPE1 as the tunnel destination IP address.
 mpls te tunnel-id 71    //Set a tunnel ID, which must be valid and unique on the local device.
 mpls te record-route    //Configure the tunnel to record detailed route information for maintenance.
 mpls te affinity property 4 mask 4    //Configure the affinity attribute of the primary CR-LSP for selecting the optimal forwarding path.
 mpls te affinity property 8 mask 8 secondary    //Configure the affinity attribute of the hot-standby CR-LSP.
 mpls te backup hot-standby    //Set the backup mode of the tunnel to hot standby.
 mpls te commit    //Commit all the MPLS TE configuration of the tunnel for the configuration to take effect.
#
interface Tunnel612
 description Site1_UPE1 to Core_SPE2
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.0.3
 mpls te tunnel-id 72
 mpls te record-route
 mpls te affinity property 4 mask 4
 mpls te affinity property 8 mask 8 secondary
 mpls te backup hot-standby
 mpls te commit
#
tunnel-policy TSel    //Configure a tunnel policy.
 tunnel select-seq cr-lsp lsp load-balance-number 1    //Configure CR-LSPs to be preferentially selected.
#
Verifying the Deployment
Run the display mpls te tunnel-interface Tunnel command to check tunnel interface information on a local node.
The following uses Tunnel 611 from Core_SPE1 to Site1_UPE1 as an example. If both the primary and hot-standby LSPs of Tunnel 611 are in UP state, the primary and hot-standby LSPs have been established successfully.
[Core_SPE1] display mpls te tunnel-interface Tunnel611
    ----------------------------------------------------------------
                    Tunnel611
    ----------------------------------------------------------------
    Tunnel State Desc     : UP
    Active LSP            : Primary LSP
    Session ID            : 71
    Ingress LSR ID        : 172.16.0.5       Egress LSR ID : 172.16.2.51
    Admin State           : UP               Oper State    : UP
    Primary LSP State     : UP
      Main LSP State      : READY            LSP ID : 1
    Hot-Standby LSP State : UP
      Main LSP State      : READY            LSP ID : 32772
Run the display mpls te hot-standby state all command to check the status of all hot-standby tunnels.
The following uses Core_SPE1 as an example. If the switch result of each hot-standby tunnel is Primary LSP, traffic is being forwarded over the primary CR-LSPs.
[Core_SPE1] display mpls te hot-standby state all
---------------------------------------------------------------------
No.  tunnel name      session id      switch result
---------------------------------------------------------------------
1    Tunnel611        71              Primary LSP
2    Tunnel622        82              Primary LSP
3    Tunnel711        311             Primary LSP
4    Tunnel721        312             Primary LSP
Run the ping lsp te tunnel command to check the bidirectional connectivity of the primary and backup TE tunnels of each device.
The following uses Tunnel 611 from Core_SPE1 to Site1_UPE1 as an example. Run the following commands on both ends of the TE tunnel.
[Core_SPE1] ping lsp te Tunnel611
  LSP PING FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel611 : 100  data bytes, press CTRL_C to break
    Reply from 172.16.2.51: bytes=100 Sequence=1 time=5 ms
    Reply from 172.16.2.51: bytes=100 Sequence=2 time=3 ms
    Reply from 172.16.2.51: bytes=100 Sequence=3 time=3 ms
    Reply from 172.16.2.51: bytes=100 Sequence=4 time=2 ms
    Reply from 172.16.2.51: bytes=100 Sequence=5 time=3 ms
  --- FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel611 ping statistics ---
    5 packet(s) transmitted
    5 packet(s) received
    0.00% packet loss
    round-trip min/avg/max = 2/3/5 ms
[Core_SPE1] ping lsp te Tunnel611 hot-standby
  LSP PING FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel611 : 100  data bytes, press CTRL_C to break
    Reply from 172.16.2.51: bytes=100 Sequence=1 time=2 ms
    Reply from 172.16.2.51: bytes=100 Sequence=2 time=2 ms
    Reply from 172.16.2.51: bytes=100 Sequence=3 time=3 ms
    Reply from 172.16.2.51: bytes=100 Sequence=4 time=2 ms
    Reply from 172.16.2.51: bytes=100 Sequence=5 time=3 ms
  --- FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel611 ping statistics ---
    5 packet(s) transmitted
    5 packet(s) received
    0.00% packet loss
    round-trip min/avg/max = 2/2/3 ms
Run the tracert lsp te Tunnel command to detect LSPs.
The following uses Tunnel 611 from Core_SPE1 to Site1_UPE1 as an example. Ensure that the primary and hot-standby tunnel paths are different.
[Core_SPE1] tracert lsp te Tunnel611
  LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel611 , press CTRL_C to break.
  TTL    Replier            Time      Type      Downstream
  0                                   Ingress   172.17.4.11/[1078 ]
  1      172.16.2.51        3 ms      Egress
[Core_SPE1] tracert lsp te Tunnel611 hot-standby
  LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel611 , press CTRL_C to break.
  TTL    Replier            Time      Type      Downstream
  0                                   Ingress   172.17.4.9/[1391 ]
  1      172.17.4.9         3 ms      Transit   172.17.4.13/[1169 ]
  2      172.17.4.13        7 ms      Transit   172.17.4.14/[1109 ]
  3      172.16.2.51        4 ms      Egress
Configuring RSVP GR
Procedure
- Configure SPE devices.
The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar to that of Core_SPE1.
mpls
 mpls rsvp-te hello    //Enable the RSVP Hello extension function globally.
 mpls rsvp-te hello full-gr    //Enable the RSVP GR capability and RSVP GR Helper capability.
#
interface Eth-Trunk4
 mpls rsvp-te hello    //Enable the RSVP Hello extension function on the interface.
#
interface Eth-Trunk5
 mpls rsvp-te hello
#
interface Eth-Trunk17
 mpls rsvp-te hello
#
interface XGigabitEthernet6/0/4
 mpls rsvp-te hello
#
- Configure UPE devices.
The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar to that of Site1_UPE1.
mpls
 mpls rsvp-te hello    //Enable the RSVP Hello extension function globally.
 mpls rsvp-te hello full-gr    //Enable the RSVP GR capability and RSVP GR Helper capability.
#
interface Eth-Trunk7
 mpls rsvp-te hello    //Enable the RSVP Hello extension function on the interface.
#
interface Eth-Trunk17
 mpls rsvp-te hello
#
Configuring BFD for CR-LSPs
Procedure
- Configure SPE devices.
The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar to that of Core_SPE1.
bfd SPE1toUPE1_b bind mpls-te interface Tunnel611 te-lsp backup    //Enable static BFD to detect the hot-standby CR-LSP of Tunnel 611.
 discriminator local 6116    //Specify the local discriminator. The local discriminator on the local end must be the same as the remote discriminator on the remote end.
 discriminator remote 6115    //Specify a remote discriminator.
 detect-multiplier 8    //Specify the local BFD detection multiplier.
 min-tx-interval 3    //Set the minimum interval at which the local device sends BFD packets to 3.3 ms.
 min-rx-interval 3    //Set the minimum interval at which the local device receives BFD packets to 3.3 ms.
 process-pst    //Enable the system to modify the PST when the BFD session status changes, so as to speed up the switchover.
 commit    //Commit the BFD session configuration.
#
bfd SPE1toUPE1_m bind mpls-te interface Tunnel611 te-lsp    //Enable static BFD to detect the primary CR-LSP of Tunnel 611.
 discriminator local 6112
 discriminator remote 6111
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd SPE1toUPE2_b bind mpls-te interface Tunnel622 te-lsp backup    //Enable static BFD to detect the hot-standby CR-LSP of Tunnel 622.
 discriminator local 6226
 discriminator remote 6225
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd SPE1toUPE2_m bind mpls-te interface Tunnel622 te-lsp    //Enable static BFD to detect the primary CR-LSP of Tunnel 622.
 discriminator local 6222
 discriminator remote 6221
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd SPE1toUPE5_b bind mpls-te interface Tunnel721 te-lsp backup    //Enable static BFD to detect the hot-standby CR-LSP of Tunnel 721.
 discriminator local 7216
 discriminator remote 7215
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd SPE1toUPE5_m bind mpls-te interface Tunnel721 te-lsp    //Enable static BFD to detect the primary CR-LSP of Tunnel 721.
 discriminator local 7212
 discriminator remote 7211
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd SPE1toUPE6_b bind mpls-te interface Tunnel711 te-lsp backup    //Enable static BFD to detect the hot-standby CR-LSP of Tunnel 711.
 discriminator local 7116
 discriminator remote 7115
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd SPE1toUPE6_m bind mpls-te interface Tunnel711 te-lsp    //Enable static BFD to detect the primary CR-LSP of Tunnel 711.
 discriminator local 7112
 discriminator remote 7111
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
- Configure UPE devices.
The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar to that of Site1_UPE1.
bfd UPE1toSPE1_m_b bind mpls-te interface Tunnel611 te-lsp backup    //Enable static BFD to detect the hot-standby CR-LSP of Tunnel 611.
 discriminator local 6115    //Specify the local discriminator. The local discriminator on the local end must be the same as the remote discriminator on the remote end.
 discriminator remote 6116    //Specify a remote discriminator.
 detect-multiplier 8    //Specify the local BFD detection multiplier.
 min-tx-interval 3    //Set the minimum interval at which the local device sends BFD packets to 3.3 ms.
 min-rx-interval 3    //Set the minimum interval at which the local device receives BFD packets to 3.3 ms.
 process-pst    //Enable the system to modify the PST when the BFD session status changes, so as to speed up the switchover.
 commit    //Commit the BFD session configuration.
#
bfd UPE1toSPE1_m bind mpls-te interface Tunnel611 te-lsp    //Enable static BFD to detect the primary CR-LSP of Tunnel 611.
 discriminator local 6111
 discriminator remote 6112
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE1toSPE2_b bind mpls-te interface Tunnel612 te-lsp backup    //Enable static BFD to detect the hot-standby CR-LSP of Tunnel 612.
 discriminator local 6125
 discriminator remote 6126
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE1toSPE2_m bind mpls-te interface Tunnel612 te-lsp    //Enable static BFD to detect the primary CR-LSP of Tunnel 612.
 discriminator local 6121
 discriminator remote 6122
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
Verifying the Deployment
Run the display bfd session all for-te command to check the BFD session status.
The following uses the command output of Core_SPE1 as an example. If the BFD sessions that monitor tunnels of the S_TE_LSP type are in Up state, BFD sessions have been established successfully.
[Core_SPE1] display bfd session all for-te
--------------------------------------------------------------------------------
Local  Remote  PeerIpAddr      State  Type       InterfaceName
--------------------------------------------------------------------------------
7112   7111    172.16.2.86     Up     S_TE_LSP   Tunnel711
7212   7211    172.16.2.87     Up     S_TE_LSP   Tunnel721
7216   7215    172.16.2.87     Up     S_TE_LSP   Tunnel721
7116   7115    172.16.2.86     Up     S_TE_LSP   Tunnel711
6226   6225    172.16.2.50     Up     S_TE_LSP   Tunnel622
6116   6115    172.16.2.51     Up     S_TE_LSP   Tunnel611
6112   6111    172.16.2.51     Up     S_TE_LSP   Tunnel611
6222   6221    172.16.2.50     Up     S_TE_LSP   Tunnel622
--------------------------------------------------------------------------------
     Total UP/DOWN Session Number : 8/0
Deploying L3VPN Services and Protection (HoVPN)
Deployment Roadmap
On a subway bearer network, IP tunnels between nodes need to be established to transmit L3VPN services. For example, establish a hierarchical L3VPN tunnel from Site1_UPE1 to Site2_UPE3 to transmit IP data services between Site1 and Site2, as shown in Figure 2-103.
The deployment roadmap is as follows:
Deploy MP-BGP.
- Establish Multiprotocol Interior Border Gateway Protocol (MP-IBGP) peer relationships between UPE and SPE devices, and between SPE devices.
- Plan route targets (RTs) so that traffic from UPE devices to SPE devices follows default routes and traffic from SPE devices to UPE devices follows specific routes.
- Configure a routing policy to ensure that traffic from a specific UPE device to other sites is preferentially forwarded by the SPE device directly connected to the UPE device.
- Configure a routing policy to ensure that traffic from a specific SPE device to other sites is preferentially forwarded by the UPE device directly connected to the SPE device.
- Configure a route filtering policy to prevent a specific SPE device at a site from advertising ARP Vlink direct routes to UPE devices at other sites.
- Configure a route filtering policy to prevent a specific SPE device from receiving routes of sites directly connected to this SPE device from other SPE devices. If an SPE device receives such routes from other SPE devices, routing loops may occur. For example, prevent Core_SPE2 from receiving any routes of Site1 from Core_SPE1 or any routes of Site2 from Core_SPE3.
Deploy VPN services.
- Deploy VPN instances on UPE devices and SPE devices, and bind interfaces to the VPN instances on UPE devices but not on SPE devices.
- Preferentially use TE tunnels to transmit VPN services on UPE devices. In hybrid FRR mode, LSPs can be used to transmit VPN services.
- Configure a tunnel selector on each SPE device so that the SPE device selects any tunnel when the next-hop address prefix of a VPNv4 route is the IP address prefix of another SPE device, and selects a TE tunnel in other scenarios.
- Deploy VRRP on two UPE devices at a site, and configure the UPE devices to advertise ARP Vlink direct routes to their connected SPE devices so that the SPE devices select the optimal route to send packets to CE devices.
Deploy reliability protection.
- Deploy VRRP on two UPE devices at a site to implement gateway backup and ensure reliability of uplink traffic on CE devices. Configure backup devices to forward service traffic, minimizing the impact of VRRP switchovers on services.
- Deploy VPN FRR on UPE devices. If the TE tunnel between a UPE device and an SPE device is faulty, traffic is automatically switched to the TE tunnel between the UPE device and another SPE device at the same site, minimizing the impact on VPN services.
- Deploy VPN FRR on an SPE device. If the SPE device is faulty, VPN services are switched to another SPE device, implementing a fast E2E switchover of VPN services.
- Deploy VPN FRR on an SPE device. If the TE tunnel between an SPE device and a UPE device is faulty, traffic is automatically switched to the TE tunnel between the SPE device and another UPE device at the same site, minimizing the impact on VPN services.
- Deploy IP + VPN hybrid FRR on UPE devices. If the interface of a UPE device detects a fault on the link between the UPE device and its connected CE device, the UPE device quickly switches traffic to its remote UPE device, which then forwards the traffic to the CE device.
- Deploy VPN GR on all UPE devices and SPE devices to ensure uninterrupted VPN traffic forwarding during an active/standby switchover on the device that is transmitting VPN services.
Data Plan
The data provided in this section is used as an example, which may vary depending on the network scale and topology.
| NE Role | Value | Remarks |
|---|---|---|
| Site1_UPE1 | interface XGigabitEthernet1/0/4.200: 172.18.200.66/26 | - |
| Site1_UPE2 | interface XGigabitEthernet1/0/4.200: 172.18.200.67/26 | - |
| Site2_UPE3 | interface XGigabitEthernet0/0/2.150: 172.18.150.2/26 | - |
| Site2_UPE4 | interface XGigabitEthernet0/0/2.150: 172.18.150.3/26 | - |
| Site3_UPE5 | interface XGigabitEthernet0/0/2.100: 172.18.100.2/26 | - |
| Site3_UPE6 | interface XGigabitEthernet0/0/2.100: 172.18.100.3/26 | - |
| Parameter | Value | Remarks |
|---|---|---|
| VPN instance name | vpna | - |
| Route distinguisher (RD) | UPE: 1:1; Core_SPE1: 5:1; Core_SPE2: 3:1; Core_SPE3: 4:1 | In this solution, it is recommended that the same RD value be set on UPE and SPE devices. If different RD values are set, to make VPN FRR take effect, run the vpn-route cross multipath command to add multiple VPNv4 routes to a VPN instance whose RD value differs from these routes' RD values. |
| RT | 0:1 | Plan the same RT on the entire network. |
| NE Role | BGP Process ID | Router ID | Peer Group | policy vpn-target | Tunnel Selector | Priority of Peer Routes |
|---|---|---|---|---|---|---|
| Core_SPE1 | 65000 | 172.16.0.5 | devCore: 172.16.0.3, 172.16.0.4; devHost: 172.16.2.50, 172.16.2.51, 172.16.2.86, 172.16.2.87 | Enable | Deploy | - |
| Core_SPE2 | 65000 | 172.16.0.3 | devCore: 172.16.0.4, 172.16.0.5; devHost: 172.16.2.50, 172.16.2.51, 172.16.2.75, 172.16.2.76 | Enable | Deploy | - |
| Core_SPE3 | 65000 | 172.16.0.4 | devCore: 172.16.0.3, 172.16.0.5; devHost: 172.16.2.75, 172.16.2.76, 172.16.2.86, 172.16.2.87 | Enable | Deploy | - |
| Site1_UPE1 | 65000 | 172.16.2.51 | devCore: 172.16.0.3, 172.16.0.5; devHost: 172.16.2.50 | Enable | - | Increase the route priority of Core_SPE1 so that UPE devices always prefer the routes advertised by Core_SPE1. |
| Site1_UPE2 | 65000 | 172.16.2.50 | devCore: 172.16.0.3, 172.16.0.5; devHost: 172.16.2.51 | Enable | - | Increase the route priority of Core_SPE2 so that UPE devices always prefer the routes advertised by Core_SPE2. |
| Site2_UPE3 | 65000 | 172.16.2.75 | devCore: 172.16.0.3, 172.16.0.4; devHost: 172.16.2.76 | Enable | - | Increase the route priority of Core_SPE2 so that UPE devices always prefer the routes advertised by Core_SPE2. |
| Site2_UPE4 | 65000 | 172.16.2.76 | devCore: 172.16.0.3, 172.16.0.4; devHost: 172.16.2.75 | Enable | - | Increase the route priority of Core_SPE3 so that UPE devices always prefer the routes advertised by Core_SPE3. |
| Site3_UPE5 | 65000 | 172.16.2.87 | devCore: 172.16.0.4, 172.16.0.5; devHost: 172.16.2.86 | Enable | - | Increase the route priority of Core_SPE3 so that UPE devices always prefer the routes advertised by Core_SPE3. |
| Site3_UPE6 | 65000 | 172.16.2.86 | devCore: 172.16.0.4, 172.16.0.5; devHost: 172.16.2.87 | Enable | - | Increase the route priority of Core_SPE1 so that UPE devices always prefer the routes advertised by Core_SPE1. |
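The route-policies configured in the next procedure implement this "prefer the directly connected SPE" behavior by tagging site routes with communities and mapping them to BGP preferred values (300 for the community of the directly connected site, 200 for other sites). The following Python sketch is purely illustrative (it is not device code; the community strings and values are taken from this example) and shows how the higher preferred value decides the path, since the preferred value is compared before other BGP route attributes:

```python
# Illustrative sketch only: shows how the preferred values set by route-policy
# p_iBGP_RR_in (300 for routes carrying the community of the directly connected
# site, 200 for routes from other sites) drive path selection on an SPE.

candidate_paths = [
    {"community": "100:100", "learned_from": "directly connected UPE", "preferred_value": 300},
    {"community": "200:200", "learned_from": "remote UPE via another SPE", "preferred_value": 200},
]

# The path with the highest preferred value wins, regardless of AS path or IGP cost.
best_path = max(candidate_paths, key=lambda path: path["preferred_value"])
print("Selected path:", best_path["learned_from"])   # directly connected UPE
```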
Configuring MP-BGP
Procedure
- Configure SPE devices.
The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar to that of Core_SPE1.
tunnel-selector TSel permit node 9     //Configure a tunnel selector to enable Core_SPE1 to select any tunnel for route recursion when the next-hop address prefix of a VPNv4 route is the IP address prefix of another SPE.
 if-match ip next-hop ip-prefix core_nhp
#
tunnel-selector TSel permit node 10     //Configure a tunnel selector to allow the routes received from an IBGP peer to recurse to a TE tunnel if the routes need to be forwarded to another IBGP peer and the next hops of the routes need to be changed to the local IP address.
 apply tunnel-policy TE
#
bgp 65000
 group devCore internal     //Create an IBGP peer group.
 peer devCore connect-interface LoopBack1     //Specify loopback 1 and its address as the source interface and address of BGP messages.
 peer 172.16.0.3 as-number 65000     //Establish a peer relationship between SPE devices.
 peer 172.16.0.3 group devCore     //Add the peer to the peer group devCore.
 peer 172.16.0.4 as-number 65000
 peer 172.16.0.4 group devCore
 group devHost internal
 peer devHost connect-interface LoopBack1
 peer 172.16.2.50 as-number 65000
 peer 172.16.2.50 group devHost
 peer 172.16.2.51 as-number 65000
 peer 172.16.2.51 group devHost
 peer 172.16.2.86 as-number 65000
 peer 172.16.2.86 group devHost
 peer 172.16.2.87 as-number 65000
 peer 172.16.2.87 group devHost
 #
 ipv4-family unicast
  undo synchronization
  undo peer devCore enable
  undo peer devHost enable
  undo peer 172.16.2.50 enable
  undo peer 172.16.2.51 enable
  undo peer 172.16.0.3 enable
  undo peer 172.16.0.4 enable
  undo peer 172.16.2.86 enable
  undo peer 172.16.2.87 enable
 #
 ipv4-family vpnv4
  policy vpn-target
  tunnel-selector TSel     //Configure a tunnel selector to allow BGP VPNv4 routes sent to UPE devices to recurse to TE tunnels and BGP VPNv4 routes sent to other SPE devices to recurse to LSPs. This is because an SPE device advertises default routes to UPE devices, forwards routes of UPE devices to other SPE devices, and changes the next hops of the UPE devices' routes to itself.
  peer devCore enable
  peer devCore route-policy core-import import     //Configure Core_SPE1 to filter out all routes of sites connected to itself when it receives routes from other SPE devices.
  peer devCore advertise-community
  peer 172.16.0.3 enable
  peer 172.16.0.3 group devCore
  peer 172.16.0.4 enable
  peer 172.16.0.4 group devCore
  peer devHost enable
  peer devHost route-policy p_iBGP_RR_in import     //Configure Core_SPE1 to filter host routes when receiving routes from UPE devices, set the preferred value of the routes received from its directly connected UPE devices to 300, and set the preferred value of the routes received from other UPE devices to 200.
  peer devHost advertise-community     //Advertise community attributes to the peer group.
  peer devHost upe     //Configure the peers in the devHost group as UPE devices.
  peer devHost default-originate vpn-instance vpna     //Configure Core_SPE1 to send the default routes of the VPN instance vpna to the UPE devices in devHost.
  peer 172.16.2.50 enable
  peer 172.16.2.50 group devHost
  peer 172.16.2.51 enable
  peer 172.16.2.51 group devHost
  peer 172.16.2.86 enable
  peer 172.16.2.86 group devHost
  peer 172.16.2.87 enable
  peer 172.16.2.87 group devHost
#
route-policy p_iBGP_RR_in deny node 5     //Filter out host routes of all sites.
 if-match ip-prefix deny_host
 if-match community-filter all_site
#
route-policy p_iBGP_RR_in permit node 11     //Set the preferred value of the routes received from directly connected UPE devices to 300.
 if-match community-filter site1
 apply preferred-value 300
#
route-policy p_iBGP_RR_in permit node 12     //Set the preferred value of the routes received from indirectly connected UPE devices to 200.
 if-match community-filter site2
 apply preferred-value 200
#
route-policy p_iBGP_RR_in permit node 13     //Set the preferred value of the routes received from indirectly connected UPE devices to 200.
 if-match community-filter site3
 apply preferred-value 200
#
route-policy p_iBGP_RR_in permit node 20     //Permit all the other routes.
#
route-policy core-import deny node 5     //Deny all routes of sites directly connected to Core_SPE1.
 if-match community-filter site12
#
route-policy core-import deny node 6     //Deny all routes of sites directly connected to Core_SPE1.
 if-match community-filter site13
#
route-policy core-import permit node 10     //Permit all the other routes.
#
ip ip-prefix deny_host index 10 permit 0.0.0.0 0 greater-equal 32 less-equal 32     //Permit all 32-bit host routes and deny all the other routes.
ip ip-prefix core_nhp index 10 permit 172.16.0.3 32
ip ip-prefix core_nhp index 20 permit 172.16.0.4 32     //Permit routes to 172.16.0.3/32 and 172.16.0.4/32 and deny all the other routes.
#
ip community-filter basic site1 permit 100:100     //Create a community attribute filter site1 and set the community attribute to 100:100.
ip community-filter basic site2 permit 200:200
ip community-filter basic site3 permit 300:300
ip community-filter basic all_site permit 5720:5720
ip community-filter basic site12 permit 12:12
ip community-filter basic site13 permit 13:13
#
- Configure UPE devices.
The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar to that of Site1_UPE1.
bgp 65000
 group devCore internal
 peer devCore connect-interface LoopBack1
 peer 172.16.0.3 as-number 65000
 peer 172.16.0.3 group devCore
 peer 172.16.0.5 as-number 65000
 peer 172.16.0.5 group devCore
 group devHost internal
 peer devHost connect-interface LoopBack1
 peer 172.16.2.50 as-number 65000
 peer 172.16.2.50 group devHost
 #
 ipv4-family unicast
  undo synchronization
  undo peer devCore enable
  undo peer devHost enable
  undo peer 172.16.2.50 enable
  undo peer 172.16.0.3 enable
  undo peer 172.16.0.5 enable
 #
 ipv4-family vpnv4
  policy vpn-target
  peer devCore enable
  peer devCore route-policy p_iBGP_host_ex export     //Configure the community attribute of routes advertised by Site1_UPE1 to SPE devices.
  peer devCore advertise-community
  peer 172.16.0.3 enable
  peer 172.16.0.3 group devCore
  peer 172.16.0.3 preferred-value 200     //Set the preferred value of the routes received from Core_SPE2 to 200.
  peer 172.16.0.5 enable
  peer 172.16.0.5 group devCore
  peer 172.16.0.5 preferred-value 300     //Set the preferred value of the routes received from Core_SPE1 to 300 so that Site1_UPE1 always selects routes received from Core_SPE1.
  peer devHost enable
  peer devHost advertise-community
  peer 172.16.2.50 enable
  peer 172.16.2.50 group devHost
#
route-policy p_iBGP_host_ex permit node 0     //Add the community attribute to routes.
 apply community 100:100 5720:5720 12:12
#
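The community values are the contract between the UPE and SPE route policies in this example: the UPE tags its exported routes with 100:100 (its site), 5720:5720 (all sites), and 12:12 (its SPE pair), and the SPE matches the same values with the community filters referenced by p_iBGP_RR_in and core-import. The following excerpt simply places the two sides next to each other; both fragments are taken from the configurations above:

//On Site1_UPE1: tag routes exported to the SPE devices.
route-policy p_iBGP_host_ex permit node 0
 apply community 100:100 5720:5720 12:12
#
//On Core_SPE1: match the same values to set route preferences and filter routes.
ip community-filter basic site1 permit 100:100
ip community-filter basic all_site permit 5720:5720
ip community-filter basic site12 permit 12:12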
Verifying the Deployment
Run the display bgp vpnv4 all peer command to check the BGP VPNv4 peer relationship.
The following uses the command output of Core_SPE1 as an example. If the State field displays Established, BGP peer relationships have been established successfully.
[Core_SPE1]display bgp vpnv4 all peer

 BGP local router ID : 172.16.0.5
 Local AS number : 65000
 Total number of peers : 4        Peers in established state : 4

  Peer          V    AS  MsgRcvd  MsgSent  OutQ  Up/Down        State  PrefRcv
  172.16.2.51   4 65000     2102     1859     0  20:55:17  Established     550
  172.16.2.86   4 65000     3673     2989     0  0026h03m  Established     550
  172.16.0.3    4 65000     1659     1462     0  20:57:05  Established     200
  172.16.0.4    4 65000     3421     2494     0  0026h03m  Established     200
Configuring L3VPN
Context
VPN instances need to be configured to advertise VPNv4 routes and forward data to achieve communication over an L3VPN.
Procedure
- Configure SPE devices.
The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar to that of Core_SPE1.
ip vpn-instance vpna     //Create a VPN instance vpna.
 ipv4-family
  route-distinguisher 5:1     //Configure an RD.
  tnl-policy TSel     //Configure a TE tunnel for the VPN instance.
  vpn-target 0:1 export-extcommunity     //Configure the VPN target extended community attribute.
  vpn-target 0:1 import-extcommunity
#
bgp 65000
 #
 ipv4-family vpnv4
  nexthop recursive-lookup delay 10     //Set the delay in responding to next-hop changes to 10s.
  route-select delay 120     //Set the route selection delay to 120s to prevent traffic interruptions caused by fast route switchback.
 #
 ipv4-family vpn-instance vpna
  default-route imported     //Import default routes to the VPN instance vpna.
  nexthop recursive-lookup route-policy delay_policy     //Configure BGP next-hop recursion based on the routing policy delay_policy.
  nexthop recursive-lookup delay 10
  route-select delay 120
#
route-policy delay_policy permit node 0     //Permit routes of all sites.
 if-match community-filter all_site
#
- Configure UPE devices.
The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar to that of Site1_UPE1.
arp vlink-direct-route advertise     //Configure Site1_UPE1 to advertise IPv4 ARP Vlink direct routes.
#
ip vpn-instance vpna
 ipv4-family
  route-distinguisher 1:1
  tnl-policy TSel
  arp vlink-direct-route advertise
  vpn-target 0:1 export-extcommunity
  vpn-target 0:1 import-extcommunity
#
interface XGigabitEthernet1/0/4
 port link-type trunk
 undo port trunk allow-pass vlan 1
#
interface XGigabitEthernet1/0/4.200
 dot1q termination vid 200
 ip binding vpn-instance vpna     //Bind the VPN instance vpna to the specific service interface.
 arp direct-route enable     //Configure the ARP module to report ARP Vlink direct routes to the RM module.
 ip address 172.18.200.66 255.255.255.192
 arp broadcast enable     //Enable ARP broadcast on a VLAN tag termination sub-interface.
#
bgp 65000
 #
 ipv4-family vpnv4
  route-select delay 120
 #
 ipv4-family vpn-instance vpna
  default-route imported
  import-route direct route-policy p_iBGP_RR_ex     //Import direct routes to the VPN instance vpna and add the community attribute.
  route-select delay 120
#
route-policy p_iBGP_RR_ex permit node 0     //Add the community attribute to routes.
 apply community 100:100 5720:5720 12:12
#
arp expire-time 62640     //Set the aging time of dynamic ARP entries.
arp static 172.18.200.68 00e0-fc00-0003 vid 200 interface XGigabitEthernet1/0/4.200     //Configure a static ARP entry.
#
Since V200R010C00, dynamic ARP is supported to meet reliability requirements in this scenario. Perform the following operations to implement dynamic ARP:
- Run the arp learning passive enable command in the system view to enable passive ARP.
- Run the arp auto-scan enable command in the sub-interface view to enable ARP automatic scanning.
After the preceding configuration is complete, you do not need to configure the aging time of dynamic ARP entries or configure static ARP entries.
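A minimal sketch of the dynamic ARP alternative, assuming the same service sub-interface as Site1_UPE1 in this example (XGigabitEthernet1/0/4.200):

arp learning passive enable     //System view: enable passive ARP.
#
interface XGigabitEthernet1/0/4.200
 arp auto-scan enable     //Sub-interface view: enable ARP automatic scanning.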
Configuring Reliability Protection
Deployment Roadmap
The deployment roadmap is as follows:
Deploy VRRP on two UPE devices at a site to ensure reliability for uplink traffic of CE devices. The following uses Site1 as an example, as shown in Figure 2-104:
Configure Site1_UPE1 as the master device and Site1_UPE2 as the backup device in a VRRP group. If Site1_UPE1 fails, the uplink traffic of CE1 can be rapidly switched to Site1_UPE2.
Configure BFD for VRRP so that faults can be detected quickly and the VRRP backup device can be instructed to become the new master device. In addition, the hardware directly sends gratuitous ARP packets to instruct access devices to forward traffic to the new master device.
Configure the backup device to forward service traffic. A device in the backup state then forwards any service traffic it receives, which prevents traffic loss and shortens the service interruption time if an aggregation device is faulty.
If the number of VRRP groups exceeds the default maximum value, run the set vrrp max-group-number max-group-number command on a UPE device to set the maximum number of supported VRRP groups.
Deploy VPN FRR on a UPE device. If the TE tunnel between the UPE device and an SPE device is faulty, traffic is automatically switched to the TE tunnel between the UPE device and another SPE device at the same site. The following uses Site1_UPE1 as an example, as shown in Figure 2-105.
Site1_UPE1 has two TE tunnels to Core_SPE1 and Core_SPE2 respectively. Deploying VPN FRR on Site1_UPE1 ensures that traffic is rapidly switched to Core_SPE2 if Core_SPE1 is faulty.
Deploy VPN FRR on an SPE device. If the SPE device is faulty, VPN services are switched to another SPE device, implementing a fast E2E switchover of VPN services. The following uses Core_SPE1 as an example, as shown in Figure 2-106.
Core_SPE1 has two LSPs to Core_SPE2 and Core_SPE3 respectively. Configuring VPN FRR on Core_SPE1 ensures that traffic is rapidly switched to Core_SPE3 if Core_SPE2 is faulty.
Deploy VPN FRR on an SPE device. If the TE tunnel between the SPE device and a UPE device is faulty, traffic is automatically switched to the TE tunnel between the SPE device and another UPE device at the same site. The following uses Core_SPE2 as an example, as shown in Figure 2-107:
Core_SPE2 has two TE tunnels to Site2_UPE3 and Site2_UPE4 respectively. Deploying VPN FRR on Core_SPE2 ensures that traffic is rapidly switched to Site2_UPE4 if Site2_UPE3 is faulty.
Deploy IP + VPN hybrid FRR on UPE devices. If a UPE device detects a fault on the link to its connected CE device, it quickly switches traffic to the peer UPE device at the same site, which then forwards the traffic to the CE device. The following uses Site2 as an example, as shown in Figure 2-108:
If the link from Site2_UPE3 to CE2 is faulty, traffic is forwarded to Site2_UPE4 through an LSP and then to CE2 using a private IP address, improving network reliability.
Deploy VPN GR on all UPE devices and SPE devices to ensure uninterrupted VPN traffic forwarding during an active/standby switchover on the device that is transmitting VPN services.
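The FRR, VRRP, and GR settings in this roadmap are configured in the Procedure below. The only command mentioned above that does not appear in the configuration examples is the one that raises the VRRP group limit; a minimal sketch, assuming 64 VRRP groups are required (the value is illustrative only):

set vrrp max-group-number 64     //Set the maximum number of VRRP groups supported on the UPE device; 64 is only an example value.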
Procedure
- Configure SPE devices.
The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar to that of Core_SPE1.
bgp 65000
 graceful-restart     //Enable BGP GR.
 #
 ipv4-family vpnv4
  auto-frr     //Enable VPNv4 FRR.
  bestroute nexthop-resolved tunnel     //Configure the system to select a VPNv4 route only when the next hop recurses to a tunnel, preventing packet loss during traffic switchback.
 #
 ipv4-family vpn-instance vpna
  auto-frr     //Enable VPN auto FRR.
  vpn-route cross multipath     //Allow VPNv4 routes whose RDs differ from the RD of the local VPN instance to be added to the instance so that VPN FRR takes effect.
#
- Configure UPE devices.
The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar to that of Site1_UPE1.
ip vpn-instance vpna
 ipv4-family
  ip frr route-policy mixfrr     //Enable IP FRR.
#
interface XGigabitEthernet1/0/4.200
 vrrp vrid 1 virtual-ip 172.18.200.65     //Configure VRRP.
 vrrp vrid 1 preempt-mode timer delay 250     //Set the preemption delay of devices in a VRRP group.
 vrrp vrid 1 track bfd-session 2200 peer     //Enable BFD for VRRP to implement master/backup switchovers.
 vrrp vrid 1 backup-forward     //Enable the backup device to forward service traffic.
 vrrp track bfd gratuitous-arp send enable     //Enable BFD for VRRP to quickly send gratuitous ARP packets during master/backup switchovers.
#
bfd vrrp-1 bind peer-ip 172.18.200.67 vpn-instance vpna interface XGigabitEthernet1/0/4.200 source-ip 172.18.200.66     //Configure static BFD for VRRP.
 discriminator local 2200     //Specify the local discriminator. The local discriminator on the local end must be the same as the remote discriminator on the remote end.
 discriminator remote 1200     //Specify a remote discriminator.
 detect-multiplier 8     //Specify the local BFD detection multiplier.
 min-tx-interval 3     //Set the minimum interval at which the local device sends BFD packets to 3.3 ms.
 min-rx-interval 3     //Set the minimum interval at which the local device receives BFD packets to 3.3 ms.
 commit     //Commit the BFD session configuration.
#
bgp 65000
 graceful-restart
 #
 ipv4-family vpn-instance vpna
  auto-frr
#
route-policy mixfrr permit node 0     //Set the backup next-hop address to the IP address of loopback 1 on another UPE device at the same site.
 apply backup-nexthop 172.16.2.50
#
Verifying the Deployment
Run the display ip routing-table vpn-instance command on SPE devices to check the VPN FRR status from SPE devices to UPE devices.
The following uses the command output of Core_SPE2 as an example. The BkNextHop, BkLabel, and BkPETunnelID fields indicate the backup next hop, backup label, and backup tunnel ID. The command output shows that the VPN FRR entry from Core_SPE2 to a UPE device has been generated.
[Core_SPE2]display ip routing-table vpn-instance vpna 172.18.150.4 verbose Route Flags: R - relay, D - download to fib, T - to vpn-instance ------------------------------------------------------------------------------ Routing Table : 1 Summary Count : 1 Destination: 172.18.150.0/26 Protocol: IBGP Process ID: 0 Preference: 255 Cost: 0 NextHop: 172.16.2.75 Neighbour: 172.16.2.75 State: Active Adv Relied Age: 21h55m50s Tag: 0 Priority: low Label: 1025 QoSInfo: 0x0 IndirectID: 0x185 RelayNextHop: 0.0.0.0 Interface: Tunnel111 TunnelID: 0x2 Flags: RD BkNextHop: 172.16.2.76 BkInterface: Tunnel121 BkLabel: 1024 SecTunnelID: 0x0 BkPETunnelID: 0x3 BkPESecTunnelID: 0x0 BkIndirectID: 0xd
Run the display ip routing-table vpn-instance command on UPE devices to check the hybrid FRR status.
The following uses the command output of Site2_UPE3 as an example. The BkNextHop, BkLabel, and BkPETunnelID fields indicate the backup next hop, backup label, and backup tunnel ID. The command output shows that a hybrid FRR entry has been generated: the primary route points to the local sub-interface, and the backup route points to the UPE device with the IP address 172.16.2.76 at the same site.
[Site2_UPE3]display ip routing-table vpn-instance vpna 172.18.150.4 verbose Route Flags: R - relay, D - download to fib, T - to vpn-instance ------------------------------------------------------------------------------ Routing Table : 1 Summary Count : 2 Destination: 172.18.150.4/32 Protocol: Direct Process ID: 0 Preference: 0 Cost: 0 NextHop: 172.18.150.4 Neighbour: 0.0.0.0 State: Active Adv Age: 1d02h36m21s Tag: 0 Priority: high Label: NULL QoSInfo: 0x0 IndirectID: 0x0 RelayNextHop: 0.0.0.0 Interface: XGigabitEthernet0/0/2.150 TunnelID: 0x0 Flags: D BkNextHop: 172.16.2.76 BkInterface: XGigabitEthernet0/0/4 BkLabel: 1024 SecTunnelID: 0x0 BkPETunnelID: 0x4800001b BkPESecTunnelID: 0x0 BkIndirectID: 0x0 Destination: 172.18.150.4/32 Protocol: IBGP Process ID: 0 Preference: 255 Cost: 0 NextHop: 172.16.2.76 Neighbour: 172.16.2.76 State: Inactive Adv Relied Age: 1d02h36m21s Tag: 0 Priority: low Label: 1024 QoSInfo: 0x0 IndirectID: 0xcd RelayNextHop: 172.16.8.181 Interface: XGigabitEthernet0/0/4 TunnelID: 0x4800001b Flags: R
Run the display vrrp interface command to check the VRRP status.
The following uses the command output of Site2_UPE3 as an example. The State, Backup-forward, and Track BFD fields indicate that the VRRP status of Site2_UPE3 is Master, that the backup device has been configured to forward service traffic, and that BFD for VRRP has been configured.
[Site2_UPE3]display vrrp interface XGigabitEthernet0/0/2.150
  XGigabitEthernet0/0/2.150 | Virtual Router 1
    State : Master
    Virtual IP : 172.18.150.1
    Master IP : 172.18.150.2
    PriorityRun : 100
    PriorityConfig : 100
    MasterPriority : 100
    Preempt : YES   Delay Time : 250 s
    TimerRun : 1 s
    TimerConfig : 1 s
    Auth type : NONE
    Virtual MAC : 0000-5e00-0101
    Check TTL : YES
    Config type : normal-vrrp
    Backup-forward : enabled
    Track BFD : 1150   type: peer
    BFD-session state : UP
    Create time : 2016-05-21 11:02:27
    Last change time : 2016-05-21 11:02:55
Configuration Files
Core_SPE1 configuration file
sysname Core_SPE1 # router id 172.16.0.5 # stp disable # ip vpn-instance vpna ipv4-family route-distinguisher 5:1 tnl-policy TSel vpn-target 0:1 export-extcommunity vpn-target 0:1 import-extcommunity # tunnel-selector TSel permit node 9 if-match ip next-hop ip-prefix core_nhp # tunnel-selector TSel permit node 10 apply tunnel-policy TE # bfd # mpls lsr-id 172.16.0.5 mpls mpls te label advertise non-null mpls rsvp-te mpls rsvp-te hello mpls rsvp-te hello full-gr mpls te cspf # mpls ldp graceful-restart # load-balance-profile CUSTOM ipv6 field l4-sport l4-dport ipv4 field l4-sport l4-dport # interface Eth-Trunk4 undo portswitch description Core_SPE1 to Core_SPE2 ip address 172.17.4.8 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group c mpls rsvp-te mpls rsvp-te hello mpls ldp mode lacp least active-linknumber 4 load-balance enhanced profile CUSTOM # interface Eth-Trunk5 undo portswitch description Core_SPE1 to Core_SPE3 ip address 172.17.4.2 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group 30 mpls rsvp-te mpls rsvp-te hello mpls ldp mode lacp least active-linknumber 4 load-balance enhanced profile CUSTOM # interface Eth-Trunk17 undo portswitch description Core_SPE1 to Site1_UPE1 ip address 172.17.4.10 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group 4 mpls rsvp-te mpls rsvp-te hello mpls ldp mode lacp least active-linknumber 4 load-balance enhanced profile CUSTOM # interface XGigabitEthernet1/0/0 eth-trunk 5 # interface XGigabitEthernet1/0/1 eth-trunk 5 # interface XGigabitEthernet1/0/2 eth-trunk 5 # interface XGigabitEthernet1/0/3 eth-trunk 5 # interface XGigabitEthernet5/0/4 eth-trunk 4 # interface XGigabitEthernet5/0/5 eth-trunk 4 # interface XGigabitEthernet5/0/6 eth-trunk 4 # interface XGigabitEthernet5/0/7 eth-trunk 4 # interface XGigabitEthernet6/0/0 eth-trunk 17 # interface XGigabitEthernet6/0/1 eth-trunk 17 # interface XGigabitEthernet6/0/2 eth-trunk 17 # interface XGigabitEthernet6/0/3 eth-trunk 17 # interface XGigabitEthernet6/0/4 undo portswitch description Core_SPE1 to Site3_UPE6 ip address 172.17.10.2 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group 20 mpls rsvp-te mpls rsvp-te hello mpls ldp # interface LoopBack1 description ** GRT Management Loopback ** ip address 172.16.0.5 255.255.255.255 # interface Tunnel611 description Core_SPE1 to Site1_UPE1 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.2.51 mpls te tunnel-id 71 mpls te record-route mpls te affinity property 4 mask 4 mpls te affinity property 8 mask 8 secondary mpls te backup hot-standby mpls te commit # interface Tunnel622 description Core_SPE1 to Site1_UPE2 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.2.50 mpls te tunnel-id 82 mpls te record-route mpls te affinity property 8 mask 8 mpls te affinity property 4 mask 4 secondary mpls te backup hot-standby mpls te commit # interface Tunnel711 description Core_SPE1 to Site3_UPE6 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.2.86 mpls te tunnel-id 311 mpls te record-route mpls te affinity property 20 mask 20 mpls te affinity property 10 mask 10 secondary mpls te backup hot-standby mpls te commit # interface Tunnel721 description 
Core_SPE1 to Site3_UPE5 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.2.87 mpls te tunnel-id 312 mpls te record-route mpls te affinity property 10 mask 10 mpls te affinity property 20 mask 20 secondary mpls te backup hot-standby mpls te commit # bgp 65000 graceful-restart group devCore internal peer devCore connect-interface LoopBack1 peer 172.16.0.3 as-number 65000 peer 172.16.0.3 group devCore peer 172.16.0.4 as-number 65000 peer 172.16.0.4 group devCore group devHost internal peer devHost connect-interface LoopBack1 peer 172.16.2.50 as-number 65000 peer 172.16.2.50 group devHost peer 172.16.2.51 as-number 65000 peer 172.16.2.51 group devHost peer 172.16.2.86 as-number 65000 peer 172.16.2.86 group devHost peer 172.16.2.87 as-number 65000 peer 172.16.2.87 group devHost # ipv4-family unicast undo synchronization undo peer devCore enable undo peer devHost enable undo peer 172.16.2.50 enable undo peer 172.16.2.51 enable undo peer 172.16.0.3 enable undo peer 172.16.0.4 enable undo peer 172.16.2.86 enable undo peer 172.16.2.87 enable # ipv4-family vpnv4 policy vpn-target auto-frr nexthop recursive-lookup delay 10 tunnel-selector TSel bestroute nexthop-resolved tunnel route-select delay 120 peer devCore enable peer devCore route-policy core-import import peer devCore advertise-community peer 172.16.0.3 enable peer 172.16.0.3 group devCore peer 172.16.0.4 enable peer 172.16.0.4 group devCore peer devHost enable peer devHost route-policy p_iBGP_RR_in import peer devHost advertise-community peer devHost upe peer devHost default-originate vpn-instance vpna peer 172.16.2.50 enable peer 172.16.2.50 group devHost peer 172.16.2.51 enable peer 172.16.2.51 group devHost peer 172.16.2.86 enable peer 172.16.2.86 group devHost peer 172.16.2.87 enable peer 172.16.2.87 group devHost # ipv4-family vpn-instance vpna default-route imported auto-frr nexthop recursive-lookup route-policy delay_policy nexthop recursive-lookup delay 10 vpn-route cross multipath route-select delay 120 # ospf 1 silent-interface all undo silent-interface Eth-Trunk4 undo silent-interface Eth-Trunk5 undo silent-interface Eth-Trunk17 undo silent-interface XGigabitEthernet6/0/4 spf-schedule-interval millisecond 10 lsa-originate-interval 0 lsa-arrival-interval 0 opaque-capability enable graceful-restart period 600 flooding-control area 0.0.0.0 authentication-mode hmac-sha256 1 cipher %^%#NInJJ<oF9VXb:BS~~9+JT'suROXkVHNG@8+*3FyB%^%# network 172.16.0.5 0.0.0.0 network 172.17.4.2 0.0.0.0 network 172.17.4.8 0.0.0.0 network 172.17.4.10 0.0.0.0 network 172.17.10.2 0.0.0.0 mpls-te enable # route-policy delay_policy permit node 0 if-match community-filter all_site # route-policy p_iBGP_RR_in deny node 5 if-match ip-prefix deny_host if-match community-filter all_site # route-policy p_iBGP_RR_in permit node 11 if-match community-filter site1 apply preferred-value 300 # route-policy p_iBGP_RR_in permit node 12 if-match community-filter site2 apply preferred-value 200 # route-policy p_iBGP_RR_in permit node 13 if-match community-filter site3 apply preferred-value 200 # route-policy p_iBGP_RR_in permit node 20 # route-policy core-import deny node 5 if-match community-filter site12 # route-policy core-import deny node 6 if-match community-filter site13 # route-policy core-import permit node 10 # ip ip-prefix deny_host index 10 permit 0.0.0.0 0 greater-equal 32 less-equal 32 ip ip-prefix core_nhp index 10 permit 172.16.0.3 32 ip ip-prefix core_nhp index 20 permit 172.16.0.4 32 # ip community-filter basic site1 permit 
100:100 ip community-filter basic site2 permit 200:200 ip community-filter basic site3 permit 300:300 ip community-filter basic all_site permit 5720:5720 ip community-filter basic site12 permit 12:12 ip community-filter basic site13 permit 13:13 # tunnel-policy TSel tunnel select-seq cr-lsp lsp load-balance-number 1 # tunnel-policy TE tunnel select-seq cr-lsp load-balance-number 1 # bfd SPE1toSPE2 bind ldp-lsp peer-ip 172.16.0.3 nexthop 172.17.4.9 interface Eth-Trunk4 discriminator local 317 discriminator remote 137 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE1toSPE3 bind ldp-lsp peer-ip 172.16.0.4 nexthop 172.17.4.3 interface Eth-Trunk5 discriminator local 32 discriminator remote 23 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE1toUPE1_b bind mpls-te interface Tunnel611 te-lsp backup discriminator local 6116 discriminator remote 6115 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE1toUPE1_m bind mpls-te interface Tunnel611 te-lsp discriminator local 6112 discriminator remote 6111 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE1toUPE2_b bind mpls-te interface Tunnel622 te-lsp backup discriminator local 6226 discriminator remote 6225 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE1toUPE2_m bind mpls-te interface Tunnel622 te-lsp discriminator local 6222 discriminator remote 6221 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE1toUPE5_b bind mpls-te interface Tunnel721 te-lsp backup discriminator local 7216 discriminator remote 7215 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE1toUPE5_m bind mpls-te interface Tunnel721 te-lsp discriminator local 7212 discriminator remote 7211 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE1toUPE6_b bind mpls-te interface Tunnel711 te-lsp backup discriminator local 7116 discriminator remote 7115 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE1toUPE6_m bind mpls-te interface Tunnel711 te-lsp discriminator local 7112 discriminator remote 7111 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # return
Core_SPE2 configuration file
sysname Core_SPE2 # router id 172.16.0.3 # stp disable # ip vpn-instance vpna ipv4-family route-distinguisher 3:1 tnl-policy TSel vpn-target 0:1 export-extcommunity vpn-target 0:1 import-extcommunity # tunnel-selector TSel permit node 9 if-match ip next-hop ip-prefix core_nhp # tunnel-selector TSel permit node 10 apply tunnel-policy TE # bfd # mpls lsr-id 172.16.0.3 mpls mpls te label advertise non-null mpls rsvp-te mpls rsvp-te hello mpls rsvp-te hello full-gr mpls te cspf # mpls ldp graceful-restart # load-balance-profile CUSTOM ipv6 field l4-sport l4-dport ipv4 field l4-sport l4-dport # interface Eth-Trunk2 undo portswitch description Core_SPE2 to Core_SPE3 ip address 172.17.4.0 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group 3 mpls rsvp-te mpls rsvp-te hello mpls ldp mode lacp least active-linknumber 4 load-balance enhanced profile CUSTOM # interface Eth-Trunk4 undo portswitch description Core_SPE2 to Core_SPE1 ip address 172.17.4.9 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group c mpls rsvp-te mpls rsvp-te hello mpls ldp mode lacp least active-linknumber 4 load-balance enhanced profile CUSTOM # interface Eth-Trunk17 undo portswitch description Core_SPE2 to Site1_UPE2 ip address 172.17.4.12 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group 8 mpls rsvp-te mpls rsvp-te hello mpls ldp mode lacp least active-linknumber 4 load-balance enhanced profile CUSTOM # interface XGigabitEthernet3/0/4 eth-trunk 2 # interface XGigabitEthernet3/0/5 eth-trunk 2 # interface XGigabitEthernet3/0/6 eth-trunk 2 # interface XGigabitEthernet3/0/7 eth-trunk 2 # interface XGigabitEthernet5/0/0 eth-trunk 17 # interface XGigabitEthernet5/0/1 eth-trunk 17 # interface XGigabitEthernet5/0/2 eth-trunk 17 # interface XGigabitEthernet5/0/3 eth-trunk 17 # interface XGigabitEthernet5/0/5 undo portswitch description Core_SPE2 to Site2_UPE3 ip address 172.16.8.178 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group 1 mpls rsvp-te mpls rsvp-te hello mpls ldp # interface XGigabitEthernet6/0/4 eth-trunk 4 # interface XGigabitEthernet6/0/5 eth-trunk 4 # interface XGigabitEthernet6/0/6 eth-trunk 4 # interface XGigabitEthernet6/0/7 eth-trunk 4 # interface LoopBack1 description ** GRT Management Loopback ** ip address 172.16.0.3 255.255.255.255 # interface Tunnel111 description Core_SPE2 to Site2_UPE3 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.2.75 mpls te tunnel-id 111 mpls te record-route mpls te affinity property 1 mask 1 mpls te affinity property 2 mask 2 secondary mpls te backup hot-standby mpls te commit # interface Tunnel121 description Core_SPE2 to Site2_UPE4 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.2.76 mpls te tunnel-id 121 mpls te record-route mpls te affinity property 1 mask 1 mpls te affinity property 2 mask 2 secondary mpls te backup hot-standby mpls te commit # interface Tunnel612 description Core_SPE2 to Site1_UPE1 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.2.51 mpls te tunnel-id 72 mpls te record-route mpls te affinity property 4 mask 4 mpls te affinity property 8 mask 8 secondary mpls te backup hot-standby mpls te commit # interface Tunnel621 description Core_SPE2 
to Site1_UPE2 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.2.50 mpls te tunnel-id 81 mpls te record-route mpls te affinity property 8 mask 8 mpls te affinity property 4 mask 4 secondary mpls te backup hot-standby mpls te commit # bgp 65000 graceful-restart group devCore internal peer devCore connect-interface LoopBack1 peer 172.16.0.4 as-number 65000 peer 172.16.0.4 group devCore peer 172.16.0.5 as-number 65000 peer 172.16.0.5 group devCore group devHost internal peer devHost connect-interface LoopBack1 peer 172.16.2.50 as-number 65000 peer 172.16.2.50 group devHost peer 172.16.2.51 as-number 65000 peer 172.16.2.51 group devHost peer 172.16.2.75 as-number 65000 peer 172.16.2.75 group devHost peer 172.16.2.76 as-number 65000 peer 172.16.2.76 group devHost # ipv4-family unicast undo synchronization undo peer devCore enable undo peer devHost enable undo peer 172.16.2.50 enable undo peer 172.16.2.51 enable undo peer 172.16.2.75 enable undo peer 172.16.2.76 enable # ipv4-family vpnv4 policy vpn-target auto-frr nexthop recursive-lookup delay 10 tunnel-selector TSel bestroute nexthop-resolved tunnel route-select delay 120 peer devCore enable peer devCore route-policy core-import import peer devCore advertise-community peer 172.16.0.4 enable peer 172.16.0.4 group devCore peer 172.16.0.5 enable peer 172.16.0.5 group devCore peer devHost enable peer devHost route-policy p_iBGP_RR_in import peer devHost advertise-community peer devHost upe peer devHost default-originate vpn-instance vpna peer 172.16.2.50 enable peer 172.16.2.50 group devHost peer 172.16.2.51 enable peer 172.16.2.51 group devHost peer 172.16.2.75 enable peer 172.16.2.75 group devHost peer 172.16.2.76 enable peer 172.16.2.76 group devHost # ipv4-family vpn-instance vpna default-route imported auto-frr nexthop recursive-lookup route-policy delay_policy nexthop recursive-lookup delay 10 vpn-route cross multipath route-select delay 120 # ospf 1 silent-interface all undo silent-interface Eth-Trunk2 undo silent-interface Eth-Trunk4 undo silent-interface Eth-Trunk17 undo silent-interface XGigabitEthernet5/0/5 spf-schedule-interval millisecond 10 lsa-originate-interval 0 lsa-arrival-interval 0 opaque-capability enable graceful-restart period 600 flooding-control area 0.0.0.0 authentication-mode hmac-sha256 1 cipher %^%#8|'*QyJCZ<@"H2,\pm@FUK3R3uSfFGaaJr39=1%^%# network 172.16.0.3 0.0.0.0 network 172.16.8.178 0.0.0.0 network 172.17.4.0 0.0.0.0 network 172.17.4.9 0.0.0.0 network 172.17.4.12 0.0.0.0 mpls-te enable # route-policy delay_policy permit node 0 if-match community-filter all_site # route-policy p_iBGP_RR_in deny node 5 if-match ip-prefix deny_host if-match community-filter all_site # route-policy p_iBGP_RR_in permit node 11 if-match community-filter site1 apply preferred-value 200 # route-policy p_iBGP_RR_in permit node 12 if-match community-filter site2 apply preferred-value 300 # route-policy p_iBGP_RR_in permit node 13 if-match community-filter site3 apply preferred-value 200 # route-policy p_iBGP_RR_in permit node 20 # route-policy core-import deny node 5 if-match community-filter site12 # route-policy core-import deny node 6 if-match community-filter site23 # route-policy core-import permit node 10 # ip ip-prefix deny_host index 10 permit 0.0.0.0 0 greater-equal 32 less-equal 32 ip ip-prefix core_nhp index 10 permit 172.16.0.4 32 ip ip-prefix core_nhp index 20 permit 172.16.0.5 32 # ip community-filter basic site1 permit 100:100 ip community-filter basic site2 permit 200:200 ip 
community-filter basic site3 permit 300:300 ip community-filter basic site12 permit 12:12 ip community-filter basic site23 permit 23:23 ip community-filter basic all_site permit 5720:5720 # tunnel-policy TSel tunnel select-seq cr-lsp lsp load-balance-number 1 # tunnel-policy TE tunnel select-seq cr-lsp load-balance-number 1 # bfd SPE2toSPE1 bind ldp-lsp peer-ip 172.16.0.5 nexthop 172.17.4.8 interface Eth-Trunk4 discriminator local 137 discriminator remote 317 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE2toSPE3 bind ldp-lsp peer-ip 172.16.0.4 nexthop 172.17.4.1 interface Eth-Trunk2 discriminator local 127 discriminator remote 217 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE2toUPE1_b bind mpls-te interface Tunnel612 te-lsp backup discriminator local 6126 discriminator remote 6125 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE2toUPE1_m bind mpls-te interface Tunnel612 te-lsp discriminator local 6122 discriminator remote 6121 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE2toUPE2_b bind mpls-te interface Tunnel621 te-lsp backup discriminator local 6216 discriminator remote 6215 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE2toUPE2_m bind mpls-te interface Tunnel621 te-lsp discriminator local 6212 discriminator remote 6211 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE2toUPE3_b bind mpls-te interface Tunnel111 te-lsp backup discriminator local 1116 discriminator remote 1115 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE2toUPE3_m bind mpls-te interface Tunnel111 te-lsp discriminator local 1112 discriminator remote 1111 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE2toUPE4_b bind mpls-te interface Tunnel121 te-lsp backup discriminator local 1216 discriminator remote 1215 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE2toUPE4_m bind mpls-te interface Tunnel121 te-lsp discriminator local 1212 discriminator remote 1211 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # return
Core_SPE3 configuration file
sysname Core_SPE3 # router id 172.16.0.4 # stp disable # ip vpn-instance vpna ipv4-family route-distinguisher 4:1 tnl-policy TSel vpn-target 0:1 export-extcommunity vpn-target 0:1 import-extcommunity # tunnel-selector TSel permit node 9 if-match ip next-hop ip-prefix core_nhp # tunnel-selector TSel permit node 10 apply tunnel-policy TE # bfd # mpls lsr-id 172.16.0.4 mpls mpls te label advertise non-null mpls rsvp-te mpls rsvp-te hello mpls rsvp-te hello full-gr mpls te cspf # mpls ldp graceful-restart # load-balance-profile CUSTOM ipv6 field l4-sport l4-dport ipv4 field l4-sport l4-dport # interface Eth-Trunk2 undo portswitch description Core_SPE3 to Core_SPE2 ip address 172.17.4.1 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group 3 mpls rsvp-te mpls rsvp-te hello mpls ldp mode lacp least active-linknumber 4 load-balance enhanced profile CUSTOM # interface Eth-Trunk5 undo portswitch description Core_SPE3 to Core_SPE1 ip address 172.17.4.3 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group 30 mpls rsvp-te mpls rsvp-te hello mpls ldp mode lacp least active-linknumber 4 load-balance enhanced profile CUSTOM # interface XGigabitEthernet1/0/0 eth-trunk 5 # interface XGigabitEthernet1/0/1 eth-trunk 5 # interface XGigabitEthernet1/0/2 eth-trunk 5 # interface XGigabitEthernet1/0/3 eth-trunk 5 # interface XGigabitEthernet2/0/4 eth-trunk 2 # interface XGigabitEthernet2/0/5 eth-trunk 2 # interface XGigabitEthernet2/0/6 eth-trunk 2 # interface XGigabitEthernet2/0/7 eth-trunk 2 # interface XGigabitEthernet6/0/1 undo portswitch description Core_SPE3 to Site3_UPE5 ip address 172.16.8.213 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group 10 mpls rsvp-te mpls rsvp-te hello mpls ldp # interface XGigabitEthernet6/0/3 undo portswitch description Core_SPE3 to Site2_UPE4 ip address 172.16.8.183 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group 2 mpls rsvp-te mpls rsvp-te hello mpls ldp # interface LoopBack1 description ** GRT Management Loopback ** ip address 172.16.0.4 255.255.255.255 # interface Tunnel112 description Core_SPE3 to Site2_UPE3 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.2.75 mpls te tunnel-id 112 mpls te bfd enable mpls te record-route mpls te affinity property 2 mask 2 mpls te affinity property 1 mask 1 secondary mpls te backup hot-standby mpls te commit # interface Tunnel122 description Core_SPE3 to Site2_UPE4 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.2.76 mpls te tunnel-id 122 mpls te record-route mpls te affinity property 2 mask 2 mpls te affinity property 1 mask 1 secondary mpls te backup hot-standby mpls te commit # interface Tunnel712 description Core_SPE3 to Site3_UPE6 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.2.86 mpls te tunnel-id 321 mpls te record-route mpls te affinity property 10 mask 10 mpls te affinity property 20 mask 20 secondary mpls te backup hot-standby mpls te commit # interface Tunnel722 description Core_SPE3 to Site3_UPE5 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.2.87 mpls te tunnel-id 322 mpls te record-route mpls te affinity property 10 mask 10 mpls te affinity property 20 mask 20 
secondary mpls te backup hot-standby mpls te commit # bgp 65000 graceful-restart group devCore internal peer devCore connect-interface LoopBack1 peer 172.16.0.3 as-number 65000 peer 172.16.0.3 group devCore peer 172.16.0.5 as-number 65000 peer 172.16.0.5 group devCore group devHost internal peer devHost connect-interface LoopBack1 peer 172.16.2.75 as-number 65000 peer 172.16.2.75 group devHost peer 172.16.2.76 as-number 65000 peer 172.16.2.76 group devHost peer 172.16.2.86 as-number 65000 peer 172.16.2.86 group devHost peer 172.16.2.87 as-number 65000 peer 172.16.2.87 group devHost # ipv4-family unicast undo synchronization undo peer devCore enable undo peer devHost enable undo peer 172.16.0.3 enable undo peer 172.16.0.5 enable undo peer 172.16.2.75 enable undo peer 172.16.2.76 enable undo peer 172.16.2.86 enable undo peer 172.16.2.87 enable # ipv4-family vpnv4 policy vpn-target auto-frr nexthop recursive-lookup delay 10 tunnel-selector TSel bestroute nexthop-resolved tunnel route-select delay 120 peer devCore enable peer devCore route-policy core-import import peer devCore advertise-community peer 172.16.0.3 enable peer 172.16.0.3 group devCore peer 172.16.0.5 enable peer 172.16.0.5 group devCore peer devHost enable peer devHost route-policy p_iBGP_RR_in import peer devHost advertise-community peer devHost upe peer devHost default-originate vpn-instance vpna peer 172.16.2.75 enable peer 172.16.2.75 group devHost peer 172.16.2.76 enable peer 172.16.2.76 group devHost peer 172.16.2.86 enable peer 172.16.2.86 group devHost peer 172.16.2.87 enable peer 172.16.2.87 group devHost # ipv4-family vpn-instance vpna default-route imported auto-frr nexthop recursive-lookup route-policy delay_policy nexthop recursive-lookup delay 10 vpn-route cross multipath route-select delay 120 # ospf 1 silent-interface all undo silent-interface Eth-Trunk5 undo silent-interface Eth-Trunk2 undo silent-interface XGigabitEthernet6/0/1 undo silent-interface XGigabitEthernet6/0/3 spf-schedule-interval millisecond 10 lsa-originate-interval 0 lsa-arrival-interval 0 opaque-capability enable graceful-restart period 600 flooding-control area 0.0.0.0 authentication-mode hmac-sha256 1 cipher %^%#N@WU@i600:_5W!%F!L~9%7ui(!x:VP5<mJ:z>zJX%^%# network 172.16.0.4 0.0.0.0 network 172.16.8.183 0.0.0.0 network 172.16.8.213 0.0.0.0 network 172.17.4.1 0.0.0.0 network 172.17.4.3 0.0.0.0 mpls-te enable # route-policy delay_policy permit node 0 # route-policy p_iBGP_RR_in deny node 5 if-match ip-prefix deny_host if-match community-filter all_site # route-policy p_iBGP_RR_in permit node 11 if-match community-filter site1 apply preferred-value 200 # route-policy p_iBGP_RR_in permit node 12 if-match community-filter site2 apply preferred-value 200 # route-policy p_iBGP_RR_in permit node 13 if-match community-filter site3 apply preferred-value 300 # route-policy p_iBGP_RR_in permit node 20 # route-policy core-import deny node 5 if-match community-filter site13 # route-policy core-import deny node 6 if-match community-filter site23 # route-policy core-import permit node 10 # ip ip-prefix deny_host index 10 permit 0.0.0.0 0 greater-equal 32 less-equal 32 ip ip-prefix core_nhp index 10 permit 172.16.0.3 32 ip ip-prefix core_nhp index 20 permit 172.16.0.5 32 # ip community-filter basic site1 permit 100:100 ip community-filter basic site2 permit 200:200 ip community-filter basic site3 permit 300:300 ip community-filter basic all_site permit 5720:5720 ip community-filter basic site13 permit 13:13 ip community-filter basic site23 permit 23:23 # 
tunnel-policy TSel tunnel select-seq cr-lsp lsp load-balance-number 1 # tunnel-policy TE tunnel select-seq cr-lsp load-balance-number 1 # bfd SPE3toSPE1 bind ldp-lsp peer-ip 172.16.0.5 nexthop 172.17.4.2 interface Eth-Trunk5 discriminator local 23 discriminator remote 32 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE3toSPE2 bind ldp-lsp peer-ip 172.16.0.3 nexthop 172.17.4.0 interface Eth-Trunk2 discriminator local 217 discriminator remote 127 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE3toUPE3_b bind mpls-te interface Tunnel112 te-lsp backup discriminator local 1126 discriminator remote 1125 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE3toUPE3_m bind mpls-te interface Tunnel112 te-lsp discriminator local 1122 discriminator remote 1121 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE3toUPE4_b bind mpls-te interface Tunnel122 te-lsp backup discriminator local 1226 discriminator remote 1225 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE3toUPE4_m bind mpls-te interface Tunnel122 te-lsp discriminator local 1222 discriminator remote 1221 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE3toUPE5_b bind mpls-te interface Tunnel722 te-lsp backup discriminator local 7226 discriminator remote 7225 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE3toUPE5_m bind mpls-te interface Tunnel722 te-lsp discriminator local 7222 discriminator remote 7221 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE3toUPE6_b bind mpls-te interface Tunnel712 te-lsp backup discriminator local 7126 discriminator remote 7125 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd SPE3toUPE6_m bind mpls-te interface Tunnel712 te-lsp discriminator local 7122 discriminator remote 7121 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # return
Site1_UPE1 configuration file
sysname Site1_UPE1 # router id 172.16.2.51 # arp vlink-direct-route advertise # stp disable # ip vpn-instance vpna ipv4-family route-distinguisher 1:1 ip frr route-policy mixfrr tnl-policy TSel arp vlink-direct-route advertise vpn-target 0:1 export-extcommunity vpn-target 0:1 import-extcommunity # bfd # mpls lsr-id 172.16.2.51 mpls mpls te label advertise non-null mpls rsvp-te mpls rsvp-te hello mpls rsvp-te hello full-gr mpls te cspf # mpls ldp graceful-restart # interface Eth-Trunk7 undo portswitch description Site1_UPE1 TO Site1_UPE2 ip address 172.17.4.14 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group c mpls rsvp-te mpls rsvp-te hello mpls ldp mode lacp least active-linknumber 4 # interface Eth-Trunk17 undo portswitch description Site1_UPE1 to Core_SPE1 ip address 172.17.4.11 255.255.255.254 ospf network-type p2p ospf ldp-sync ospf timer ldp-sync hold-down 20 mpls mpls te mpls te link administrative group 4 mpls rsvp-te mpls rsvp-te hello mpls ldp mode lacp least active-linknumber 4 # interface XGigabitEthernet1/0/0 eth-trunk 17 # interface XGigabitEthernet1/0/1 eth-trunk 17 # interface XGigabitEthernet1/0/2 eth-trunk 17 # interface XGigabitEthernet1/0/3 eth-trunk 17 # interface XGigabitEthernet1/0/4 port link-type trunk undo port trunk allow-pass vlan 1 # interface XGigabitEthernet1/0/4.200 dot1q termination vid 200 ip binding vpn-instance vpna arp direct-route enable ip address 172.18.200.66 255.255.255.192 vrrp vrid 1 virtual-ip 172.18.200.65 vrrp vrid 1 preempt-mode timer delay 250 vrrp vrid 1 track bfd-session 2200 peer vrrp vrid 1 backup-forward arp broadcast enable vrrp track bfd gratuitous-arp send enable # interface XGigabitEthernet4/0/4 eth-trunk 7 # interface XGigabitEthernet4/0/5 eth-trunk 7 # interface XGigabitEthernet4/0/6 eth-trunk 7 # interface XGigabitEthernet4/0/7 eth-trunk 7 # interface LoopBack1 description ** GRT Management Loopback ** ip address 172.16.2.51 255.255.255.255 # interface Tunnel611 description Site1_UPE1 to Core_SPE1 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.0.5 mpls te tunnel-id 71 mpls te record-route mpls te affinity property 4 mask 4 mpls te affinity property 8 mask 8 secondary mpls te backup hot-standby mpls te commit # interface Tunnel612 description Site1_UPE1 to Core_SPE2 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 172.16.0.3 mpls te tunnel-id 72 mpls te record-route mpls te affinity property 4 mask 4 mpls te affinity property 8 mask 8 secondary mpls te backup hot-standby mpls te commit # bfd vrrp-1 bind peer-ip 172.18.200.67 vpn-instance vpna interface XGigabitEthernet1/0/4.200 source-ip 172.18.200.66 discriminator local 2200 discriminator remote 1200 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 commit # bgp 65000 graceful-restart group devCore internal peer devCore connect-interface LoopBack1 peer 172.16.0.3 as-number 65000 peer 172.16.0.3 group devCore peer 172.16.0.5 as-number 65000 peer 172.16.0.5 group devCore group devHost internal peer devHost connect-interface LoopBack1 peer 172.16.2.50 as-number 65000 peer 172.16.2.50 group devHost # ipv4-family unicast undo synchronization undo peer devCore enable undo peer devHost enable undo peer 172.16.2.50 enable undo peer 172.16.0.3 enable undo peer 172.16.0.5 enable # ipv4-family vpnv4 policy vpn-target route-select delay 120 peer devCore enable peer devCore route-policy p_iBGP_host_ex export peer devCore 
advertise-community peer 172.16.0.3 enable peer 172.16.0.3 group devCore peer 172.16.0.3 preferred-value 200 peer 172.16.0.5 enable peer 172.16.0.5 group devCore peer 172.16.0.5 preferred-value 300 peer devHost enable peer devHost advertise-community peer 172.16.2.50 enable peer 172.16.2.50 group devHost # ipv4-family vpn-instance vpna default-route imported import-route direct route-policy p_iBGP_RR_ex auto-frr route-select delay 120 # # ospf 1 silent-interface all undo silent-interface Eth-Trunk7 undo silent-interface Eth-Trunk17 opaque-capability enable graceful-restart period 600 bandwidth-reference 100000 flooding-control area 0.0.0.0 authentication-mode hmac-sha256 1 cipher %^%#nU!dUe#c'J!;/%*WtZxQ<gP:'zx_E2OQnML]q;s#%^%# network 172.16.2.51 0.0.0.0 network 172.17.4.11 0.0.0.0 network 172.17.4.14 0.0.0.0 mpls-te enable # route-policy mixfrr permit node 0 apply backup-nexthop 172.16.2.50 # route-policy p_iBGP_host_ex permit node 0 apply community 100:100 5720:5720 12:12 # route-policy p_iBGP_RR_ex permit node 0 apply community 100:100 5720:5720 12:12 # arp expire-time 62640 arp static 172.18.200.68 xxxx-xxxx-xxxx vid 200 interface XGigabitEthernet1/0/4.200 # tunnel-policy TSel tunnel select-seq cr-lsp lsp load-balance-number 1 # bfd UPE1toSPE1_m_b bind mpls-te interface Tunnel611 te-lsp backup discriminator local 6115 discriminator remote 6116 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd UPE1toSPE1_m bind mpls-te interface Tunnel611 te-lsp discriminator local 6111 discriminator remote 6112 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd UPE1toSPE2_b bind mpls-te interface Tunnel612 te-lsp backup discriminator local 6125 discriminator remote 6126 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # bfd UPE1toSPE2_m bind mpls-te interface Tunnel612 te-lsp discriminator local 6121 discriminator remote 6122 detect-multiplier 8 min-tx-interval 3 min-rx-interval 3 process-pst commit # return
Site1_UPE2 configuration file
sysname Site1_UPE2
#
router id 172.16.2.50
#
arp vlink-direct-route advertise
#
stp disable
#
ip vpn-instance vpna
 ipv4-family
  route-distinguisher 1:1
  ip frr route-policy mixfrr
  tnl-policy TSel
  arp vlink-direct-route advertise
  vpn-target 0:1 export-extcommunity
  vpn-target 0:1 import-extcommunity
#
bfd
#
mpls lsr-id 172.16.2.50
mpls
 mpls te
 label advertise non-null
 mpls rsvp-te
 mpls rsvp-te hello
 mpls rsvp-te hello full-gr
 mpls te cspf
#
mpls ldp
 graceful-restart
#
interface Eth-Trunk7
 undo portswitch
 description Site1_UPE2 to Site1_UPE1
 ip address 172.17.4.15 255.255.255.254
 ospf network-type p2p
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
 mpls
 mpls te
 mpls te link administrative group c
 mpls rsvp-te
 mpls rsvp-te hello
 mpls ldp
 mode lacp
 least active-linknumber 4
#
interface Eth-Trunk17
 undo portswitch
 description Site1_UPE2 to Core_SPE2
 ip address 172.17.4.13 255.255.255.254
 ospf network-type p2p
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
 mpls
 mpls te
 mpls te link administrative group 8
 mpls rsvp-te
 mpls rsvp-te hello
 mpls ldp
 mode lacp
 least active-linknumber 4
#
interface XGigabitEthernet1/0/4
 port link-type trunk
#
interface XGigabitEthernet1/0/4.200
 dot1q termination vid 200
 ip binding vpn-instance vpna
 arp direct-route enable
 ip address 172.18.200.67 255.255.255.192
 vrrp vrid 1 virtual-ip 172.18.200.65
 vrrp vrid 1 priority 90
 vrrp vrid 1 preempt-mode timer delay 250
 vrrp vrid 1 track bfd-session 1200 peer
 vrrp vrid 1 backup-forward
 arp broadcast enable
 vrrp track bfd gratuitous-arp send enable
#
interface XGigabitEthernet6/0/0
 eth-trunk 17
#
interface XGigabitEthernet6/0/1
 eth-trunk 17
#
interface XGigabitEthernet6/0/2
 eth-trunk 17
#
interface XGigabitEthernet6/0/3
 eth-trunk 17
#
interface XGigabitEthernet6/0/4
 eth-trunk 7
#
interface XGigabitEthernet6/0/5
 eth-trunk 7
#
interface XGigabitEthernet6/0/6
 eth-trunk 7
#
interface XGigabitEthernet6/0/7
 eth-trunk 7
#
interface LoopBack1
 description ** GRT Management Loopback **
 ip address 172.16.2.50 255.255.255.255
#
interface Tunnel621
 description Site1_UPE2 to Core_SPE2
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.0.3
 mpls te tunnel-id 81
 mpls te record-route
 mpls te affinity property 8 mask 8
 mpls te affinity property 4 mask 4 secondary
 mpls te backup hot-standby
 mpls te commit
#
interface Tunnel622
 description Site1_UPE2 to Core_SPE1
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.0.5
 mpls te tunnel-id 82
 mpls te record-route
 mpls te affinity property 8 mask 8
 mpls te affinity property 4 mask 4 secondary
 mpls te backup hot-standby
 mpls te commit
#
bfd vrrp-1 bind peer-ip 172.18.200.66 vpn-instance vpna interface XGigabitEthernet1/0/4.200 source-ip 172.18.200.67
 discriminator local 1200
 discriminator remote 2200
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 commit
#
bgp 65000
 graceful-restart
 group devCore internal
 peer devCore connect-interface LoopBack1
 peer 172.16.0.3 as-number 65000
 peer 172.16.0.3 group devCore
 peer 172.16.0.5 as-number 65000
 peer 172.16.0.5 group devCore
 group devHost internal
 peer devHost connect-interface LoopBack1
 peer 172.16.2.51 as-number 65000
 peer 172.16.2.51 group devHost
 #
 ipv4-family unicast
  undo synchronization
  undo peer devCore enable
  undo peer devHost enable
  undo peer 172.16.2.51 enable
  undo peer 172.16.0.3 enable
  undo peer 172.16.0.5 enable
 #
 ipv4-family vpnv4
  policy vpn-target
  route-select delay 120
  peer devCore enable
  peer devCore route-policy p_iBGP_host_ex export
  peer devCore advertise-community
  peer 172.16.0.3 enable
  peer 172.16.0.3 group devCore
  peer 172.16.0.3 preferred-value 300
  peer 172.16.0.5 enable
  peer 172.16.0.5 group devCore
  peer 172.16.0.5 preferred-value 200
  peer devHost enable
  peer devHost advertise-community
  peer 172.16.2.51 enable
  peer 172.16.2.51 group devHost
 #
 ipv4-family vpn-instance vpna
  default-route imported
  import-route direct route-policy p_iBGP_RR_ex
  auto-frr
  route-select delay 120
#
ospf 1
 silent-interface all
 undo silent-interface Eth-Trunk7
 undo silent-interface Eth-Trunk17
 opaque-capability enable
 graceful-restart period 600
 bandwidth-reference 100000
 flooding-control
 area 0.0.0.0
  authentication-mode hmac-sha256 1 cipher %^%#GUPhWw-[LH2O6#NMxtJAl!Io8W~iF'![mQF[\9GI%^%#
  network 172.16.2.50 0.0.0.0
  network 172.16.2.92 0.0.0.0
  network 172.17.4.13 0.0.0.0
  network 172.17.4.15 0.0.0.0
  mpls-te enable
#
route-policy mixfrr permit node 0
 apply backup-nexthop 172.16.2.51
#
route-policy p_iBGP_host_ex permit node 0
 apply community 200:200 5720:5720 12:12
#
route-policy p_iBGP_RR_ex permit node 0
 apply community 200:200 5720:5720 12:12
#
arp expire-time 62640
arp static 172.18.200.68 0001-0002-0003 vid 200 interface XGigabitEthernet1/0/4.200
#
tunnel-policy TSel
 tunnel select-seq cr-lsp lsp load-balance-number 1
#
bfd UPE2toSPE1_b bind mpls-te interface Tunnel622 te-lsp backup
 discriminator local 6225
 discriminator remote 6226
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE2toSPE1_m bind mpls-te interface Tunnel622 te-lsp
 discriminator local 6221
 discriminator remote 6222
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE2toSPE2_b bind mpls-te interface Tunnel621 te-lsp backup
 discriminator local 6215
 discriminator remote 6216
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE2toSPE2_m bind mpls-te interface Tunnel621 te-lsp
 discriminator local 6211
 discriminator remote 6212
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
return
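Note: The display commands below are not part of the Site1_UPE2 configuration file. They are a minimal verification sketch, assuming the standard VRP display commands are available on the S6730-H, for checking the VRRP gateway role and the BFD session (vrrp-1, local discriminator 1200) that VRRP tracks:
display vrrp interface XGigabitEthernet1/0/4.200 verbose   //VRRP master/backup state and the tracked BFD session
display bfd session all verbose                            //State of the vrrp-1 BFD session toward 172.18.200.66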
Site2_UPE3 configuration file
sysname Site2_UPE3
#
router id 172.16.2.75
#
arp vlink-direct-route advertise
#
stp disable
#
set service-mode enhanced
#
ip vpn-instance vpna
 ipv4-family
  route-distinguisher 1:1
  ip frr route-policy mixfrr
  tnl-policy TSel
  arp vlink-direct-route advertise
  vpn-target 0:1 export-extcommunity
  vpn-target 0:1 import-extcommunity
#
bfd
#
mpls lsr-id 172.16.2.75
mpls
 mpls te
 label advertise non-null
 mpls rsvp-te
 mpls rsvp-te hello
 mpls rsvp-te hello full-gr
 mpls te cspf
#
mpls ldp
 graceful-restart
#
interface XGigabitEthernet0/0/1
 undo portswitch
 description Site2_UPE3 to Core_SPE2
 ip address 172.16.8.179 255.255.255.254
 ospf network-type p2p
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
 mpls
 mpls te
 mpls te link administrative group 1
 mpls rsvp-te
 mpls rsvp-te hello
 mpls ldp
#
interface XGigabitEthernet0/0/2
 port link-type trunk
 undo port trunk allow-pass vlan 1
#
interface XGigabitEthernet0/0/2.150
 dot1q termination vid 150
 ip binding vpn-instance vpna
 arp direct-route enable
 ip address 172.18.150.2 255.255.255.192
 vrrp vrid 1 virtual-ip 172.18.150.1
 vrrp vrid 1 preempt-mode timer delay 250
 vrrp vrid 1 track bfd-session 2150 peer
 vrrp vrid 1 backup-forward
 arp broadcast enable
 vrrp track bfd gratuitous-arp send enable
#
interface XGigabitEthernet0/0/4
 undo portswitch
 description Site2_UPE3 to Site2_UPE4
 ip address 172.16.8.180 255.255.255.254
 ospf network-type p2p
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
 mpls
 mpls te
 mpls te link administrative group 3
 mpls rsvp-te
 mpls rsvp-te hello
 mpls ldp
#
interface LoopBack1
 description ** GRT Management Loopback **
 ip address 172.16.2.75 255.255.255.255
#
interface Tunnel111
 description Site2_UPE3 to Core_SPE2
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.0.3
 mpls te tunnel-id 111
 mpls te record-route
 mpls te affinity property 1 mask 1
 mpls te affinity property 2 mask 2 secondary
 mpls te backup hot-standby
 mpls te commit
#
interface Tunnel112
 description Site2_UPE3 to Core_SPE3
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.0.4
 mpls te tunnel-id 112
 mpls te record-route
 mpls te affinity property 2 mask 2
 mpls te affinity property 1 mask 1 secondary
 mpls te backup hot-standby
 mpls te commit
#
bfd vrrp-1 bind peer-ip 172.18.150.3 vpn-instance vpna interface XGigabitEthernet0/0/2.150 source-ip 172.18.150.2
 discriminator local 2150
 discriminator remote 1150
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 commit
#
bgp 65000
 graceful-restart
 group devCore internal
 peer devCore connect-interface LoopBack1
 peer 172.16.0.3 as-number 65000
 peer 172.16.0.3 group devCore
 peer 172.16.0.4 as-number 65000
 peer 172.16.0.4 group devCore
 group devHost internal
 peer devHost connect-interface LoopBack1
 peer 172.16.2.76 as-number 65000
 peer 172.16.2.76 group devHost
 #
 ipv4-family unicast
  undo synchronization
  undo peer devCore enable
  undo peer devHost enable
  undo peer 172.16.0.3 enable
  undo peer 172.16.0.4 enable
  undo peer 172.16.2.76 enable
 #
 ipv4-family vpnv4
  policy vpn-target
  route-select delay 120
  peer devCore enable
  peer devCore route-policy p_iBGP_host_ex export
  peer devCore advertise-community
  peer 172.16.0.3 enable
  peer 172.16.0.3 group devCore
  peer 172.16.0.3 preferred-value 300
  peer 172.16.0.4 enable
  peer 172.16.0.4 group devCore
  peer 172.16.0.4 preferred-value 200
  peer devHost enable
  peer devHost advertise-community
  peer 172.16.2.76 enable
  peer 172.16.2.76 group devHost
 #
 ipv4-family vpn-instance vpna
  default-route imported
  import-route direct route-policy p_iBGP_RR_ex
  auto-frr
  route-select delay 120
#
ospf 1
 silent-interface all
 undo silent-interface XGigabitEthernet0/0/1
 undo silent-interface XGigabitEthernet0/0/4
 opaque-capability enable
 graceful-restart period 600
 bandwidth-reference 100000
 flooding-control
 area 0.0.0.0
  authentication-mode hmac-sha256 1 cipher %^%#zJm-P{(FiMrB0bLa^ST'z[!(UezNNTx\CQ6@N\,K%^%#
  network 172.16.2.75 0.0.0.0
  network 172.16.8.179 0.0.0.0
  network 172.16.8.180 0.0.0.0
  mpls-te enable
#
route-policy mixfrr permit node 0
 apply backup-nexthop 172.16.2.76
#
route-policy p_iBGP_host_ex permit node 10
 apply community 200:200 5720:5720 23:23
#
route-policy p_iBGP_RR_ex permit node 0
 apply community 200:200 5720:5720 23:23
#
arp expire-time 62640
arp static 172.18.150.4 00e0-fc12-3456 vid 150 interface XGigabitEthernet0/0/2.150
#
tunnel-policy TSel
 tunnel select-seq cr-lsp lsp load-balance-number 1
#
bfd UPE3toSPE2_b bind mpls-te interface Tunnel111 te-lsp backup
 discriminator local 1115
 discriminator remote 1116
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE3toSPE2_m bind mpls-te interface Tunnel111 te-lsp
 discriminator local 1111
 discriminator remote 1112
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE3toSPE3_b bind mpls-te interface Tunnel112 te-lsp backup
 discriminator local 1125
 discriminator remote 1126
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE3toSPE3_m bind mpls-te interface Tunnel112 te-lsp
 discriminator local 1121
 discriminator remote 1122
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
return
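Note: The commands below are not part of the Site2_UPE3 configuration file. They are a minimal sketch, assuming the standard VRP display commands, for checking that the primary and hot-standby CR-LSPs of Tunnel111 and Tunnel112 are up and follow the affinity-constrained paths:
display mpls te tunnel-interface          //CR-LSP status of each TE tunnel interface, including the hot-standby LSP
display mpls te tunnel path               //Recorded hop-by-hop paths of the primary and backup CR-LSPs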
Site2_UPE4 configuration file
sysname Site2_UPE4
#
router id 172.16.2.76
#
arp vlink-direct-route advertise
#
stp disable
#
set service-mode enhanced
#
ip vpn-instance vpna
 ipv4-family
  route-distinguisher 1:1
  ip frr route-policy mixfrr
  tnl-policy TSel
  arp vlink-direct-route advertise
  vpn-target 0:1 export-extcommunity
  vpn-target 0:1 import-extcommunity
#
bfd
#
mpls lsr-id 172.16.2.76
mpls
 mpls te
 label advertise non-null
 mpls rsvp-te
 mpls rsvp-te hello
 mpls rsvp-te hello full-gr
 mpls te cspf
#
mpls ldp
 graceful-restart
#
interface XGigabitEthernet0/0/1
 undo portswitch
 description Site2_UPE4 to Core_SPE3
 ip address 172.16.8.182 255.255.255.254
 ospf network-type p2p
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
 mpls
 mpls te
 mpls te link administrative group 2
 mpls rsvp-te
 mpls rsvp-te hello
 mpls ldp
#
interface XGigabitEthernet0/0/2
 port link-type trunk
 undo port trunk allow-pass vlan 1
#
interface XGigabitEthernet0/0/2.150
 dot1q termination vid 150
 ip binding vpn-instance vpna
 arp direct-route enable
 ip address 172.18.150.3 255.255.255.192
 vrrp vrid 1 virtual-ip 172.18.150.1
 vrrp vrid 1 priority 90
 vrrp vrid 1 preempt-mode timer delay 250
 vrrp vrid 1 track bfd-session 1150 peer
 vrrp vrid 1 backup-forward
 arp broadcast enable
 vrrp track bfd gratuitous-arp send enable
#
interface XGigabitEthernet0/0/4
 undo portswitch
 description Site2_UPE4 to Site2_UPE3
 ip address 172.16.8.181 255.255.255.254
 ospf network-type p2p
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
 mpls
 mpls te
 mpls te link administrative group 3
 mpls rsvp-te
 mpls rsvp-te hello
 mpls ldp
#
interface LoopBack1
 description ** GRT Management Loopback **
 ip address 172.16.2.76 255.255.255.255
#
interface Tunnel121
 description Site2_UPE4 to Core_SPE2
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.0.3
 mpls te tunnel-id 121
 mpls te record-route
 mpls te affinity property 1 mask 1
 mpls te affinity property 2 mask 2 secondary
 mpls te backup hot-standby
 mpls te commit
#
interface Tunnel122
 description Site2_UPE4 to Core_SPE3
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.0.4
 mpls te tunnel-id 122
 mpls te record-route
 mpls te affinity property 2 mask 2
 mpls te affinity property 1 mask 1 secondary
 mpls te backup hot-standby
 mpls te commit
#
bfd vrrp-1 bind peer-ip 172.18.150.2 vpn-instance vpna interface XGigabitEthernet0/0/2.150 source-ip 172.18.150.3
 discriminator local 1150
 discriminator remote 2150
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 commit
#
bgp 65000
 graceful-restart
 group devCore internal
 peer devCore connect-interface LoopBack1
 peer 172.16.0.3 as-number 65000
 peer 172.16.0.3 group devCore
 peer 172.16.0.4 as-number 65000
 peer 172.16.0.4 group devCore
 group devHost internal
 peer devHost connect-interface LoopBack1
 peer 172.16.2.75 as-number 65000
 peer 172.16.2.75 group devHost
 #
 ipv4-family unicast
  undo synchronization
  undo peer devCore enable
  undo peer devHost enable
  undo peer 172.16.0.3 enable
  undo peer 172.16.0.4 enable
  undo peer 172.16.2.75 enable
 #
 ipv4-family vpnv4
  policy vpn-target
  route-select delay 120
  peer devCore enable
  peer devCore route-policy p_iBGP_host_ex export
  peer devCore advertise-community
  peer 172.16.0.3 enable
  peer 172.16.0.3 group devCore
  peer 172.16.0.3 preferred-value 200
  peer 172.16.0.4 enable
  peer 172.16.0.4 group devCore
  peer 172.16.0.4 preferred-value 300
  peer devHost enable
  peer devHost advertise-community
  peer 172.16.2.75 enable
  peer 172.16.2.75 group devHost
 #
 ipv4-family vpn-instance vpna
  default-route imported
  import-route direct route-policy p_iBGP_RR_ex
  auto-frr
  route-select delay 120
#
ospf 1
 silent-interface all
 undo silent-interface XGigabitEthernet0/0/1
 undo silent-interface XGigabitEthernet0/0/4
 opaque-capability enable
 graceful-restart period 600
 bandwidth-reference 100000
 flooding-control
 area 0.0.0.0
  authentication-mode hmac-sha256 1 cipher %^%#"sZy-UeQ88(kmb#.o"Y8*@/_9D[_<-3ET`+!1no4%^%#
  network 172.16.2.76 0.0.0.0
  network 172.16.8.181 0.0.0.0
  network 172.16.8.182 0.0.0.0
  mpls-te enable
#
route-policy mixfrr permit node 0
 apply backup-nexthop 172.16.2.75
#
route-policy p_iBGP_host_ex permit node 0
 apply community 300:300 5720:5720 23:23
#
route-policy p_iBGP_RR_ex permit node 0
 apply community 300:300 5720:5720 23:23
#
arp expire-time 62640
arp static 172.18.150.4 0000-0001-0003 vid 150 interface XGigabitEthernet0/0/2.150
#
tunnel-policy TSel
 tunnel select-seq cr-lsp lsp load-balance-number 1
#
bfd UPE4toSPE2_b bind mpls-te interface Tunnel121 te-lsp backup
 discriminator local 1215
 discriminator remote 1216
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE4toSPE2_m bind mpls-te interface Tunnel121 te-lsp
 discriminator local 1211
 discriminator remote 1212
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE4toSPE3_b bind mpls-te interface Tunnel122 te-lsp backup
 discriminator local 1225
 discriminator remote 1226
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE4toSPE3_m bind mpls-te interface Tunnel122 te-lsp
 discriminator local 1221
 discriminator remote 1222
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
return
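Note: The commands below are not part of the Site2_UPE4 configuration file. They are a minimal sketch, assuming the standard VRP display commands, for confirming that the VPNv4 peerings with the SPEs and with Site2_UPE3 are established and that VPN routes reflect the configured preferred-values (300 toward 172.16.0.4, 200 toward 172.16.0.3):
display bgp vpnv4 all peer                               //State of the devCore and devHost VPNv4 peers
display bgp vpnv4 vpn-instance vpna routing-table        //Routes selected in VPN instance vpna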
Site3_UPE5 configuration file
sysname Site3_UPE5
#
router id 172.16.2.87
#
arp vlink-direct-route advertise
#
stp disable
#
set service-mode enhanced
#
ip vpn-instance vpna
 ipv4-family
  route-distinguisher 1:1
  ip frr route-policy mixfrr
  tnl-policy TSel
  arp vlink-direct-route advertise
  vpn-target 0:1 export-extcommunity
  vpn-target 0:1 import-extcommunity
#
bfd
#
mpls lsr-id 172.16.2.87
mpls
 mpls te
 label advertise non-null
 mpls rsvp-te
 mpls rsvp-te hello
 mpls rsvp-te hello full-gr
 mpls te cspf
#
mpls ldp
 graceful-restart
#
interface XGigabitEthernet0/0/2
 port link-type trunk
 undo port trunk allow-pass vlan 1
#
interface XGigabitEthernet0/0/2.100
 dot1q termination vid 100
 ip binding vpn-instance vpna
 arp direct-route enable
 ip address 172.18.100.2 255.255.255.192
 vrrp vrid 1 virtual-ip 172.18.100.1
 vrrp vrid 1 preempt-mode timer delay 250
 vrrp vrid 1 track bfd-session 2150 peer
 vrrp vrid 1 backup-forward
 arp broadcast enable
 vrrp track bfd gratuitous-arp send enable
#
interface XGigabitEthernet0/0/1
 undo portswitch
 description Site3_UPE5 to Site3_UPE6
 ip address 172.17.10.0 255.255.255.254
 ospf network-type p2p
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
 mpls
 mpls te
 mpls te link administrative group 3
 mpls rsvp-te
 mpls rsvp-te hello
 mpls ldp
#
interface XGigabitEthernet0/0/4
 undo portswitch
 description Site3_UPE5 to Core_SPE3
 ip address 172.16.8.212 255.255.255.254
 ospf network-type p2p
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
 mpls
 mpls te
 mpls te link administrative group 2
 mpls rsvp-te
 mpls rsvp-te hello
 mpls ldp
#
interface LoopBack1
 description ** GRT Management Loopback **
 ip address 172.16.2.87 255.255.255.255
#
interface Tunnel721
 description Site3_UPE5 to Core_SPE1
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.0.5
 mpls te tunnel-id 312
 mpls te record-route
 mpls te affinity property 1 mask 1
 mpls te affinity property 2 mask 2 secondary
 mpls te backup hot-standby
 mpls te commit
#
interface Tunnel722
 description Site3_UPE5 to Core_SPE3
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.0.4
 mpls te tunnel-id 322
 mpls te record-route
 mpls te affinity property 2 mask 2
 mpls te affinity property 1 mask 1 secondary
 mpls te backup hot-standby
 mpls te commit
#
bfd vrrp-2000 bind peer-ip 172.18.100.3 vpn-instance vpna interface XGigabitEthernet0/0/2.100 source-ip 172.18.100.2 auto
 min-tx-interval 3
 min-rx-interval 3
 commit
#
bgp 65000
 graceful-restart
 group devCore internal
 peer devCore connect-interface LoopBack1
 peer 172.16.0.4 as-number 65000
 peer 172.16.0.4 group devCore
 peer 172.16.0.5 as-number 65000
 peer 172.16.0.5 group devCore
 group devHost internal
 peer devHost connect-interface LoopBack1
 peer 172.16.2.86 as-number 65000
 peer 172.16.2.86 group devHost
 #
 ipv4-family unicast
  undo synchronization
  undo peer devCore enable
  undo peer devHost enable
  undo peer 172.16.0.4 enable
  undo peer 172.16.0.5 enable
  undo peer 172.16.2.86 enable
 #
 ipv4-family vpnv4
  policy vpn-target
  route-select delay 120
  peer devCore enable
  peer devCore route-policy p_iBGP_host_ex export
  peer devCore advertise-community
  peer 172.16.0.4 enable
  peer 172.16.0.4 group devCore
  peer 172.16.0.4 preferred-value 300
  peer 172.16.0.5 enable
  peer 172.16.0.5 group devCore
  peer 172.16.0.5 preferred-value 200
  peer devHost enable
  peer devHost advertise-community
  peer 172.16.2.86 enable
  peer 172.16.2.86 group devHost
 #
 ipv4-family vpn-instance vpna
  default-route imported
  import-route direct route-policy p_iBGP_RR_ex
  auto-frr
  route-select delay 120
#
ospf 1
 silent-interface all
 undo silent-interface XGigabitEthernet0/0/1
 undo silent-interface XGigabitEthernet0/0/4
 opaque-capability enable
 graceful-restart period 600
 bandwidth-reference 100000
 flooding-control
 area 0.0.0.0
  authentication-mode hmac-sha256 1 cipher %#%#^tB:@vm8r%4Z0),RRem7dU.A3.}(a&*/IhJ70>y9%#%#
  network 172.16.2.87 0.0.0.0
  network 172.16.8.212 0.0.0.0
  network 172.17.10.0 0.0.0.0
  mpls-te enable
#
route-policy mixfrr permit node 0
 apply backup-nexthop 172.16.2.86
#
route-policy p_iBGP_host_ex permit node 0
 apply community 300:300 5720:5720 13:13
#
route-policy p_iBGP_RR_ex permit node 0
 apply community 300:300 5720:5720 13:13
#
arp expire-time 62640
arp static 172.18.100.4 00e0-fc12-3456 vid 100 interface XGigabitEthernet0/0/2.100
#
tunnel-policy TSel
 tunnel select-seq cr-lsp lsp load-balance-number 1
#
bfd UPE5toSPE1_b bind mpls-te interface Tunnel721 te-lsp backup
 discriminator local 7215
 discriminator remote 7216
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE5toSPE1_m bind mpls-te interface Tunnel721 te-lsp
 discriminator local 7211
 discriminator remote 7212
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE5toSPE3_b bind mpls-te interface Tunnel722 te-lsp backup
 discriminator local 7225
 discriminator remote 7226
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE5toSPE3_m bind mpls-te interface Tunnel722 te-lsp
 discriminator local 7221
 discriminator remote 7222
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
return
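Note: The commands below are not part of the Site3_UPE5 configuration file. They are a minimal sketch, assuming the standard VRP display commands, for checking that VPN FRR (route-policy mixfrr) has installed 172.16.2.86 as the backup next hop in VPN instance vpna and that the TE-LSP BFD sessions are up:
display ip routing-table vpn-instance vpna verbose       //Primary and backup next hops of VPN routes
display bfd session all                                  //Status of the BFD sessions bound to Tunnel721 and Tunnel722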
Site3_UPE6 configuration file
sysname Site3_UPE6
#
router id 172.16.2.86
#
arp vlink-direct-route advertise
#
stp disable
#
set service-mode enhanced
#
ip vpn-instance vpna
 ipv4-family
  route-distinguisher 1:1
  ip frr route-policy mixfrr
  tnl-policy TSel
  arp vlink-direct-route advertise
  vpn-target 0:1 export-extcommunity
  vpn-target 0:1 import-extcommunity
#
bfd
#
mpls lsr-id 172.16.2.86
mpls
 mpls te
 label advertise non-null
 mpls rsvp-te
 mpls rsvp-te hello
 mpls rsvp-te hello full-gr
 mpls te cspf
#
mpls ldp
 graceful-restart
#
interface XGigabitEthernet0/0/2
 port link-type trunk
 undo port trunk allow-pass vlan 1
#
interface XGigabitEthernet0/0/2.100
 dot1q termination vid 100
 ip binding vpn-instance vpna
 arp direct-route enable
 ip address 172.18.100.3 255.255.255.192
 vrrp vrid 1 virtual-ip 172.18.100.1
 vrrp vrid 1 priority 90
 vrrp vrid 1 preempt-mode timer delay 250
 vrrp vrid 1 track bfd-session 2150 peer
 vrrp vrid 1 backup-forward
 arp broadcast enable
 vrrp track bfd gratuitous-arp send enable
#
interface XGigabitEthernet0/0/1
 undo portswitch
 description Site3_UPE6 to Site3_UPE5
 ip address 172.17.10.1 255.255.255.254
 ospf network-type p2p
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
 mpls
 mpls te
 mpls te link administrative group 3
 mpls rsvp-te
 mpls rsvp-te hello
 mpls ldp
#
interface XGigabitEthernet0/0/4
 undo portswitch
 description Site3_UPE6 to Core_SPE1
 ip address 172.17.10.3 255.255.255.254
 ospf network-type p2p
 ospf ldp-sync
 ospf timer ldp-sync hold-down 20
 mpls
 mpls te
 mpls te link administrative group 1
 mpls rsvp-te
 mpls rsvp-te hello
 mpls ldp
#
interface LoopBack1
 description ** GRT Management Loopback **
 ip address 172.16.2.86 255.255.255.255
#
interface Tunnel711
 description Site3_UPE6 to Core_SPE1
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.0.5
 mpls te tunnel-id 311
 mpls te record-route
 mpls te affinity property 1 mask 1
 mpls te affinity property 2 mask 2 secondary
 mpls te backup hot-standby
 mpls te commit
#
interface Tunnel712
 description Site3_UPE6 to Core_SPE3
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 172.16.0.4
 mpls te tunnel-id 321
 mpls te record-route
 mpls te affinity property 2 mask 2
 mpls te affinity property 1 mask 1 secondary
 mpls te backup hot-standby
 mpls te commit
#
bfd vrrp-1 bind peer-ip 172.18.100.2 vpn-instance vpna interface XGigabitEthernet0/0/2.100 source-ip 172.18.100.3 auto
 min-tx-interval 3
 min-rx-interval 3
 commit
#
bgp 65000
 graceful-restart
 group devCore internal
 peer devCore connect-interface LoopBack1
 peer 172.16.0.4 as-number 65000
 peer 172.16.0.4 group devCore
 peer 172.16.0.5 as-number 65000
 peer 172.16.0.5 group devCore
 group devHost internal
 peer devHost connect-interface LoopBack1
 peer 172.16.2.87 as-number 65000
 peer 172.16.2.87 group devHost
 #
 ipv4-family unicast
  undo synchronization
  undo peer devCore enable
  undo peer devHost enable
  undo peer 172.16.0.4 enable
  undo peer 172.16.0.5 enable
  undo peer 172.16.2.87 enable
 #
 ipv4-family vpnv4
  policy vpn-target
  route-select delay 120
  peer devCore enable
  peer devCore route-policy p_iBGP_host_ex export
  peer devCore advertise-community
  peer 172.16.0.4 enable
  peer 172.16.0.4 group devCore
  peer 172.16.0.4 preferred-value 200
  peer 172.16.0.5 enable
  peer 172.16.0.5 group devCore
  peer 172.16.0.5 preferred-value 300
  peer devHost enable
  peer devHost advertise-community
  peer 172.16.2.87 enable
  peer 172.16.2.87 group devHost
 #
 ipv4-family vpn-instance vpna
  default-route imported
  import-route direct route-policy p_iBGP_RR_ex
  auto-frr
  route-select delay 120
#
ospf 1
 silent-interface all
 undo silent-interface XGigabitEthernet0/0/1
 undo silent-interface XGigabitEthernet0/0/4
 opaque-capability enable
 graceful-restart period 600
 bandwidth-reference 100000
 flooding-control
 area 0.0.0.0
  authentication-mode hmac-sha256 1 cipher %#%#<3.TS63Ml*_Gn]2$}@O/G8llX)VNvDY\kT;4E9-A%#%#
  network 172.16.2.86 0.0.0.0
  network 172.17.10.1 0.0.0.0
  network 172.17.10.3 0.0.0.0
  mpls-te enable
#
route-policy mixfrr permit node 0
 apply backup-nexthop 172.16.2.87
#
route-policy p_iBGP_host_ex permit node 0
 apply community 100:100 5720:5720 13:13
#
route-policy p_iBGP_RR_ex permit node 0
 apply community 100:100 5720:5720 13:13
#
arp expire-time 62640
arp static 172.18.100.4 00e0-fc12-3456 vid 100 interface XGigabitEthernet0/0/2.100
#
tunnel-policy TSel
 tunnel select-seq cr-lsp lsp load-balance-number 1
#
bfd UPE6toSPE1_b bind mpls-te interface Tunnel711 te-lsp backup
 discriminator local 7115
 discriminator remote 7116
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE6toSPE1_m bind mpls-te interface Tunnel711 te-lsp
 discriminator local 7111
 discriminator remote 7112
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE6toSPE3_b bind mpls-te interface Tunnel712 te-lsp backup
 discriminator local 7125
 discriminator remote 7126
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
bfd UPE6toSPE3_m bind mpls-te interface Tunnel712 te-lsp
 discriminator local 7121
 discriminator remote 7122
 detect-multiplier 8
 min-tx-interval 3
 min-rx-interval 3
 process-pst
 commit
#
return
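Note: The commands below are not part of the Site3_UPE6 configuration file. They are a minimal sketch, assuming the standard VRP display commands, for verifying the IGP and LDP state that the TE tunnels and HSR protection depend on:
display ospf peer brief                                  //OSPF adjacencies on the links toward Site3_UPE5 and Core_SPE1
display mpls ldp session                                 //LDP sessions used together with OSPF-LDP synchronization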