S2700, S3700, S5700, S6700, S7700, and S9700 Series Switches Typical Configuration Examples

This document provides examples for configuring features in typical usage scenarios.
Deploying L3VPN Services and Protection (HoVPN)

Configuration Roadmap

On a rail transit bearer network, IP tunnels between nodes need to be set up to bear L3VPN services. For example, set up a hierarchical L3VPN tunnel from Site1_UPE1 to Site2_UPE3 to transmit IP data services between Site1 and Site2, as shown in Figure 2-19.

Figure 2-19  Hierarchical L3VPN

The configuration roadmap is as follows:

  1. Deploy MP-BGP.

    • Set up MP-IBGP peer relationships between UPEs and SPEs and between SPEs.
    • Configure routing rules so that traffic from UPEs to SPEs is forwarded through the default route and traffic from SPEs to UPEs is forwarded through specific routes.
    • Configure route priority policies to enable UPEs to forward traffic to other sites preferentially through SPEs directly connected to the UPEs.
    • Configure route priority policies to enable SPEs to forward traffic to other sites preferentially through UPEs directly connected to the SPEs.
    • Configure route filtering policies to prevent SPEs from advertising ARP Vlink direct routes of their local sites to UPEs at other sites.
    • Configure route filtering policies to prevent SPEs from accepting, from other SPEs, routes of the sites directly connected to them, preventing routing loops. For example, prevent Core_SPE2 from accepting routes of Site1 from Core_SPE1 and routes of Site2 from Core_SPE3.
  2. Deploy VPN services.

    • Deploy VPN instances on UPEs and SPEs, and bind interfaces to the VPN instances on UPEs, but not on SPEs.
    • Preferentially use TE tunnels to bear VPN services on UPEs. In hybrid FRR mode, LSP tunnels can be used to bear VPN services.
    • Configure a tunnel policy selector on an SPE so that the SPE can select any tunnel when the next-hop address of a VPNv4 route matches the prefix of another SPE, and selects a TE tunnel in other scenarios.
    • Deploy VRRP on two UPEs at a site, and send information about ARP Vlink direct routes to the neighboring SPEs so that the SPEs select the optimal route to send packets to the CE.
  3. Configure reliability protection.

    • Deploy VRRP on two UPEs at a site to implement gateway backup and ensure reliability of uplink traffic on CEs. Configure backup devices to forward service traffic, minimizing the impact of VRRP switchovers on services.
    • Deploy VPN FRR on a UPE. If the TE tunnel between the UPE and an SPE is faulty, traffic is automatically switched to the TE tunnel between the UPE and another SPE at the same site, minimizing the impact on VPN services.
    • Deploy VPN FRR on an SPE, for example, Core_SPE1. If Core_SPE2 connected to Core_SPE1 is faulty, Core_SPE1 switches VPN services to Core_SPE3, implementing fast E2E switchovers of VPN services.
    • Deploy VPN FRR on an SPE. If the TE tunnel between the SPE and a UPE is faulty, traffic is automatically switched to the TE tunnel between the SPE and another UPE at the same site, minimizing the impact on VPN services.
    • Deploy IP+VPN hybrid FRR on UPEs. If the interface of a UPE detects a fault on the link between the UPE and its connected CE, the UPE quickly switches traffic to its peer UPE, and the peer UPE then forwards the traffic to the CE.
    • Deploy VPN GR on all UPEs and SPEs to ensure uninterrupted VPN traffic forwarding during a master/backup switchover on the device transmitting VPN services.
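The division of labor in step 1 can be pictured as a longest-prefix-match exercise: each UPE carries only a default route toward its SPEs, while the SPEs carry the specific site routes. A minimal illustration in Python (prefixes taken from Table 2-21; this is a conceptual sketch, not device code):

```python
import ipaddress

def lookup(table, dest):
    """Longest-prefix-match lookup: return the next hop of the most
    specific route covering dest, or None if no route matches."""
    dest = ipaddress.ip_address(dest)
    best = None
    for prefix, nexthop in table.items():
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, nexthop)
    return best[1] if best else None

# UPE: only a default route toward its directly connected SPE.
upe_table = {"0.0.0.0/0": "Core_SPE1"}
# SPE: specific routes toward the UPEs of each site.
spe_table = {"172.18.200.64/26": "Site1_UPE1",
             "172.18.150.0/26": "Site2_UPE3"}

print(lookup(upe_table, "172.18.150.2"))  # UPE forwards via the default route
print(lookup(spe_table, "172.18.150.2"))  # SPE forwards via the specific route
```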

Data Plan

NOTE:

The data in this section is provided as an example and may vary depending on the network scale and topology.

Table 2-21  Service interfaces

| NE Role | Value | Remarks |
| --- | --- | --- |
| Site1_UPE1 | interface XGigabitEthernet1/0/4.200: 172.18.200.66/26 | - |
| Site1_UPE2 | interface XGigabitEthernet1/0/4.200: 172.18.200.67/26 | - |
| Site2_UPE3 | interface XGigabitEthernet0/0/2.150: 172.18.150.2/26 | - |
| Site2_UPE4 | interface XGigabitEthernet0/0/2.150: 172.18.150.3/26 | - |
| Site3_UPE5 | interface XGigabitEthernet0/0/2.100: 172.18.100.2/26 | - |
| Site3_UPE6 | interface XGigabitEthernet0/0/2.100: 172.18.100.3/26 | - |

Table 2-22  MPLS VPN parameters

| Parameter | Value | Remarks |
| --- | --- | --- |
| VPN instance name | vpna | - |
| RD value | UPE: 1:1; Core_SPE1: 5:1; Core_SPE2: 3:1; Core_SPE3: 4:1 | Setting the same RD value on UPEs and SPEs is recommended. If different RD values are set, VPN FRR takes effect only after you run the vpn-route cross multipath command, which allows a VPN instance to accept multiple VPNv4 routes whose RDs differ from its own. |
| RT | 0:1 | Plan the same RT on the entire network. |
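The RD remark above can be made concrete: a VPNv4 route is effectively keyed by (RD, prefix), so when SPEs use different RDs the same customer prefix survives as several distinct VPNv4 routes, which is exactly what VPN FRR needs for a backup path. A conceptual sketch (not the device's actual data structures):

```python
# A VPNv4 route is keyed by (RD, prefix); with different RDs the same
# customer prefix exists as several distinct VPNv4 routes.
vpnv4_routes = {
    ("5:1", "172.18.150.0/26"): "next hop Site2_UPE3",
    ("3:1", "172.18.150.0/26"): "next hop Site2_UPE4",
}

def paths_for(prefix, routes):
    """All VPNv4 routes that carry a given customer prefix, across RDs."""
    return sorted(nh for (rd, p), nh in routes.items() if p == prefix)

# Both copies are retained, so one can serve as the VPN FRR backup path.
print(paths_for("172.18.150.0/26", vpnv4_routes))
```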

Table 2-23  BGP parameters

All devices use BGP process ID 65000, and policy vpn-target is enabled on all of them.

| NE | Router ID | Peer group | Tunnel policy selector | Peer priority |
| --- | --- | --- | --- | --- |
| Core_SPE1 | 172.16.0.5 | devCore: 172.16.0.3, 172.16.0.4; devHost: 172.16.2.50, 172.16.2.51, 172.16.2.86, 172.16.2.87 | Deploy | - |
| Core_SPE2 | 172.16.0.3 | devCore: 172.16.0.4, 172.16.0.5; devHost: 172.16.2.50, 172.16.2.51, 172.16.2.75, 172.16.2.76 | Deploy | - |
| Core_SPE3 | 172.16.0.4 | devCore: 172.16.0.3, 172.16.0.5; devHost: 172.16.2.75, 172.16.2.76, 172.16.2.86, 172.16.2.87 | Deploy | - |
| Site1_UPE1 | 172.16.2.51 | devCore: 172.16.0.3, 172.16.0.5; devHost: 172.16.2.50 | - | Raise the peer priority of Core_SPE1 so that routes advertised by Core_SPE1 are preferred. |
| Site1_UPE2 | 172.16.2.50 | devCore: 172.16.0.3, 172.16.0.5; devHost: 172.16.2.51 | - | Raise the peer priority of Core_SPE2 so that routes advertised by Core_SPE2 are preferred. |
| Site2_UPE3 | 172.16.2.75 | devCore: 172.16.0.3, 172.16.0.4; devHost: 172.16.2.76 | - | Raise the peer priority of Core_SPE2 so that routes advertised by Core_SPE2 are preferred. |
| Site2_UPE4 | 172.16.2.76 | devCore: 172.16.0.3, 172.16.0.4; devHost: 172.16.2.75 | - | Raise the peer priority of Core_SPE3 so that routes advertised by Core_SPE3 are preferred. |
| Site3_UPE5 | 172.16.2.87 | devCore: 172.16.0.4, 172.16.0.5; devHost: 172.16.2.86 | - | Raise the peer priority of Core_SPE3 so that routes advertised by Core_SPE3 are preferred. |
| Site3_UPE6 | 172.16.2.86 | devCore: 172.16.0.4, 172.16.0.5; devHost: 172.16.2.87 | - | Raise the peer priority of Core_SPE1 so that routes advertised by Core_SPE1 are preferred. |
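The peer-priority plan in Table 2-23 relies on BGP preferring the route with the highest preferred value. A toy version of that selection, using the values applied in the Site1_UPE1 configuration later in this section (300 for the preferred SPE, 200 for the other one); this is an illustrative sketch, not the switch's full best-route algorithm:

```python
def best_route(candidates):
    """Return the candidate route with the highest BGP preferred value
    (ties broken by peer address so the result is deterministic)."""
    return max(candidates, key=lambda r: (r["preferred_value"], r["peer"]))

# Site1_UPE1's view: Core_SPE1 (172.16.0.5) is raised to 300.
candidates = [
    {"peer": "172.16.0.5", "preferred_value": 300},  # Core_SPE1
    {"peer": "172.16.0.3", "preferred_value": 200},  # Core_SPE2
]
print(best_route(candidates)["peer"])  # 172.16.0.5
```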

Configuring MP-BGP

BGP Connection Diagram

Procedure

  • Configure SPEs.

    The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar and are not shown here.

    tunnel-selector TSel permit node 9
     if-match ip next-hop ip-prefix core_nhp    //Configure a tunnel policy selector so that Core_SPE1 can iterate the route to any tunnel when the next-hop address of a VPNv4 route matches the prefix of another SPE.
    #
    tunnel-selector TSel permit node 10    //Configure a tunnel policy selector to iterate a route received from an IBGP peer to a TE tunnel when the route needs to be forwarded to another IBGP peer and Core_SPE1 needs to modify the next hop of the route to itself.
     apply tunnel-policy TE
    #
    bgp 65000
     group devCore internal    //Create an IBGP peer group.
     peer devCore connect-interface LoopBack1     //Specify loopback interface 1 and its address as the source interface and address of BGP packets.
     peer 172.16.0.3 as-number 65000    //Set up a peer relationship between SPEs.
     peer 172.16.0.3 group devCore    //Add SPEs to the IBGP peer group.
     peer 172.16.0.4 as-number 65000
     peer 172.16.0.4 group devCore
     group devHost internal
     peer devHost connect-interface LoopBack1
     peer 172.16.2.50 as-number 65000
     peer 172.16.2.50 group devHost
     peer 172.16.2.51 as-number 65000
     peer 172.16.2.51 group devHost
     peer 172.16.2.86 as-number 65000
     peer 172.16.2.86 group devHost
     peer 172.16.2.87 as-number 65000
     peer 172.16.2.87 group devHost
     #
     ipv4-family unicast
      undo synchronization
      undo peer devCore enable
      undo peer devHost enable
      undo peer 172.16.2.50 enable
      undo peer 172.16.2.51 enable
      undo peer 172.16.0.3 enable
      undo peer 172.16.0.4 enable
      undo peer 172.16.2.86 enable
      undo peer 172.16.2.87 enable
     #
     ipv4-family vpnv4
      policy vpn-target
      tunnel-selector TSel    //An SPE advertises the default route to UPEs. The SPE modifies the next hop of UPEs' routes to itself and forwards the routes to other SPEs. Therefore, configure a tunnel policy selector to iterate BGP VPNv4 routes sent to UPEs to TE tunnels and to iterate BGP VPNv4 routes sent to other SPEs to LSPs.
      peer devCore enable
      peer devCore route-policy core-import import    //Configure Core_SPE1 to filter information about all routes of sites connected to Core_SPE1 when it receives routes from other SPEs.
      peer devCore advertise-community
      peer 172.16.0.3 enable
      peer 172.16.0.3 group devCore
      peer 172.16.0.4 enable
      peer 172.16.0.4 group devCore
      peer devHost enable
      peer devHost route-policy p_iBGP_RR_in import    //Configure Core_SPE1 to filter out host routes when receiving routes from UPEs; set the preferred value of the route between Core_SPE1 and its directly connected UPEs to 300, and set the preferred value of routes between Core_SPE1 and other UPEs to 200.
      peer devHost advertise-community    //Advertise community attributes to the IBGP peer group.
      peer devHost upe    //Configure the peer devHost as a UPE.
      peer devHost default-originate vpn-instance vpna    //Send the default route of VPN instance vpna to UPEs.
      peer 172.16.2.50 enable
      peer 172.16.2.50 group devHost
      peer 172.16.2.51 enable
      peer 172.16.2.51 group devHost
      peer 172.16.2.86 enable
      peer 172.16.2.86 group devHost
      peer 172.16.2.87 enable
      peer 172.16.2.87 group devHost
     #
    #
    route-policy p_iBGP_RR_in deny node 5    //Filter out host routes of all sites.
     if-match ip-prefix deny_host
     if-match community-filter all_site
    #
    route-policy p_iBGP_RR_in permit node 11    //Set the preferred value of the route between Core_SPE1 and its directly connected UPE to 300.
     if-match community-filter site1
     apply preferred-value 300
    #
    route-policy p_iBGP_RR_in permit node 12    //Set the preferred value of the route between Core_SPE1 and another UPE to 200.
     if-match community-filter site2
     apply preferred-value 200
    #
    route-policy p_iBGP_RR_in permit node 13    //Set the preferred value of the route between Core_SPE1 and another UPE to 200.
     if-match community-filter site3
     apply preferred-value 200
    #
    route-policy p_iBGP_RR_in permit node 20    //Permit all the other routes.
    #
    route-policy core-import deny node 5    //Deny all routes of sites directly connected to Core_SPE1.
     if-match community-filter site12
    #
    route-policy core-import deny node 6    //Deny all routes of sites directly connected to Core_SPE1.
     if-match community-filter site13
    #
    route-policy core-import permit node 10    //Permit all the other routes.
    #
    ip ip-prefix deny_host index 10 permit 0.0.0.0 0 greater-equal 32 less-equal 32    //Permit all 32-bit host routes and deny all the other routes.
    ip ip-prefix core_nhp index 10 permit 172.16.0.3 32
    ip ip-prefix core_nhp index 20 permit 172.16.0.4 32    //Permit routes to 172.16.0.3/32 and 172.16.0.4/32 and deny all the other routes.
    #
    ip community-filter basic site1 permit 100:100    //Create a community attribute filter site1 and set the community attribute to 100:100.
    ip community-filter basic site2 permit 200:200
    ip community-filter basic site3 permit 300:300
    ip community-filter basic all_site permit 5720:5720
    ip community-filter basic site12 permit 12:12
    ip community-filter basic site13 permit 13:13
    #
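The filtering and preference logic of p_iBGP_RR_in above can be restated procedurally. The sketch below is an illustrative Python re-implementation of the policy's intent (nodes tried in order, all if-match clauses of a node AND-ed, first matching node wins); it is not how the switch evaluates policies internally:

```python
# Community values from the ip community-filter definitions above.
COMMUNITY = {"site1": "100:100", "site2": "200:200", "site3": "300:300",
             "all_site": "5720:5720"}

def p_iBGP_RR_in(route):
    """Sketch of the p_iBGP_RR_in route-policy: returns the (possibly
    modified) route, or None when the route is denied."""
    prefixlen, communities = route["prefixlen"], route["communities"]
    # deny node 5: filter out /32 host routes tagged with all_site
    if prefixlen == 32 and COMMUNITY["all_site"] in communities:
        return None
    # permit node 11: directly connected site -> preferred value 300
    if COMMUNITY["site1"] in communities:
        return dict(route, preferred_value=300)
    # permit nodes 12/13: other sites -> preferred value 200
    if COMMUNITY["site2"] in communities or COMMUNITY["site3"] in communities:
        return dict(route, preferred_value=200)
    # permit node 20: everything else passes unchanged
    return route

host = {"prefixlen": 32, "communities": {"100:100", "5720:5720"}}
subnet = {"prefixlen": 26, "communities": {"100:100", "5720:5720"}}
print(p_iBGP_RR_in(host))                        # None: host route filtered out
print(p_iBGP_RR_in(subnet)["preferred_value"])   # 300: directly connected site
```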

  • Configure UPEs.

    The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar and are not shown here.

    bgp 65000
     group devCore internal
     peer devCore connect-interface LoopBack1
     peer 172.16.0.3 as-number 65000
     peer 172.16.0.3 group devCore
     peer 172.16.0.5 as-number 65000
     peer 172.16.0.5 group devCore
     group devHost internal
     peer devHost connect-interface LoopBack1
     peer 172.16.2.50 as-number 65000
     peer 172.16.2.50 group devHost
     #
     ipv4-family unicast
      undo synchronization
      undo peer devCore enable
      undo peer devHost enable
      undo peer 172.16.2.50 enable
      undo peer 172.16.0.3 enable
      undo peer 172.16.0.5 enable
     #
     ipv4-family vpnv4
      policy vpn-target
      peer devCore enable
      peer devCore route-policy p_iBGP_host_ex export    //Configure the community attribute of routes that Site1_UPE1 sends to SPEs.
      peer devCore advertise-community
      peer 172.16.0.3 enable
      peer 172.16.0.3 group devCore
      peer 172.16.0.3 preferred-value 200    //Set the preferred value of the route between Site1_UPE1 and Core_SPE2 to 200.
      peer 172.16.0.5 enable
      peer 172.16.0.5 group devCore
      peer 172.16.0.5 preferred-value 300    //Set the priority of Core_SPE1 to 300 so that Site1_UPE1 preferentially selects routes advertised from Core_SPE1.
      peer devHost enable
      peer devHost advertise-community
      peer 172.16.2.50 enable
      peer 172.16.2.50 group devHost
     #
    #
    route-policy p_iBGP_host_ex permit node 0    //Add the community attribute for the route.
     apply community 100:100 5720:5720 12:12
    #
    

Checking the Configuration
  • Run the display bgp vpnv4 all peer command to check the BGP VPNv4 peer status.

    Using Core_SPE1 as an example, if the value of State is Established, BGP peer relationships have been set up successfully.

    [Core_SPE1]display bgp vpnv4 all peer
    
     BGP local router ID : 172.16.0.5
     Local AS number : 65000
     Total number of peers : 4                Peers in established state : 4
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State PrefRcv
    
      172.16.2.51     4       65000     2102     1859     0 20:55:17 Established     550
      172.16.2.86     4       65000     3673     2989     0 0026h03m Established     550
      172.16.0.3      4       65000     1659     1462     0 20:57:05 Established     200
      172.16.0.4      4       65000     3421     2494     0 0026h03m Established     200
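If you collect this display bgp vpnv4 all peer output through automation, a quick health check is to confirm that every peer row reports Established. A rough parser for the column layout shown above (a sketch; adjust the field positions if your output format differs):

```python
# Sample lines in the layout of the display output above.
OUTPUT = """\
  Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State PrefRcv
  172.16.2.51     4       65000     2102     1859     0 20:55:17 Established     550
  172.16.0.3      4       65000     1659     1462     0 20:57:05 Established     200
"""

def unestablished_peers(text):
    """Return the peers whose BGP state is not Established."""
    bad = []
    for line in text.splitlines():
        fields = line.split()
        # Peer rows start with a dotted IPv4 address; skip the header.
        if fields and fields[0].count(".") == 3:
            peer, state = fields[0], fields[7]
            if state != "Established":
                bad.append(peer)
    return bad

print(unestablished_peers(OUTPUT))  # empty list when all sessions are up
```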

Configuring an L3VPN

Context

VPN instances need to be configured to advertise VPNv4 routes and forward data so that devices can communicate over an L3VPN.

Procedure

  • Configure SPEs.

    The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar and are not shown here.

    ip vpn-instance vpna    //Create a VPN instance.
     ipv4-family
      route-distinguisher 5:1    //Configure an RD.
      tnl-policy TSel    //Configure a TE tunnel for the VPN instance.
      vpn-target 0:1 export-extcommunity    //Configure the extended community attribute VPN target.
      vpn-target 0:1 import-extcommunity
    #
    bgp 65000
     #
     ipv4-family vpnv4
      nexthop recursive-lookup delay 10    //Set the next-hop iteration delay to 10s.
      route-select delay 120    //Set the route selection delay to 120s, preventing traffic interruption caused by fast route switchback.
     #
      ipv4-family vpn-instance vpna
      default-route imported    //Import the default route to VPN instance vpna.
      nexthop recursive-lookup route-policy delay_policy    //Configure BGP next-hop iteration based on the routing policy delay_policy.
      nexthop recursive-lookup delay 10
      route-select delay 120
    #
    route-policy delay_policy permit node 0    //Permit routes of all sites.
     if-match community-filter all_site
    #
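The vpn-target lines above follow the standard RT matching rule: a VPNv4 route is imported into a VPN instance when the route's export RTs and the instance's import RTs intersect. Since the whole network plans RT 0:1, every instance imports every vpna route. A one-line sketch of the rule:

```python
def imports(route_export_rts, instance_import_rts):
    """A VPN instance imports a VPNv4 route when the route's export RTs
    and the instance's import RTs share at least one value."""
    return bool(set(route_export_rts) & set(instance_import_rts))

# The whole network plans RT 0:1, so every instance imports every route.
print(imports({"0:1"}, {"0:1"}))  # True
print(imports({"0:2"}, {"0:1"}))  # False: no RT in common
```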

  • Configure UPEs.

    The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar and are not shown here.

    arp vlink-direct-route advertise    //Advertise IPv4 ARP Vlink direct routes.
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 1:1
      tnl-policy TSel
      arp vlink-direct-route advertise
      vpn-target 0:1 export-extcommunity
      vpn-target 0:1 import-extcommunity
    #
    interface XGigabitEthernet1/0/4
     port link-type trunk
     undo port trunk allow-pass vlan 1
    #
    interface XGigabitEthernet1/0/4.200
     dot1q termination vid 200
     ip binding vpn-instance vpna    //Bind the VPN instance to the corresponding service interface.
     arp direct-route enable    //Configure the ARP module to report ARP Vlink direct routes to the RM module.
     ip address 172.18.200.66 255.255.255.192
     arp broadcast enable    //Enable ARP broadcast of a VLAN tag termination sub-interface.
    #
    bgp 65000
     #
     ipv4-family vpnv4
      route-select delay 120
     #
     ipv4-family vpn-instance vpna
      default-route imported
      import-route direct route-policy p_iBGP_RR_ex    //Import direct routes to VPN instance vpna and add the community attribute.
      route-select delay 120
     #
    #
    route-policy p_iBGP_RR_ex permit node 0    //Add the community attribute for the route.
     apply community 100:100 5720:5720 12:12
    #
    arp expire-time 62640    //Set the aging time of dynamic ARP entries.
    arp static 172.18.200.68 0001-0002-0003 vid 200 interface XGigabitEthernet1/0/4.200    //Configure a static ARP entry.
    #
    NOTE:

    Since V200R010C00, dynamic ARP is supported to meet reliability requirements in this scenario. Perform the following operations to implement dynamic ARP:

    • Run the arp learning passive enable command in the system view to enable passive ARP.
    • Run the arp auto-scan enable command in the sub-interface view to enable ARP automatic scanning on the sub-interface.

    After the preceding configuration is complete, you do not need to configure the aging time of dynamic ARP entries and static ARP entries.
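The import-route direct route-policy p_iBGP_RR_ex step above tags every direct route the UPE injects with three communities, which the SPE-side policies in the MP-BGP section key on: the site tag, the network-wide all_site tag, and the SPE-pair tag. An illustrative model of that tagging (not device code):

```python
def apply_export_policy(route):
    """Sketch of p_iBGP_RR_ex node 0: tag the route with the site
    community, the network-wide host-route tag, and the SPE-pair tag."""
    tagged = dict(route)
    tagged["communities"] = set(route.get("communities", set())) | {
        "100:100",    # site1 tag: identifies the originating site
        "5720:5720",  # all_site tag: lets SPEs filter /32 host routes
        "12:12",      # site12 tag: lets SPEs drop returning local routes
    }
    return tagged

r = apply_export_policy({"prefix": "172.18.200.64/26"})
print(sorted(r["communities"]))
```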

Configuring Reliability Protection

Configuration Roadmap

The configuration roadmap is as follows:

  1. Deploy VRRP on two UPEs at a site to ensure reliability of uplink traffic on CEs. Site1 is used as an example, as shown in Figure 2-20.

    • Configure Site1_UPE1 as the master node and Site1_UPE2 as the backup node in a VRRP group. If Site1_UPE1 is faulty, uplink traffic on CE1 will be quickly switched to Site1_UPE2.

    • Configure BFD for VRRP so that hardware-based BFD can quickly detect faults. When a fault is detected, the hardware notifies the backup device in the VRRP group to become the master and directly sends gratuitous ARP packets to instruct access-layer devices to forward traffic to the new master device.

    • Configure backup devices to forward service traffic. When the VRRP status of a device is Backup, the device can forward traffic as long as it receives traffic. This prevents service traffic loss and shortens service interruption time if the aggregation device is faulty.

    NOTE:

    If the number of VRRP groups exceeds the device default value, run the set vrrp max-group-number max-group-number command on the UPEs to set the maximum number of allowed VRRP groups.

    Figure 2-20  VRRP between two UPEs

  2. Deploy VPN FRR on a UPE. If the TE tunnel between the UPE and an SPE is faulty, traffic is automatically switched to the TE tunnel between the UPE and another SPE at the same site. Site1_UPE1 is used as an example, as shown in Figure 2-21.

    Site1_UPE1 has two TE tunnels to Core_SPE1 and Core_SPE2 respectively. Deploying VPN FRR on Site1_UPE1 ensures that traffic is quickly switched to Core_SPE2 if Core_SPE1 is faulty.

    Figure 2-21  VPN FRR from an aggregation device to a core device

  3. Deploy VPN FRR on an SPE, for example Core_SPE1. If Core_SPE2 connected to Core_SPE1 is faulty, Core_SPE1 switches VPN services to Core_SPE3, implementing fast E2E switchovers of VPN services, as shown in Figure 2-22.

    Figure 2-22  VPN FRR between core devices

  4. Deploy VPN FRR on an SPE. If the TE tunnel between the SPE and a UPE is faulty, traffic is automatically switched to the TE tunnel between the SPE and another UPE at the same site. Core_SPE2 is used as an example, as shown in Figure 2-23.

    Core_SPE2 has two TE tunnels to Site2_UPE3 and Site2_UPE4 respectively. Deploying VPN FRR on Core_SPE2 ensures that traffic is quickly switched to Site2_UPE4 if Site2_UPE3 is faulty.

    Figure 2-23  VPN FRR from a core device to an aggregation device

  5. Deploy IP+VPN hybrid FRR on UPEs. If the interface of a UPE detects a fault on the link between the UPE and its connected CE, the UPE quickly switches traffic to its peer UPE, and the peer UPE then forwards the traffic to the CE. Site2 is used as an example, as shown in Figure 2-24.

    If the link from Site2_UPE3 to CE2 is faulty, traffic is forwarded to Site2_UPE4 through an LSP and then to CE2 using a private IP address, improving network reliability.

    Figure 2-24  Deployment of IP+VPN hybrid FRR on UPEs

  6. Deploy VPN GR on all UPEs and SPEs to ensure uninterrupted VPN traffic forwarding during a master/backup switchover on the device transmitting VPN services.
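The common thread in steps 2 to 5 is that FRR pre-installs a backup next hop beside the primary one, so a failure triggers a local flip instead of a routing re-convergence. A minimal conceptual model (illustrative only):

```python
class FrrEntry:
    """Forwarding entry with a pre-installed backup next hop (FRR)."""
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup
        self.primary_up = True

    def next_hop(self):
        # The backup is already installed, so the switchover is a
        # local flip rather than a route recomputation.
        return self.primary if self.primary_up else self.backup

entry = FrrEntry(primary="Core_SPE2", backup="Core_SPE3")
print(entry.next_hop())   # Core_SPE2
entry.primary_up = False  # primary path fails (e.g. TE tunnel down)
print(entry.next_hop())   # Core_SPE3
```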

Procedure

  • Configure SPEs.

    The following uses the configuration of Core_SPE1 on the core ring as an example. The configurations of Core_SPE2 and Core_SPE3 are similar and are not shown here.

    bgp 65000
     graceful-restart    //Enable BGP GR.
     #
     ipv4-family vpnv4
      auto-frr    //Enable VPNv4 FRR.
      bestroute nexthop-resolved tunnel    //Configure the system to select a VPNv4 route only when the next hop is iterated to a tunnel, preventing packet loss during a revertive switchover.
     #
      ipv4-family vpn-instance vpna
      auto-frr    //Enable VPN auto FRR.
      vpn-route cross multipath    //Allow the VPN instance to accept multiple VPNv4 routes whose RDs differ from its own, so that VPN FRR takes effect.
    #

  • Configure UPEs.

    The following uses the configuration of Site1_UPE1 as an example. The configurations of Site1_UPE2, Site2_UPE3, Site2_UPE4, Site3_UPE5, and Site3_UPE6 are similar and are not shown here.

    ip vpn-instance vpna
     ipv4-family
      ip frr route-policy mixfrr    //Enable IP FRR.
    #
    interface XGigabitEthernet1/0/4.200
     vrrp vrid 1 virtual-ip 172.18.200.65    //Configure VRRP.
     vrrp vrid 1 preempt-mode timer delay 250    //Set the preemption delay of switches in a VRRP group.
     vrrp vrid 1 track bfd-session 2200 peer    //Enable BFD for VRRP to implement master/backup switchovers.
     vrrp vrid 1 backup-forward    //Enable the backup device to forward service traffic.
     vrrp track bfd gratuitous-arp send enable    //Enable BFD for VRRP to quickly send gratuitous ARP packets during master/backup switchovers.
    #
    bfd vrrp-1 bind peer-ip 172.18.200.67 vpn-instance vpna interface XGigabitEthernet1/0/4.200 source-ip 172.18.200.66    //Configure static BFD for VRRP.
     discriminator local 2200    //Set the local discriminator. The local discriminator of the local system must be the same as the remote discriminator of the remote system.
     discriminator remote 1200    //Set the remote discriminator.
     detect-multiplier 8    //Set the local detection multiplier of BFD.
     min-tx-interval 3    //Set the minimum interval at which the local device sends BFD packets to 3.3 ms.
     min-rx-interval 3    //Set the minimum interval at which the local device receives BFD packets to 3.3 ms.
     commit    //Commit the BFD session configuration.
    #
    bgp 65000
     graceful-restart 
     #
     ipv4-family vpn-instance vpna
      auto-frr
     #
    #
    route-policy mixfrr permit node 0    //Set the backup next hop to the loopback interface 1 of another UPE at the same site.
     apply backup-nexthop 172.16.2.50
    #
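With the BFD timers configured above, the worst-case failure detection time is roughly the detect multiplier times the negotiated interval (the larger of the local min-rx and the remote min-tx values). For 3.3 ms intervals and a multiplier of 8 that is about 26.4 ms, which is why BFD for VRRP switches over far faster than VRRP's own timers would. A quick calculation:

```python
def bfd_detection_time_ms(local_min_rx_ms, remote_min_tx_ms, remote_multiplier):
    """Worst-case BFD detection time: the remote detect multiplier times
    the negotiated interval (max of local min-rx and remote min-tx)."""
    return remote_multiplier * max(local_min_rx_ms, remote_min_tx_ms)

# Both ends configured as above: 3.3 ms intervals, detect multiplier 8.
print(bfd_detection_time_ms(3.3, 3.3, 8))  # about 26.4 ms
```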

Checking the Configuration
  • Run the display ip routing-table vpn-instance command on SPEs to check the VPN FRR status from SPEs to UPEs.

    The command output on Core_SPE2 is used as an example. The BkNextHop, BkInterface, BkLabel, and BkPETunnelID fields carry the backup next hop, backup interface, backup label, and backup tunnel ID. The command output shows that the VPN FRR entry from Core_SPE2 to a UPE has been generated.

    [Core_SPE2]display ip routing-table vpn-instance vpna 172.18.150.4 verbose
    Route Flags: R - relay, D - download to fib, T - to vpn-instance
    ------------------------------------------------------------------------------
    Routing Table : 1
    Summary Count : 1
    
    Destination: 172.18.150.0/26
         Protocol: IBGP             Process ID: 0
       Preference: 255                    Cost: 0
          NextHop: 172.16.2.75       Neighbour: 172.16.2.75
            State: Active Adv Relied       Age: 21h55m50s
              Tag: 0                  Priority: low
            Label: 1025                QoSInfo: 0x0
       IndirectID: 0x185            
     RelayNextHop: 0.0.0.0           Interface: Tunnel111
         TunnelID: 0x2                   Flags: RD
        BkNextHop: 172.16.2.76     BkInterface: Tunnel121
          BkLabel: 1024            SecTunnelID: 0x0              
     BkPETunnelID: 0x3         BkPESecTunnelID: 0x0              
     BkIndirectID: 0xd
  • Run the display ip routing-table vpn-instance command on UPEs to check the hybrid FRR status.

    The command output on Site2_UPE3 is used as an example. The BkNextHop, BkInterface, BkLabel, and BkPETunnelID fields carry the backup next hop, backup interface, backup label, and backup tunnel ID. The output shows that the hybrid FRR entry has been generated: the primary route points to the local sub-interface, and the backup route points to the UPE with IP address 172.16.2.76 at the same site.

    [Site2_UPE3]display ip routing-table vpn-instance vpna 172.18.150.4 verbose
    Route Flags: R - relay, D - download to fib, T - to vpn-instance
    ------------------------------------------------------------------------------
    Routing Table : 1
    Summary Count : 2
    
    Destination: 172.18.150.4/32
         Protocol: Direct           Process ID: 0
       Preference: 0                      Cost: 0
          NextHop: 172.18.150.4      Neighbour: 0.0.0.0
            State: Active Adv              Age: 1d02h36m21s
              Tag: 0                  Priority: high
            Label: NULL                QoSInfo: 0x0
       IndirectID: 0x0              
     RelayNextHop: 0.0.0.0           Interface: XGigabitEthernet0/0/2.150
         TunnelID: 0x0                   Flags:  D
        BkNextHop: 172.16.2.76     BkInterface: XGigabitEthernet0/0/4
          BkLabel: 1024            SecTunnelID: 0x0              
     BkPETunnelID: 0x4800001b  BkPESecTunnelID: 0x0              
     BkIndirectID: 0x0       
    
    Destination: 172.18.150.4/32
         Protocol: IBGP             Process ID: 0
       Preference: 255                    Cost: 0
          NextHop: 172.16.2.76       Neighbour: 172.16.2.76
            State: Inactive Adv Relied     Age: 1d02h36m21s
              Tag: 0                  Priority: low
            Label: 1024                QoSInfo: 0x0
       IndirectID: 0xcd             
     RelayNextHop: 172.16.8.181      Interface: XGigabitEthernet0/0/4
         TunnelID: 0x4800001b            Flags: R
  • Run the display vrrp interface command to check the VRRP status.

    The command output on Site2_UPE3 is used as an example. The State, Backup-forward, and Track BFD fields show that the VRRP status of Site2_UPE3 is Master, the backup device has been configured to forward service traffic, and BFD for VRRP has been configured.

    [Site2_UPE3]display vrrp interface XGigabitEthernet0/0/2.150
      XGigabitEthernet0/0/2.150 | Virtual Router 1
        State : Master
        Virtual IP : 172.18.150.1
        Master IP : 172.18.150.2
        PriorityRun : 100
        PriorityConfig : 100
        MasterPriority : 100
        Preempt : YES   Delay Time : 250 s
        TimerRun : 1 s
        TimerConfig : 1 s
        Auth type : NONE
        Virtual MAC : 0000-5e00-0101
        Check TTL : YES
        Config type : normal-vrrp
        Backup-forward : enabled
        Track BFD : 1150  type: peer 
        BFD-session state : UP
        Create time : 2016-05-21 11:02:27
        Last change time : 2016-05-21 11:02:55
Updated: 2019-04-20

Document ID: EDOC1000069520