
Configuration Guide - MPLS

S7700 and S9700 V200R013C00

This document describes the configurations of MPLS, including Static LSP, MPLS LDP, MPLS QoS, MPLS TE, MPLS OAM, and Seamless MPLS.
LDP Reliability

Overview of LDP Reliability

Reliability technologies ensure the availability of LSPs established through LDP.

LDP LSP reliability technologies are necessary for the following reasons:
  • If a node or link on a working LDP LSP fails, reliability technologies are used to establish a backup LDP LSP and switch traffic to the backup LDP LSP, while minimizing packet losses in the process.

  • If a node on a working LDP LSP encounters a control plane failure but the forwarding plane is still working, reliability technologies ensure traffic forwarding during fault recovery on the control plane.

MPLS provides multiple reliability technologies to ensure high reliability for key services transmitted over LDP LSPs. The following table describes the LDP reliability technologies.
Table 3-4  LDP reliability technologies

  • Fault detection: Rapidly detects faults on LDP LSPs of an MPLS network and triggers protection switching.

  • Traffic protection:
    • Ensures that traffic is switched to the backup LDP LSP with minimal packet loss when the working LDP LSP fails.
    • Ensures nonstop forwarding on the forwarding plane when the control plane fails on a node.

BFD for LDP LSP

Bidirectional Forwarding Detection (BFD) improves network reliability by quickly detecting LDP LSP faults and triggering traffic switchover upon LDP LSP faults.

Background

If a node or link along the working LDP LSP fails, traffic is switched to the backup LSP. The path switchover time depends on how quickly the fault is detected and how quickly traffic is switched; a slow switchover causes long-time traffic loss. LDP FRR ensures fast traffic switching, but LDP's native fault detection mechanism is slow, so switching between the primary and backup LDP LSPs takes a relatively long time and causes traffic loss.

Figure 3-9 shows fault detection through the exchange of Hello messages.

Figure 3-9  Fault detection through the exchange of Hello messages

In Figure 3-9, an LSR periodically sends Hello messages to its neighboring LSRs to advertise its presence on the network and maintain adjacencies. An LSR creates a Hello timer for each neighbor to maintain the adjacency. Each time the LSR receives a Hello message, it resets the Hello timer. If the Hello timer expires before a new Hello message arrives, the LSR considers the adjacency terminated. This mechanism cannot detect link faults quickly, especially when a Layer 2 device is deployed between the LSRs.
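The Hello-timer behavior described above can be sketched as follows. This is a minimal illustration only: the `HelloAdjacency` class, its method names, and the 15-second hold time are assumptions for the example, not the switch's actual implementation or default values.

```python
class HelloAdjacency:
    """Minimal sketch of LDP Hello-based adjacency tracking.

    The hold time and the class/method names are illustrative
    assumptions, not the product's implementation or defaults.
    """

    def __init__(self, hold_time_s=15.0):
        self.hold_time_s = hold_time_s
        self.last_hello = None  # time of the most recent Hello message

    def on_hello(self, now):
        # Each received Hello message resets the per-neighbor Hello timer.
        self.last_hello = now

    def is_up(self, now):
        # The adjacency is considered terminated once the timer expires
        # before a new Hello message arrives.
        return (self.last_hello is not None
                and now - self.last_hello < self.hold_time_s)


adj = HelloAdjacency(hold_time_s=15.0)
adj.on_hello(now=0.0)
print(adj.is_up(now=10.0))  # True: a Hello arrived within the hold time
print(adj.is_up(now=20.0))  # False: timer expired, adjacency torn down
```

Because detection depends on waiting out this hold time, a failure between Hello intervals goes unnoticed for seconds, which is what motivates BFD below.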

BFD for LDP LSP quickly detects faults on an LDP LSP and triggers a traffic switchover upon LDP LSP failures, minimizing packet losses and improving network reliability.

Implementation

BFD for LDP LSP rapidly detects faults on LDP LSPs and notifies the forwarding plane of the fault to ensure fast traffic switchover.

The implementation process is as follows:

  1. A BFD session is bound to an LSP established between ingress and egress nodes.

  2. A BFD packet is sent from the ingress node to the egress node along an LSP.

  3. The egress node responds to the BFD packet, allowing the ingress node to quickly detect the LSP status.

  4. After BFD detects an LSP failure, it notifies the forwarding plane.

  5. The forwarding plane switches traffic to the backup LSP.
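The five steps above can be sketched as follows. This is an illustrative sketch: the `BfdSession` class, the detect multiplier of 3, and the LSP names are assumptions for the example, not BFD's actual packet format or the platform's defaults.

```python
class BfdSession:
    """Sketch of BFD for LDP LSP on the ingress node.

    The class, the detect multiplier of 3, and the LSP names are
    assumptions for this example, not BFD's actual wire format.
    """

    def __init__(self, detect_mult=3):
        self.detect_mult = detect_mult  # consecutive misses before DOWN
        self.missed = 0
        self.state = "UP"

    def on_reply(self):
        # Step 3: the egress node's reply confirms the LSP is alive.
        self.missed = 0
        self.state = "UP"

    def on_interval_no_reply(self):
        # A missed reply counts toward the failure threshold (step 4).
        self.missed += 1
        if self.missed >= self.detect_mult:
            self.state = "DOWN"
        return self.state


forwarding = {"active": "primary-lsp", "backup": "backup-lsp"}
session = BfdSession(detect_mult=3)
for _ in range(3):
    state = session.on_interval_no_reply()
if state == "DOWN":
    # Step 5: the forwarding plane switches traffic to the backup LSP.
    forwarding["active"] = forwarding["backup"]
print(forwarding["active"])  # backup-lsp
```

The point of the sketch is the division of labor: BFD only detects and declares the state change; the switchover itself is performed by the forwarding plane against a backup entry that already exists.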

The following figure shows quick fault detection using BFD for LDP LSP.

Figure 3-10  BFD for LDP LSP

Synchronization Between LDP and IGP

Synchronization between LDP and IGP ensures consistent IGP and LDP traffic by suppressing IGP route advertisement. This minimizes packet loss and improves network reliability.

Background
Because LDP convergence is slower than IGP route convergence, the following problems occur on an MPLS network where primary and backup links exist:
  • When a primary link fails, both the IGP route and the LSP are switched to the backup link (through LDP FRR). After the primary link recovers, the IGP route is switched back to the primary link before LDP convergence is complete. As a result, traffic is dropped during attempts to use the not-yet-reestablished LSP over the primary link.
  • When the IGP route of the primary link is reachable but the LDP session between nodes on the primary link fails, traffic is still directed along the IGP route of the primary link, while the LSP over the primary link is torn down. Because the IGP does not prefer the route of the backup link, an LSP cannot be established over the backup link, causing traffic loss.
  • When a primary/backup switchover occurs on a node, the LDP session is established only after IGP GR is complete. IGP advertises the maximum cost of the link, causing route flapping.

Synchronization between LDP and IGP helps prevent traffic loss caused by these problems.

Related Concepts
Synchronization between LDP and IGP involves three timers:
  • Hold-down timer: controls the period during which establishment of IGP neighbor relationships is suppressed.

  • Hold-max-cost timer: controls the period during which the maximum link cost is advertised on an interface.

  • Delay timer: controls the period of time before LSP establishment.

Implementation
  • Figure 3-11 shows the implementation of switching between primary/backup links.

    Figure 3-11  Switching between primary/backup links

    Synchronization between LDP and IGP is implemented as follows:

    • The primary link recovers from a physical fault.
      1. The faulty link between LSR_2 and LSR_3 recovers.

      2. IGP starts the Hold-down timer to suppress establishment of the IGP neighbor relationship over the recovered link.

      3. Traffic keeps traveling through the backup LSP.

      4. LSR_2 and LSR_3 discover each other as LDP peers and reestablish an LDP session (over the path LSR_2 -> LSR_4 -> LSR_5 -> LSR_3). LSR_2 and LSR_3 send Label Mapping messages to each other to establish an LSP and instruct IGP to start synchronization.

      5. IGP establishes a neighbor relationship and switches traffic back to the primary link. The LSP is reestablished and its route converges on the primary link.

    • The IGP route of the primary link is reachable but the LDP session is Down.
      1. An LDP session between nodes along the primary link becomes Down.

      2. LDP notifies IGP of the session fault on the primary link. IGP starts the Hold-max-cost timer and advertises the maximum cost on the primary link.

      3. The IGP route of the backup link becomes reachable.

      4. An LSP is established over the backup link and the LDP module on LSR_2 delivers forwarding entries.

      The Hold-max-cost timer can be configured to always advertise the maximum cost of the primary link. This allows traffic to keep flowing through the backup link until the LDP session over the primary link is reestablished.
  • Figure 3-12 shows synchronization between LDP and IGP upon a primary/backup switchover on a node.
    Figure 3-12  Primary/backup switchover on a node

    Synchronization between LDP and IGP is implemented as follows:

    1. IGP on the GR Restarter advertises the actual cost of the primary link and starts the GR Delay timer. The GR Restarter does not end the GR process before the GR Delay timer expires; an LDP session is established during this period.

    2. Before the GR Delay timer expires, the GR Helper retains the original IGP route and the LSP. If the LDP session goes Down, LDP does not notify the IGP link that the session is Down. In this case, IGP still advertises the actual link cost, ensuring the IGP route is not switched to the backup link. If the GR Delay timer expires, GR is complete. If the LDP session is not established, IGP starts the Hold-max-cost timer and advertises the maximum cost of the primary link, so the IGP route is switched to the backup link.

    3. If the LDP session is established or the Hold-max-cost timer expires, IGP resumes the actual link cost of the interface and then switches the IGP route back to the primary link.
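The cost-advertisement rule shared by the scenarios above can be sketched as follows. This is an illustrative sketch: the function name and the maximum cost value of 65535 are assumptions for the example, not the platform's actual values.

```python
MAX_COST = 65535  # illustrative "maximum link cost"; not the real platform value


def advertised_cost(ldp_session_up, hold_max_cost_running, real_cost):
    """Sketch of the LDP-IGP synchronization cost rule.

    While the LDP session on the primary link is Down and the
    Hold-max-cost timer is still running, IGP advertises the maximum
    cost so that routes prefer the backup link. Once the session comes
    up (or the timer expires), the actual cost is restored and the IGP
    route switches back to the primary link.
    """
    if not ldp_session_up and hold_max_cost_running:
        return MAX_COST
    return real_cost


# LDP session Down, timer running: the backup link is preferred.
print(advertised_cost(False, True, real_cost=10))  # 65535
# Session reestablished (or timer expired): the primary link is preferred again.
print(advertised_cost(True, False, real_cost=10))  # 10
```

Advertising an inflated cost, rather than withdrawing the route, keeps the primary link usable as a last resort while steering converged traffic onto the backup link.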

LDP FRR

LDP fast reroute (FRR) provides link backup on an MPLS network. When the primary LSP fails, traffic is quickly switched to the backup LSP, minimizing traffic loss.

Background

On an MPLS network, when the primary link fails, IP FRR ensures fast IGP route convergence and switches traffic to the backup link. However, a new LSP must then be established, which causes traffic loss. If the LSP fails for a reason other than a primary link failure, traffic is not restored until a new LSP is established, causing a long traffic interruption. LDP FRR addresses these issues on an MPLS network.

LDP FRR, using the liberal label retention mode of LDP, obtains a liberal label, assigns a forwarding entry to the label, and delivers the forwarding entry to the forwarding plane as the backup forwarding entry for the primary LSP. When the interface goes Down (detected by the interface itself or by BFD) or the primary LSP fails (detected by BFD), traffic is quickly switched to the backup LSP.

Concepts
LDP FRR protects LSPs in two modes:
  • Manual LDP FRR: The outbound interface and next hop of the backup LSP must be specified using a command. When the source of the liberal label matches the outbound interface and next hop, a backup LSP can be established and its forwarding entry can be delivered.

  • Auto LDP FRR: This automatic approach depends on IP FRR. A backup LSP can be established and its forwarding entry can be delivered only when the source of the liberal label matches the backup route. That is, the liberal label is obtained from the outbound interface and next hop of the backup route, the backup LSP triggering conditions are met, and there is no backup LSP manually configured based on the backup route. By default, LDP LSP setup is triggered by a 32-bit backup route.

When both Manual LDP FRR and Auto LDP FRR meet the establishment conditions, the Manual LDP FRR backup LSP is established preferentially.
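The selection between the two modes can be sketched as follows. The data structures are hypothetical: the field names `out_if` and `next_hop`, the interface names, and the label values are assumptions for the example.

```python
def select_backup_lsp(liberal_labels, manual_backup, backup_route):
    """Sketch of LDP FRR backup-LSP selection.

    A manually configured backup LSP is preferred. Otherwise, Auto LDP
    FRR uses the liberal label whose source matches the backup route's
    outbound interface and next hop.
    """
    if manual_backup is not None:
        return manual_backup
    for label in liberal_labels:
        if (label["out_if"], label["next_hop"]) == backup_route:
            return label
    return None


liberal_labels = [
    {"label": 1025, "out_if": "GE0/0/2", "next_hop": "LSR_3"},
    {"label": 1026, "out_if": "GE0/0/3", "next_hop": "LSR_4"},
]
# Auto LDP FRR: no manual backup, so match against the backup route.
chosen = select_backup_lsp(liberal_labels, None, ("GE0/0/2", "LSR_3"))
print(chosen["label"])  # 1025
```

With a manual backup configured, the same call would return the manual entry regardless of the liberal labels, reflecting the preference rule above.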

Implementation

In liberal label retention mode, an LSR can receive a Label Mapping message for an FEC from any neighboring LSR, but normally only the Label Mapping message sent by the next hop of the FEC is used to generate a label forwarding entry for LSP setup. In contrast, LDP FRR can generate an LSP, as the backup of the primary LSP, from Label Mapping messages that do not come from the next hop of the FEC. LDP FRR establishes a forwarding entry for the backup LSP in advance and adds it to the forwarding table. If the primary LSP fails, traffic is quickly switched to the backup LSP, minimizing traffic loss.

Figure 3-13  LDP FRR - triangle topology

In Figure 3-13, the optimal route from LSR_1 to LSR_2 is LSR_1-LSR_2, and a suboptimal route is LSR_1-LSR_3-LSR_2. After receiving a label from LSR_3, LSR_1 compares the label's source with the route from LSR_1 to LSR_2. Because LSR_3 is not the next hop of this route, LSR_1 stores the label as a liberal label. If a route is available to the source of the liberal label, LSR_1 assigns a forwarding entry to the liberal label as the backup forwarding entry and delivers it to the forwarding plane together with that of the primary LSP. In this way, the primary LSP is associated with the backup LSP.

LDP FRR is triggered when an interface failure is detected by the interface itself or BFD, or a primary LSP failure is detected by BFD. After LDP FRR is complete, traffic is switched to the backup LSP using the backup forwarding entry. Then the route is converged from LSR_1-LSR_2 to LSR_1-LSR_3-LSR_2. An LSP is established on the new path (the original backup LSP) and the original primary LSP is deleted. Traffic is forwarded along the new LSP of LSR_1-LSR_3-LSR_2.

Usage Scenario

Figure 3-13 shows a typical application environment of LDP FRR. LDP FRR functions well in a triangle topology but may not take effect in some situations in a rectangle topology.

Figure 3-14  LDP FRR - rectangle topology

As shown in Figure 3-14, if the optimal route from LSR_1 to LSR_4 is LSR_1-LSR_2-LSR_4 (with no other route for load balancing), LSR_3 receives a liberal label from LSR_1 and can use it for LDP FRR. If the link between LSR_3 and LSR_4 fails, traffic is switched to the route LSR_3-LSR_1-LSR_2-LSR_4. No loop occurs in this situation.

However, if two routes from LSR_1 to LSR_4 are available for load balancing (LSR_1-LSR_2-LSR_4 and LSR_1-LSR_3-LSR_4), LSR_3 may not receive a liberal label from LSR_1 because LSR_3 is a downstream node of LSR_1. Even if LSR_3 receives a liberal label and is configured with LDP FRR, traffic may still be forwarded back to LSR_3 after the traffic switchover, creating a loop. The loop persists until the route from LSR_1 to LSR_4 converges to LSR_1-LSR_2-LSR_4.

LDP GR

LDP Graceful Restart (GR) ensures uninterrupted traffic transmission during a protocol restart or a primary/backup switchover because the forwarding plane is separated from the control plane.

Background

On an MPLS network, when the GR Restarter restarts a protocol or performs a primary/backup switchover, label forwarding entries on the forwarding plane are deleted, interrupting data forwarding.

LDP GR addresses this issue and therefore improves network reliability. During a protocol restart or primary/backup switchover, LDP GR retains label forwarding entries because the forwarding plane is separated from the control plane. The switch still forwards packets based on the label forwarding entries, ensuring data transmission. After the protocol restart or primary/backup switchover is complete, the GR Restarter can restore to the original state with the help of the GR Helper.

Related Concepts
LDP GR is a high-reliability technology based on non-stop forwarding (NSF). The GR process involves GR Restarter and GR Helper devices:
  • GR Restarter has GR capability.
  • GR Helper is a GR-capable neighbor of the GR Restarter.
LDP GR uses the following timers:
  • Forwarding State Holding timer: specifies the duration of the LDP GR process.
  • Reconnect timer: controls the time the GR Helper waits for LDP session reestablishment. After a protocol restart or primary/backup switchover occurs on the GR Restarter, the GR Helper detects the LDP session as Down. The GR Helper then starts this timer to wait for the LDP session to be reestablished.
  • Recovery timer: controls the time the GR Helper waits for LSP recovery. After the LDP session is reestablished, the GR Helper starts this timer to wait for the LSP to recover.
Implementation

Figure 3-15 shows LDP GR implementation.

Figure 3-15  LDP GR implementation

The implementation of LDP GR is as follows:

  1. An LDP session is set up between the GR Restarter and GR Helper. The GR Restarter and GR Helper negotiate GR capabilities during LDP session setup.
  2. When restarting a protocol or performing a primary/backup switchover, the GR Restarter starts the Forwarding State Holding timer, retains label forwarding entries, and sends an LDP Initialization message to the GR Helper. When the GR Helper detects that the LDP session with the GR Restarter is Down, it retains label forwarding entries of the GR Restarter and starts the Reconnect timer.
  3. After the protocol restart or primary/backup switchover, the GR Restarter reestablishes an LDP session with the GR Helper. If an LDP session is not reestablished before the Reconnect timer expires, the GR Helper deletes label forwarding entries of the GR Restarter.
  4. After the GR Restarter reestablishes an LDP session with the GR Helper, the GR Helper starts the Recovery timer. Before the Recovery timer expires, the GR Restarter and GR Helper exchange Label Mapping messages over the LDP session. The GR Restarter and GR Helper then restore forwarding entries with each other's help. After the Recovery timer expires, the GR Helper deletes all forwarding entries that have not been restored.
  5. After the Forwarding State Holding timer expires, the GR Restarter deletes label forwarding entries and completes the implementation process.
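From the GR Helper's perspective, steps 2 through 4 can be sketched as a small state machine. The state names and methods are illustrative assumptions; a real implementation tracks this state per neighbor.

```python
class GrHelper:
    """Minimal sketch of the GR Helper side of LDP GR.

    State names and methods are illustrative assumptions, not the
    switch's actual implementation.
    """

    def __init__(self):
        self.entries_retained = True  # the Restarter's label forwarding entries
        self.state = "NORMAL"

    def on_session_down(self):
        # Step 2: keep the Restarter's entries and start the Reconnect timer.
        self.state = "RECONNECT_WAIT"

    def on_reconnect_expired(self):
        # Step 3: the session was not reestablished in time, so the
        # Helper deletes the Restarter's label forwarding entries.
        if self.state == "RECONNECT_WAIT":
            self.entries_retained = False
            self.state = "NORMAL"

    def on_session_reestablished(self):
        # Step 4: start the Recovery timer; Label Mapping messages
        # exchanged before it expires restore the forwarding entries.
        if self.state == "RECONNECT_WAIT":
            self.state = "RECOVERY_WAIT"


helper = GrHelper()
helper.on_session_down()
helper.on_session_reestablished()
print(helper.state, helper.entries_retained)  # RECOVERY_WAIT True
```

The key property is that entries survive the session flap: they are only deleted when a timer expires without the corresponding recovery event.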

LDP NSR

Implementation of LDP NSR

The non-stop routing (NSR) technology builds on the non-stop forwarding (NSF) technology but is inherently different from it. If a software or hardware fault occurs on the control plane, NSR ensures uninterrupted forwarding and keeps control-plane connections up, so that the control plane of a neighbor does not detect the fault.

LDP NSR is implemented through synchronization between the master and slave control boards. When started, the slave control board backs up the master board's data in batches to ensure that the data on both boards is consistent at that point. Thereafter, received protocol packets are delivered to both the master and slave control boards in real time, so the slave board stays synchronized with the master board. After a switchover, the slave board quickly takes over services from the original master board, and neighbors do not detect any fault on the local device.

LDP NSR synchronizes the following key data between the master control board and slave control board:
  • LSP forwarding entries

  • Cross connect (XC) information, used to describe the cross connection between a forwarding equivalence class (FEC) and an LSP

  • Labels, including the following types:

    • LDP LSP labels on a public network

    • Labels of VCs in Martini mode in a VLL networking

    • Labels of VCs in Martini mode in a VPLS networking

    • PW labels used by dynamic PWs in a PWE3 networking

  • LDP protocol control blocks
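The batch backup plus real-time synchronization described above can be sketched as follows. This is an illustrative sketch: the dictionary-based state and the method names are assumptions for the example, not the actual board software.

```python
class NsrBoards:
    """Sketch of LDP NSR data synchronization between control boards.

    The dictionary-based state and method names are illustrative
    assumptions, not the actual board software.
    """

    def __init__(self, master_data):
        self.master = dict(master_data)
        self.slave = {}

    def batch_backup(self):
        # At slave startup: back up the master board's data in batches.
        self.slave = dict(self.master)

    def on_packet(self, key, value):
        # In steady state, both boards are notified of received packets
        # in real time, keeping the slave consistent with the master.
        self.master[key] = value
        self.slave[key] = value

    def switchover(self):
        # After a master/slave switchover, the slave takes over with
        # identical state, so neighbors do not detect any fault.
        self.master, self.slave = self.slave, self.master


boards = NsrBoards({"lsp_fib": ["entry-1"], "xc": ["FEC<->LSP"]})
boards.batch_backup()
boards.on_packet("labels", [1024])
boards.switchover()
print(boards.master == boards.slave)  # True
```

Because both boards hold identical copies of the LSP forwarding entries, XC information, labels, and protocol control blocks, the switchover in the last line is invisible to neighbors.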

Updated: 2019-04-08

Document ID: EDOC1100065745
