NE40E V800R010C10SPC500 Feature Description - Segment Routing 01


SR-Traffic Engineering (SR-TE) is a Multiprotocol Label Switching (MPLS) Traffic Engineering (TE) tunneling technique implemented through Interior Gateway Protocol (IGP) extensions. The controller calculates a path for an SR-TE tunnel and delivers the computed label stack to the ingress, a forwarder. The ingress uses the label stack to generate an LSP in the SR-TE tunnel. The label stack therefore controls the path along which packets are transmitted on the network.

SR-TE Advantages

SR-TE tunnels meet the rapid development requirements of software-defined networking (SDN), which Resource Reservation Protocol-TE (RSVP-TE) tunnels cannot. Table 2-21 compares SR-TE and RSVP-TE tunnels.

Table 2-21 Comparison between SR-TE and RSVP-TE tunnels




Label allocation
  SR-TE: The extended IGP assigns and distributes labels. Each link is assigned only a single label, and all LSPs share that label, which reduces resource consumption and the workload of maintaining label forwarding tables.
  RSVP-TE: MPLS allocates and distributes labels. Each LSP is assigned a label, which consumes a large number of label resources and results in a heavy workload for maintaining label forwarding tables.

Control plane
  SR-TE: An IGP is used, which reduces the number of protocols required.
  RSVP-TE: RSVP is used, and the control plane is complex.

Scalability
  SR-TE: High scalability. Tunnel information is carried in packets, so intermediate devices do not perceive the SR-TE tunnel and do not need to maintain tunnel status information; they maintain only forwarding entries.
  RSVP-TE: Poor scalability. Each node must maintain both tunnel status information and forwarding entries.

Path adjustment and control
  SR-TE: A service path can be controlled by operating labels on the ingress only; configurations do not need to be delivered to each node, which improves programmability. When a node on the path fails, the controller recalculates the path and updates the label stack on the ingress to complete the path adjustment.
  RSVP-TE: Whether for a planned service adjustment or a passive path adjustment in a fault scenario, configurations must be delivered to every node.

Related Concepts

Label Stack

A label stack is a set of Adjacency Segment labels stored as a stack in a packet header. Each Adjacency Segment label in the stack identifies an adjacency of a local node, and the label stack describes all adjacencies along an SR-TE LSP. During packet forwarding, a node searches for the adjacency mapped to the top Adjacency Segment label in a packet, removes the label, and forwards the packet over that adjacency. After all labels are removed from the label stack, the packet has traversed the SR-TE tunnel.
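The per-hop processing described above can be sketched as follows. This is a minimal illustrative sketch; the adjacency table and label values are assumptions, not real device data.

```python
# Hypothetical sketch of per-hop Adjacency Segment processing on an SR-TE LSP.

def forward(packet_labels, adjacency_table):
    """Pop the top adjacency label and return the outbound adjacency.

    Returns (adjacency, remaining_labels); adjacency is None once the
    stack is empty and the packet leaves the SR-TE tunnel.
    """
    if not packet_labels:
        return None, []                  # all labels removed: exit the tunnel
    top, *rest = packet_labels
    out_adj = adjacency_table[top]       # label -> local adjacency (outbound interface)
    return out_adj, rest                 # label is stripped before forwarding

# Example: PE1 owns adjacency label 9003 for its PE1-to-P3 adjacency.
adjacencies = {9003: "PE1->P3"}
adj, remaining = forward([9003, 9002], adjacencies)
print(adj, remaining)   # PE1->P3 [9002]
```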

Stitching Label and Stitching Node

If the label stack depth exceeds that supported by a forwarder, a single label stack cannot carry all adjacency labels of a whole LSP. In this situation, the controller divides the labels into multiple label stacks, delivers each stack to an appropriate node, and assigns a special label to associate the stacks so that packets are forwarded segment by segment. The special label is called a stitching label, and the appropriate node is called a stitching node.

The controller places a stitching label at the bottom of the label stack delivered to the node upstream of a stitching node. After a packet arrives at the stitching node, the node swaps the stitching label for the label stack associated with it, based on the stitching-label-to-stack mapping, and forwards the packet using the label stack of the next segment.
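The stitching swap can be sketched as follows; the mapping and label values are illustrative assumptions.

```python
# Hypothetical sketch of stitching-node behavior: when the top label is a
# stitching label, the node swaps it for the associated label stack and
# continues forwarding on the next segment.

STITCH_MAP = {100: [1005, 1009, 1010]}   # stitching label -> next-segment stack

def process_at_stitching_node(labels):
    top = labels[0]
    if top in STITCH_MAP:
        # Swap the stitching label for the label stack of the next segment.
        labels = STITCH_MAP[top] + labels[1:]
    # Normal adjacency processing then pops the (new) top label.
    return labels

print(process_at_stitching_node([100]))   # [1005, 1009, 1010]
```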

Topology Collection and Label Allocation

Network Topology Collection Modes

Network topology information is collected in either of the following modes:

  • A forwarder runs IGP to collect network topology information and report the information to the controller.

  • Both the controller and forwarders run an IGP. Network topology information is flooded among all of them, so the controller learns the topology directly through the IGP.

Label Allocation Modes

A forwarder runs an IGP to assign labels and runs BGP-LS to report label information to the controller. SR-TE mainly uses adjacency labels (Adjacency Segments); node labels can also be used. An adjacency label is dynamically assigned by the node at the local end of the adjacency; it is locally significant and unidirectional. Node labels are manually configured and globally valid. Both adjacency and node labels are advertised through the IGP. In Figure 2-33, adjacency label 9003 identifies the PE1-to-P3 adjacency and is assigned by PE1; adjacency label 9004 identifies the P3-to-PE1 adjacency and is assigned by P3.

Figure 2-33 IGP label assignment

IGP SR is enabled on PE1, PE2, and P1 through P4 to establish IGP neighbor relationships between each pair of directly connected nodes. In SR-capable IGP instances, each outbound IGP interface is assigned an SR Adjacency Segment label. SR IGP advertises the Adjacency Segment labels across a network. P3 is used as an example. In Figure 2-33, IGP-based label allocation is as follows:

  1. P3 runs IGP to apply for a local dynamic label for an adjacency. For example, P3 assigns adjacency label 9002 to the P3-to-P4 adjacency.
  2. P3 runs IGP to advertise the adjacency label and flood it across the network.
  3. P3 uses the label to generate a label forwarding table.
  4. After the other nodes on the network run the IGP to learn the Adjacency Segment label advertised by P3, they do not generate local forwarding entries for it, because adjacency labels are locally significant.

PE1, P1, P2, P3, and P4 assign and advertise adjacency labels in the same way as P3 does. The label forwarding table is then generated on each node. A node establishes a BGP LS neighbor relationship with the controller, generates topology information, including SR labels, and reports topology information to the controller.

SR-TE Tunnel Establishment

SR-TE Tunnel

Segment Routing Traffic Engineering (SR-TE) runs the SR protocol and uses TE constraints to create a tunnel.

Figure 2-34 SR-TE Tunnel

In Figure 2-34, a primary LSP is established along the path PE1->P1->P2->PE2, and a backup path is established along the path PE1->P3->P4->PE2. The two LSPs have the same tunnel ID of an SR-TE tunnel. The LSP originates from the ingress, passes through transit nodes, and is terminated at the egress.

Before an SR-TE tunnel is created, IGP neighbor relationships must be established between forwarders to implement network layer connectivity, assign labels, and collect network topology information. Forwarders send label and network topology information to the controller, which uses the information to calculate paths. If no controller is available, enable the CSPF path computation function on the ingress of the SR-TE tunnel so that the forwarder itself runs CSPF to compute a path.
SR-TE Tunnel Configuration

SR-TE tunnel attributes are used to create tunnels. An SR-TE tunnel can be configured on a controller or a forwarder.

  • An SR-TE tunnel is configured on a controller.

    The controller runs NETCONF to deliver tunnel attributes to a forwarder (as shown in Figure 2-35). The forwarder runs PCEP to delegate the tunnel to the controller for management. (Upon receipt of the SR-TE tunnel configuration, the forwarder runs PCEP to delegate LSPs to the controller. The controller calculates paths, generates labels, and maintains the SR-TE tunnels.)

  • An SR-TE tunnel is manually configured on a forwarder.

    An SR-TE tunnel is manually configured on a forwarder. The forwarder delegates LSPs to the controller for management.

SR-TE Tunnel Establishment

If a service (for example, VPN) is bound to an SR-TE tunnel, a device establishes an SR-TE tunnel based on the following process, as shown in Figure 2-35.

Figure 2-35 Networking for SR-TE tunnels established using configurations that a controller runs NETCONF to deliver to a forwarder
The process of establishing an SR-TE tunnel is as follows:
  1. The controller uses SR-TE tunnel constraints and the Path Computation Element (PCE) to calculate a path and combines the adjacency labels along the path into a label stack, which is the calculation result.

    If the label stack depth exceeds the upper limit supported by a forwarder, a single stack can carry only some of the labels, and the controller divides the labels of the entire path into multiple label stacks.

    In Figure 2-35, the controller calculates a path PE1->P3->P1->P2->P4->PE2 for an SR-TE tunnel. The path is mapped to two label stacks {1003, 1006, 100} and {1005, 1009, 1010}. Label 100 is a stitching label, and the others are adjacency labels.

  2. The controller delivers the tunnel configuration information and label stack to the forwarder through NETCONF and PCEP, respectively.

    In Figure 2-35, the process of delivering label stacks on the controller is as follows:
    1. The controller delivers label stack {1005, 1009, 1010} to P1 and assigns a stitching label of value 100 associated with the label stack. Label 100 is the bottom label in the label stack on PE1.
    2. The controller delivers label stack {1003, 1006, 100} to the ingress PE1.
  3. The forwarder uses the delivered tunnel configurations and label stacks to establish an LSP for an SR-TE tunnel.
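The stack segmentation in step 1 can be sketched as follows. This is a minimal illustrative sketch, assuming a simple tail-first splitting heuristic and illustrative label values; the controller's actual algorithm may differ.

```python
# Hypothetical sketch: split a full adjacency-label path into stacks of at
# most max_depth labels, reserving the bottom slot of every non-final stack
# for a stitching label that points at the following stack.

def split_stack(adj_labels, max_depth, first_stitch=100):
    stacks, stitch = [], first_stitch
    segment = adj_labels[:]                 # work on a copy
    # The final stack takes the last max_depth labels; earlier stacks take
    # max_depth - 1 labels plus a stitching label, working backwards.
    stacks.append(segment[-max_depth:])
    segment = segment[:-max_depth]
    while segment:
        head = segment[-(max_depth - 1):]
        segment = segment[:-(max_depth - 1)]
        stacks.append(head + [stitch])
        stitch += 1
    stacks.reverse()
    return stacks

# Path PE1->P3->P1->P2->P4->PE2 with a stack-depth limit of 3:
print(split_stack([1003, 1006, 1005, 1009, 1010], 3))
# [[1003, 1006, 100], [1005, 1009, 1010]]
```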

An SR-TE tunnel does not support MTU negotiation. Therefore, the MTUs configured on nodes along the SR-TE tunnel must be the same. If an SR-TE tunnel is created manually, set an MTU value on the tunnel interface or use the default MTU of 1500 bytes. On the manual SR-TE tunnel, the smallest value in the following values takes effect: MTU of the tunnel, MPLS MTU of the tunnel, MTU of the outbound interface, and MPLS MTU of the outbound interface.
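The effective MTU selection above can be sketched in one line; the values are examples.

```python
# Sketch of effective MTU selection for a manually created SR-TE tunnel:
# the smallest of the four configured values takes effect.

def effective_mtu(tunnel_mtu, tunnel_mpls_mtu, outif_mtu, outif_mpls_mtu):
    return min(tunnel_mtu, tunnel_mpls_mtu, outif_mtu, outif_mpls_mtu)

print(effective_mtu(1500, 1500, 1600, 1492))   # 1492
```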

SR-TE Data Forwarding

A forwarder operates a label in a packet based on the label stack mapped to the SR-TE LSP, searches for an outbound interface hop by hop based on the top label of the label stack, and uses the label to guide the packet to the tunnel destination address.

SR-TE Data Forwarding (Adjacency)
In Figure 2-36, an example is provided to describe the process of forwarding SR-TE data with manually specified adjacency labels.
Figure 2-36 SR-TE data packet forwarding (based on adjacency labels)
In Figure 2-36, the SR-TE path calculated by the controller is A -> B -> C -> D -> E -> F. The path is mapped to two label stacks, {1003, 1006, 100} and {1005, 1009, 1010}, which are delivered to ingress A and stitching node C, respectively. Label 100 is a stitching label associated with label stack {1005, 1009, 1010}; the other labels are adjacency labels. The process of forwarding data packets along the SR-TE tunnel is as follows:
  1. Ingress A pushes the label stack {1003, 1006, 100}. It uses the outer label 1003 to match an adjacency and finds the A-to-B adjacency as the outbound interface. Ingress A then strips label 1003 from the stack and forwards the packet, carrying {1006, 100}, downstream through the A-to-B outbound interface.

  2. Node B uses the outer label 1006 to match an adjacency and finds the B-to-C adjacency as the outbound interface. Node B strips label 1006, and the packet carrying the label stack {100} travels over the B-to-C adjacency to the downstream node C.

  3. After stitching node C receives the packet, it identifies stitching label 100 by querying the stitching label entries and swaps the label for the associated label stack {1005, 1009, 1010}. Node C then uses the top label 1005 to find the outbound interface of the C-to-D adjacency, removes label 1005, and forwards the packet carrying {1009, 1010} over the C-to-D adjacency to the downstream node D. For details about stitching labels and stitching nodes, see Related Concepts.

  4. After nodes D and E receive the packet, they process it in the same way as node B. Node E removes the last label, 1010, and forwards the data packet to node F.

  5. Egress F receives the packet without a label and forwards the packet along a route that is found in a routing table.

The preceding process shows that, with adjacency labels specified, devices forward data packets strictly hop by hop along the explicit path encoded in the label stack. This forwarding method is also called strict explicit-path SR-TE.

SR-TE Data Forwarding (Node+Adjacency)

SR-TE in strict path mode does not support load balancing when equal-cost paths exist. To overcome this drawback, node labels are introduced to SR-TE paths.

A node+adjacency mixed label stack can be manually specified, with node labels used between nodes. The controller runs PCEP or NETCONF to deliver the stack to the ingress, and forwarders use the label stack to forward packets through outbound interfaces toward the destination IP address of the LSP.
Figure 2-37 SR-TE forwarding principles (node+adjacency)
On the network shown in Figure 2-37, a node+adjacency mixed label stack is configured. On ingress node A, the mixed label stack is {1003, 1006, 1005, 101}. Labels 1003, 1006, and 1005 are adjacency labels, and label 101 is a node label.
  1. Node A finds an A-B outbound interface based on label 1003 on the top of the label stack. Node A removes label 1003 and forwards the packet to the next hop node B.

  2. Similar to node A, node B finds the outbound interface mapped to label 1006 on the top of the label stack. Node B removes label 1006 and forwards the packet to the next hop node C.

  3. Similar to node A, node C finds the outbound interface mapped to label 1005 on the top of the label stack. Node C removes label 1005 and forwards the packet to the next hop, node D.

  4. Node D processes node label 101 on the top of the label stack. A node label permits load balancing: because equal-cost links toward the labeled node exist, traffic is balanced over them based on 5-tuple information.

  5. After receiving the packet carrying node label 101, penultimate-hop nodes E and G remove the label and forward the packets to node F, completing E2E traffic forwarding.

The preceding process shows that, with adjacency and node labels specified, a device can forward data packets along the shortest path or load-balance them over multiple paths. Because the paths are not fully fixed, this forwarding method is called loose explicit-path SR-TE.
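The 5-tuple-based load balancing at a node label can be sketched as follows; the hash function and link names are illustrative assumptions, not the device's actual hash.

```python
# Hypothetical sketch of 5-tuple load balancing at a node label: when
# equal-cost links toward the labeled node exist, each flow is hashed
# deterministically onto one of them.
import hashlib

def pick_link(five_tuple, links):
    """Map a flow's 5-tuple to one of the equal-cost links."""
    key = "|".join(map(str, five_tuple)).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return links[digest % len(links)]

flow = ("10.1.1.1", "10.2.2.2", 6, 1024, 443)   # src, dst, proto, sport, dport
links = ["D->E", "D->G"]
print(pick_link(flow, links))   # the same flow always maps to the same link
```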

SR-TE Tunnel Reliability

SR-TE tunnel reliability techniques include hot standby (HSB) and TE FRR, in addition to TI-LFA FRR.

SR-TE Hot Standby

With HSB, once the primary LSP is established, an HSB LSP is established immediately and remains in the hot-standby state. The HSB LSP protects the entire LSP and is an E2E traffic protection measure.

In Figure 2-38, HSB is configured on the ingress A. After the ingress A creates the primary LSP, the ingress A immediately creates an HSB LSP. An SR-TE tunnel contains two LSPs. If the ingress detects a primary LSP fault, the ingress switches traffic to the HSB LSP. After the primary LSP recovers, the ingress A switches traffic back to the primary LSP. During the process, the SR-TE tunnel remains Up.

Figure 2-38 SR-TE HSB networking

SR-TE FRR

TE FRR provides link and node protection for an SR-TE tunnel. Unlike SR-TE HSB, TE FRR is a local and temporary protection technique that allows a quick response, so it imposes strict requirements on switchover time.

A link or node failure in an SR-TE tunnel triggers a primary/backup LSP switchover. During the switchover, IGP routes must converge and CSPF must recalculate a path over which the primary LSP is reestablished; traffic is dropped during this process.

TE FRR minimizes this traffic loss. It establishes, in advance, a backup path that excludes the faulty link or node, and the backup path rapidly takes over traffic. Meanwhile, the ingress attempts to reestablish the primary LSP.
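The idea of excluding the faulty node before recomputing a bypass path can be illustrated with a minimal sketch (breadth-first search over a unit-cost topology; a real device runs CSPF with full TE constraints):

```python
# Illustrative sketch: compute a bypass path by removing the failed node
# from the topology before running a shortest-path search.
from collections import deque

def bypass_path(topology, src, dst, excluded):
    """Breadth-first search for a path from src to dst avoiding `excluded`."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nbr in topology.get(node, []):
            if nbr not in seen and nbr != excluded:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None   # no bypass exists

# Example topology (illustrative): primary path A-B-C; bypass around B via D.
topo = {"A": ["B", "D"], "B": ["A", "C"], "D": ["A", "C"], "C": ["B", "D"]}
print(bypass_path(topo, "A", "C", excluded="B"))   # ['A', 'D', 'C']
```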

In Figure 2-39, TE FRR is configured on node B, and an FRR bypass LSP is established in advance. When BFD on node B detects a primary LSP failure, TE FRR switching is triggered, and traffic is rapidly switched to the FRR bypass LSP, reducing packet loss.

Figure 2-39 SR-TE FRR protection
SR-TE Hot Standby and SR-TE FRR Used Together

SR-TE HSB provides E2E protection for an entire LSP, whereas SR-TE FRR provides node or link protection. The two protection functions can be used together to improve reliability.

In Figure 2-40, an SR-TE tunnel is established between the UPE and PE. To improve network reliability, SR-TE FRR and HSB are configured.
Figure 2-40 Networking with SR-TE HSB and SR-TE FRR used together

If the primary LSP fails (for example, node 202 in Figure 2-40 fails), BFD detects the fault, and node 101 performs TE FRR switching to switch traffic to the FRR bypass LSP.

After routes converge on the UPE, node 202 is removed from the topology. The UPE deletes the primary LSP and switches traffic to the HSB LSP. After BFD detects that the primary LSP has recovered, the UPE switches traffic back to the primary LSP.


BFD for SR-TE

SR-TE does not use a signaling protocol. Once a label stack is delivered to an SR-TE node, the node establishes an SR-TE LSP. The LSP never enters a protocol Down state, except when the label stack is withdrawn. Therefore, BFD must be used to monitor faults on an SR-TE LSP; a fault detected by BFD triggers a primary/backup SR-TE LSP switchover. BFD for SR-TE is an E2E rapid detection mechanism that quickly detects faults on the links of an SR-TE tunnel. BFD for SR-TE works in the following modes:
  • BFD for SR-TE LSP: SR-TE LSPs rely on BFD for link detection. If no BFD session has been established when an SR-TE LSP is created, the LSP remains Down. To prevent traffic loss upon a primary SR-TE LSP failure, configure BFD for SR-TE LSP together with a backup LSP. BFD for SR-TE LSP supports both static and dynamic BFD sessions:
    • Static BFD session: The local and remote discriminators are manually specified. The local discriminator of the local node must be equal to the remote discriminator of the remote node. The remote discriminator of the local node must be equal to the local discriminator of the remote node. A discriminator inconsistency causes a failure to establish a BFD session. After the BFD session is established, the interval at which BFD packets are received and the interval at which BFD packets are sent can be modified.
    • Dynamic BFD session: The local and remote discriminators do not need to be manually specified. After the SR-TE tunnel goes Up, a BFD session is triggered. The devices on both ends of a BFD session to be established negotiate the local discriminator, remote discriminator, interval at which BFD packets are received, and interval at which BFD packets are sent.

    A BFD session is bound to an SR-TE LSP, meaning that the session is established between the ingress and the egress. The ingress sends a BFD packet over the LSP to the egress, and the egress responds. The BFD session on the ingress can therefore rapidly detect the status of the path that the LSP traverses.

    If a link fault is detected, the BFD module notifies the forwarding plane of the fault. The forwarding plane searches for a backup SR-TE LSP and switches traffic to the backup SR-TE LSP.

  • BFD for SR-TE tunnel: BFD for SR-TE tunnel must be used with BFD for SR-TE LSP.
    • BFD for SR-TE LSP controls the status of the primary/backup LSP switchover. BFD for SR-TE tunnel checks actual status of tunnels.
      • If BFD for SR-TE tunnel is not configured, the tunnel status remains Up by default, and the actual status of the tunnel cannot be determined.

      • If BFD for SR-TE tunnel is configured and the BFD status is set to administrative Down, the BFD session does not work, and the tunnel interface status is unknown.

      • If BFD for SR-TE tunnel is configured and the BFD status is not set to administrative Down, the tunnel interface status is consistent with the BFD status.

    • The interface status of an SR-TE tunnel stays consistent with the status of BFD for SR-TE tunnel. A BFD session goes Up slowly because of BFD negotiation: if a new label stack is delivered for a tunnel in the Down state, bringing BFD for this tunnel Up can take more than 10 seconds. As a result, tunnel convergence is delayed if no protection is enabled for the tunnel.

  • BFD for SR-TE (one-arm mode): If the ingress is a Huawei device but the egress is a non-Huawei device, BFD for SR-TE LSP may fail to interwork, and no BFD session can be established. In this case, one-arm BFD for SR-TE can be used.

    On the ingress, enable BFD and specify the one-arm mode to establish a BFD session. The ingress then sends BFD packets to the egress through the transit nodes along the SR-TE tunnel. After receiving a BFD packet, the forwarding plane on the egress removes the MPLS labels, searches for a route matching the ingress's IP address, and loops the BFD packet back to the ingress, which processes it. This is the one-arm detection mechanism.
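The static-session discriminator rule described above can be sketched as follows; the discriminator values are illustrative.

```python
# Sketch of the static BFD session discriminator check: the local
# discriminator on one end must equal the remote discriminator configured
# on the other end, in both directions, or the session cannot be established.

def discriminators_consistent(a, b):
    """a and b are (local_discriminator, remote_discriminator) tuples."""
    return a[0] == b[1] and a[1] == b[0]

ingress = (10, 20)   # local 10, remote 20
egress = (20, 10)    # local 20, remote 10
print(discriminators_consistent(ingress, egress))    # True
print(discriminators_consistent(ingress, (30, 10)))  # False: session fails
```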

The following example shows VPN traffic iterated to an SR-TE LSP that is monitored using BFD for SR-TE LSP.

Figure 2-41 BFD for SR-TE

A, CE1, CE2, and E are deployed on the same VPN. CE2 advertises a route destined for E, PE2 assigns a VPN label to the route, and PE1 installs the route together with the VPN label. The SR-TE tunnel from PE1 to PE2 follows the path PE1 -> P4 -> P3 -> PE2, with label stack {9004, 9003, 9005}. When A sends a packet destined for E, PE1 finds the packet's outbound interface based on label 9004 and pushes label 9003, label 9005, and the inner VPN label assigned by PE2. BFD is configured to monitor the SR-TE tunnel; if BFD enters the DetectDown state, the VPN traffic is iterated to another SR-TE tunnel.
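The label push at PE1 in this example can be sketched as follows. This is an illustrative sketch; the VPN label value 30 and the adjacency mapping are assumptions, while the SR label values follow the example above.

```python
# Hypothetical sketch of PE1's behavior for VPN traffic iterated onto the
# SR-TE tunnel: the top SR label selects the outbound interface, and the
# remaining SR labels are carried above the inner VPN label on the wire.

def labels_on_wire(sr_stack, vpn_label, local_adjacencies):
    top, *rest = sr_stack
    out_if = local_adjacencies[top]      # e.g. 9004 -> interface toward P4
    return out_if, rest + [vpn_label]    # outermost label first

out_if, wire = labels_on_wire([9004, 9003, 9005], vpn_label=30,
                              local_adjacencies={9004: "PE1->P4"})
print(out_if, wire)   # PE1->P4 [9003, 9005, 30]
```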

Updated: 2019-01-03

Document ID: EDOC1100055048
