NetEngine 8000 X V800R023C00SPC500 Configuration Guide

Segment Routing MPLS Configuration

Segment Routing MPLS Configuration

Segment Routing MPLS Description

Overview of Segment Routing MPLS

Definition

Segment Routing (SR) is a technology designed to forward data packets based on the source routing model. Segment Routing MPLS (SR-MPLS) is implemented on the MPLS forwarding plane and is referred to as SR hereinafter. SR divides a network path into several segments and allocates a segment ID (SID) to each segment and forwarding node. The segments and nodes are then sequentially arranged into a segment list to form a forwarding path.

SR encapsulates the segment list that identifies a forwarding path into the packet header for transmission. After receiving a packet, a node parses the segment list. If the SID at the top of the segment list identifies the local node, the node removes the SID and executes the follow-up procedure. If the SID at the top does not identify the local node, the node forwards the packet to the next hop in equal-cost multipath (ECMP) mode.

Purpose

Network services are becoming more diverse and impose different requirements on the network. For example, real-time teleconference and live video broadcast applications typically require paths with low latency and jitter, whereas big data applications typically require high-bandwidth channels with a low packet loss rate. In this situation, it is ineffective to adapt networks to services, as this approach makes network deployment and maintenance more complex and cannot keep up with rapid service development.

The solution to this issue is to enable services to drive network development and define the network architecture. Specifically, once applications raise service requirements (such as latency, bandwidth, and packet loss rate requirements), a controller is used to collect network topology, bandwidth usage, latency, and other required information, and then computes explicit paths based on these requirements.
Figure 1-2493 Service-driven network
Against this backdrop, SR was introduced to help easily define an explicit path. With SR, nodes on a network only need to maintain SR information, which enables the network to keep up with real-time and rapid service development. SR has the following characteristics:
  • Extends existing protocols (for example, IGP) to facilitate network evolution.
  • Supports both centralized controller-based control and distributed forwarder-based control, providing a balance between the two control modes.
  • Enables networks to quickly interact with upper-layer applications through the source routing technology.

Benefits

SR offers the following benefits:
  • Simplified control plane of the MPLS network

    SR uses a controller or an IGP to uniformly compute paths and allocate labels, without requiring tunneling protocols such as RSVP-TE and LDP. In addition, it can be directly used in the MPLS architecture, without requiring any changes to the forwarding plane.

  • Efficient Topology-Independent Loop-free Alternate (TI-LFA) FRR protection for fast recovery of path failures

    On the basis of remote loop-free alternate (RLFA) FRR, SR provides TI-LFA FRR, which offers node and link protection for all topologies and addresses the weaknesses in conventional tunnel protection technologies.

  • Higher network capacity expansion capabilities

    MPLS TE is a connection-oriented technology that requires devices to exchange and process numerous packets in order to maintain connection states, burdening the control plane. In contrast, SR can control any service path by performing label operations for packets on only the ingress. Because SR does not require transit nodes to maintain path information, it frees up the control plane.

    Moreover, the SR label quantity on a network is the sum of the node quantity and local adjacency quantity, meaning that it is related only to the network scale, rather than the tunnel quantity or service volume.

  • Smoother evolution to SDN networks

    Because SR is designed based on the source routing concept, the ingress controls packet forwarding paths. Moreover, SR can work with the centralized path computation module to flexibly and easily control and adjust paths.

    Given that SR supports both legacy and SDN networks and is compatible with existing devices, it enables existing networks to evolve smoothly to SDN networks in a non-disruptive manner.

Understanding Segment Routing MPLS

Segment Routing MPLS Fundamentals

Basic Concepts
Segment Routing MPLS involves the following concepts:
  • Segment Routing (SR) domain: a set of SR nodes.

  • Segment ID (SID): unique identifier of a segment. A SID can be mapped to an MPLS label in the forwarding plane.

  • Segment Routing global block (SRGB): a set of user-specified global labels reserved for SR-MPLS.

  • Segment Routing local block (SRLB): a set of user-specified local labels reserved for Segment Routing MPLS. These labels are configured and effective only on the local device. However, they are advertised through an IGP. Therefore, they are globally visible. Currently, the SRLB is mainly used to configure binding SIDs.
Segment Classification
Table 1-1017 Segment classification

  • Prefix segment

    Generation mode: manually configured.

    Function: identifies the prefix of a destination address. Prefix segments are propagated to other devices through an IGP. They are visible to and effective on all the devices. Each prefix segment is identified by a prefix segment ID (SID).

    The process of generating a prefix SID is as follows:

    1. A prefix SID offset (SID index) in the SRGB is configured on the source node and then advertised through an IGP.
    2. Each device that receives the advertisement generates an MPLS forwarding entry, in which the incoming label is calculated based on the local SRGB and the prefix SID offset, and the outgoing label is calculated based on the SRGB and prefix SID offset advertised by the next-hop neighbor.

  • Adjacency segment

    Generation mode: dynamically allocated by the source node through a protocol, or manually configured.

    Function: identifies an adjacency on a network. Adjacency segments are propagated to other devices through an IGP. They are visible to all the devices but effective only on the local device. Each adjacency segment is identified by an adjacency SID, which is a local SID outside the SRGB.

  • Node segment

    Generation mode: manually configured.

    Function: identifies a specific node. Node segments are special prefix segments. When an IP address is configured as a prefix for a loopback interface of a node, the prefix SID of that node is the node SID.
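As a rough sketch of the prefix SID label derivation described above (assuming a single contiguous SRGB per node; the function and variable names are illustrative, not part of the product):

```python
def mpls_entry(prefix_sid_index, local_srgb_base, nexthop_srgb_base):
    """Derive the incoming/outgoing labels for a prefix SID forwarding entry.

    The incoming label is taken from the local SRGB, and the outgoing label
    from the SRGB advertised by the next-hop neighbor."""
    return {
        "in_label": local_srgb_base + prefix_sid_index,
        "out_label": nexthop_srgb_base + prefix_sid_index,
    }

# Example: SID index 10, local SRGB starting at 16000, next hop's at 20000.
entry = mpls_entry(10, 16000, 20000)  # {'in_label': 16010, 'out_label': 20010}
```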

Figure 1-2494 shows prefix, adjacency, and node SID examples.
Figure 1-2494 Prefix, adjacency, and node SID examples

In general, a prefix SID identifies a destination address, and an adjacency SID identifies a link for outgoing data packets. The prefix and adjacency SIDs are similar to the destination IP address and outbound interface in conventional IP forwarding, respectively. In an IGP area, devices propagate their node and adjacency SIDs using extended IGP messages, so that any device in the area can obtain information about the other devices.

Combining prefix (node) and adjacency SIDs in sequence can construct any network path. SID information is stacked in sequence at the top of the packet header, and every hop on a path identifies the next hop based on the top SID in the label stack. If the top SID identifies another node, the receiving node forwards the data packet to that node in ECMP mode. If the top SID identifies the local node, the node removes the top SID and proceeds with the follow-up procedure.

Prefix, adjacency, and node SIDs can either be used separately or be combined. They are mainly used in the following three modes:

Prefix SID

A prefix SID-based forwarding path is computed by an IGP using the SPF algorithm. In Figure 1-2495, node Z is the destination node, and its prefix SID is 100. After the prefix SID is propagated using an IGP, all the other nodes in the IGP area learn the prefix SID of node Z and then run SPF to compute their shortest paths (with the minimum cost) to node Z.
Figure 1-2495 Prefix SID-based forwarding path

If equal-cost paths exist on the network, ECMP can be implemented. If no equal-cost paths exist, link backup can be implemented. Therefore, prefix SID-based forwarding paths are not fixed, and the ingress cannot control the entire packet forwarding path.

Adjacency SID

In Figure 1-2496, an adjacency SID is allocated to each adjacency on the network, and a segment list with multiple adjacency SIDs is defined on the ingress, so that any strict explicit path can be specified. This mode facilitates SDN implementation.
Figure 1-2496 Adjacency SID-based forwarding path

Each adjacency SID corresponds to a device interface. If the adjacency corresponding to an adjacency SID fails, only end-to-end protection can be implemented. After local protection is configured for adjacency SIDs, the SIDs carry a protection flag. During path computation, the controller can select an adjacency SID with the protection flag and compute a backup path for the adjacency SID. In this way, when the adjacency corresponding to the adjacency SID fails, traffic can be rapidly switched to the backup path.

Adjacency SID + Node SID

In Figure 1-2497, adjacency and node SIDs are combined to forcibly include a specific adjacency on a path. Based on node SIDs, nodes can run SPF to compute the shortest path or establish multiple paths to load-balance traffic. In this situation, paths are not strictly fixed, and therefore, they are also called loose explicit paths.
Figure 1-2497 Adjacency SID + node SID-based forwarding path
SR Forwarding Mechanism

SR can be directly used in the MPLS architecture, without requiring any changes to the forwarding plane. SIDs are encoded as MPLS labels, and the segment list is encoded as a label stack. The segment to be processed is at the stack top. Once a segment is processed, the corresponding label is removed from the label stack.

SR based on the MPLS forwarding mechanism is also called SR-MPLS.
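The top-of-stack processing rule described above can be sketched as follows (a simplified model; the SID values and the local SID set are illustrative):

```python
def process_top_of_stack(label_stack, local_sids):
    """Process an SR-MPLS label stack at one node.

    Labels that identify the local node are popped; the packet is then
    forwarded toward whatever the new top of stack identifies."""
    stack = list(label_stack)
    while stack and stack[0] in local_sids:
        stack.pop(0)                # this segment ends here: pop the top label
    if stack:
        return "forward", stack     # forward based on the new top SID
    return "deliver", stack         # stack exhausted: deliver locally
```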

SR Label Conflicts and Handling Rules

Prefix SIDs are manually configured on different devices, which may result in label conflicts. Label conflicts are classified as prefix conflicts or SID conflicts. A prefix conflict indicates that the same prefix is associated with different SIDs, whereas a SID conflict indicates that the same SID is associated with different prefixes.

If label conflicts occur, prefix conflicts are handled prior to SID conflicts. Then, the following rules are observed to preferentially select a route:
  1. The route with the largest prefix mask is preferred.
  2. The route with the smallest prefix is preferred.
  3. The route with the smallest SID is preferred.
For example, label conflicts occur in the following four routes expressed in the prefix/mask length SID format:
  • a. 1.1.1.1/32 1
  • b. 1.1.1.1/32 2
  • c. 2.2.2.2/32 3
  • d. 3.3.3.3/32 1
The process of handling the label conflicts is as follows:
  1. Prefix conflicts are handled first. Routes a and b encounter a prefix conflict. Route a has a smaller SID than route b. Therefore, route a is preferred. After route b is excluded, the following three routes are left:
    • a. 1.1.1.1/32 1
    • c. 2.2.2.2/32 3
    • d. 3.3.3.3/32 1
  2. SID conflicts are then handled. Routes a and d encounter a SID conflict. Route a has a smaller prefix than route d. Therefore, route a is preferred. After route d is excluded, the following two routes are left:
    • a. 1.1.1.1/32 1
    • c. 2.2.2.2/32 3
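The conflict-handling rules and the worked example above can be sketched as follows (a simplified model for illustration; a real implementation operates on IGP route entries):

```python
import ipaddress

def resolve_label_conflicts(routes):
    """Resolve SR label conflicts among (prefix, mask_len, sid) tuples.

    Preference within a conflict: largest prefix mask, then smallest
    prefix, then smallest SID. Prefix conflicts (same prefix, different
    SIDs) are handled before SID conflicts (same SID, different prefixes)."""
    def preference(route):
        prefix, mask, sid = route
        return (-mask, int(ipaddress.ip_address(prefix)), sid)

    # Step 1: prefix conflicts -- keep the preferred route per (prefix, mask).
    by_prefix = {}
    for r in sorted(routes, key=preference):
        by_prefix.setdefault((r[0], r[1]), r)   # first (most preferred) wins

    # Step 2: SID conflicts -- keep the preferred route per SID.
    by_sid = {}
    for r in sorted(by_prefix.values(), key=preference):
        by_sid.setdefault(r[2], r)
    return sorted(by_sid.values(), key=preference)

# The four conflicting routes from the example above:
routes = [("1.1.1.1", 32, 1), ("1.1.1.1", 32, 2),
          ("2.2.2.2", 32, 3), ("3.3.3.3", 32, 1)]
```

Running `resolve_label_conflicts(routes)` leaves routes a and c, matching the example.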

IS-IS for SR-MPLS

SR-MPLS uses an IGP to advertise topology, prefix, Segment Routing global block (SRGB), and label information. This is achieved by extending the TLVs of protocol packets for the IGP. IS-IS mainly defines sub-TLVs for SID and NE SR-MPLS capabilities, as listed in Table 1-1018.
Table 1-1018 IS-IS sub-TLV extensions for SIDs and SR-MPLS capabilities

Name

Function

Carried In

Prefix-SID Sub-TLV

Advertises SR-MPLS prefix SIDs.

  • IS-IS Extended IPv4 Reachability TLV-135
  • IS-IS Multitopology IPv4 Reachability TLV-235
  • IS-IS IPv6 IP Reachability TLV-236
  • IS-IS Multitopology IPv6 IP Reachability TLV-237
  • SID/Label Binding TLV and so on

Adj-SID Sub-TLV

Advertises SR-MPLS adjacency SIDs on a P2P network.

  • IS-IS Extended IS reachability TLV-22
  • IS-IS IS Neighbor Attribute TLV-23
  • IS-IS inter-AS reachability information TLV-141
  • IS-IS Multitopology IS TLV-222
  • IS-IS Multitopology IS Neighbor Attribute TLV-223

Currently, it can be carried only in IS-IS Extended IS reachability TLV-22.

LAN-Adj-SID Sub-TLV

Advertises SR-MPLS adjacency SIDs on a LAN.

  • IS-IS Extended IS reachability TLV-22
  • IS-IS IS Neighbor Attribute TLV-23
  • IS-IS inter-AS reachability information TLV-141
  • IS-IS Multitopology IS Reachability TLV-222
  • IS-IS Multitopology IS Neighbor Attribute TLV-223

SID/Label Sub-TLV

Advertises SR-MPLS SIDs or MPLS labels.

SR-Capabilities Sub-TLV and SR Local Block Sub-TLV

SID/Label Binding TLV

Advertises the mapping between prefixes and SIDs.

IS-IS LSP

SR-Capabilities Sub-TLV

Advertises SR-MPLS capabilities.

IS-IS Router Capability TLV-242

SR-Algorithm Sub-TLV

Advertises the algorithm that is used. For details, see SR-MPLS Flex-Algo.

IS-IS Router Capability TLV-242

IS-IS FAD Sub-TLV

Advertises the Flex-Algo definition (FAD) of IS-IS. For details, see SR-MPLS Flex-Algo.

IS-IS Router Capability TLV-242

SR Local Block Sub-TLV

Advertises the range of labels reserved for local SIDs.

IS-IS Router Capability TLV-242

IS-IS TLV Extensions for SIDs

Prefix-SID Sub-TLV

The Prefix-SID sub-TLV carries IGP prefix SID information. Figure 1-2498 shows the format of this sub-TLV.
Figure 1-2498 Prefix-SID sub-TLV format
Table 1-1019 Fields in the Prefix-SID sub-TLV

Field

Length

Description

Type

8 bits

Unassigned. The recommended value is 3.

Length

8 bits

Packet length.

Flags

8 bits

Flags field. Figure 1-2499 shows the format of this field.
Figure 1-2499 Flags field
In this field:
  • R: re-advertisement flag. If this flag is set, the prefix to which the prefix SID is attached may be imported from another protocol or propagated by a node of another level (for example, from IS-IS Level 1 to Level 2).
  • N: node SID flag. If this flag is set, the prefix SID refers to the node identified by the prefix. Generally, this flag is set if a loopback interface address is configured as the prefix SID.
  • P: no-PHP flag. If this flag is set, penultimate hop popping (PHP) is disabled, meaning that the penultimate node does not pop the label before sending the packet to the egress.
  • E: explicit-null flag. If this flag is set, the explicit-null label function is enabled, requiring the upstream neighbor to replace the prefix SID with a prefix SID that has an explicit null value before forwarding the packet.
  • V: value flag. If this flag is set, the prefix SID carries a value, instead of an index. By default, the flag is not set.
  • L: local flag. If this flag is set, the value or index carried in the prefix SID is of local significance. By default, the flag is not set.

When computing the outgoing label for a packet destined for a prefix, a node must consider the P and E flags in the prefix SID advertised by the next hop, regardless of whether the optimal path to the prefix SID passes through the next hop. When propagating (from either Level-1 to Level-2 or Level-2 to Level-1) a reachability advertisement originated by another IS-IS speaker, the local node must set the P flag and clear the E flag in relevant prefix SIDs.

The following behavior is associated with the settings of P and E flags:
  • If the P flag is not set, any upstream neighbor of the prefix SID originator must remove the prefix SID. This is equivalent to the PHP mechanism used in MPLS forwarding. The MPLS EXP bits of the prefix SID are also cleared. In addition, if the P flag is not set, the received E flag is ignored.
  • If the P flag is set, then:
    • If the E flag is not set, any upstream neighbor of the prefix SID originator must keep the prefix SID on top of the stack. This is useful when, for example, the originator of the Prefix-SID must stitch the incoming packet into a continuing MPLS LSP to the final destination.
    • If the E flag is set, any upstream neighbor of the prefix SID originator must replace the prefix SID with a prefix SID that has an explicit null value. In this mode, the MPLS EXP bits are preserved. If the originator of the prefix SID is the final destination for the related prefix, it receives the packet with the original EXP bits, which can then be used for QoS services.

Algorithm

8 bits

Algorithm that is used.
  • 0: shortest path first (SPF) algorithm
  • 1: strict SPF algorithm, which is not supported currently

SID/Index/Label (variable)

Variable

This field contains either of the following information based on the V and L flags:
  • 4-octet index defining the offset in the SID/label space. In this case, the V and L flags cannot be set.
  • 3-octet local label where the 20 rightmost bits are used for encoding the label value. In this case, the V and L flags must be set.
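The P/E flag semantics described in the Flags row above can be sketched as the action an upstream neighbor takes on the prefix SID label (an illustrative model, not product behavior):

```python
def upstream_label_action(p_flag, e_flag):
    """Action an upstream neighbor applies to a prefix SID label,
    based on the P (no-PHP) and E (explicit-null) flags."""
    if not p_flag:
        return "pop"                      # PHP: remove the SID; E is ignored
    if e_flag:
        return "swap_to_explicit_null"    # preserves EXP bits for QoS
    return "keep"                         # leave the SID on top of the stack
```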

Adj-SID Sub-TLV

The Adj-SID sub-TLV is an optional sub-TLV carrying IGP adjacency SID information. Figure 1-2500 shows the format of this sub-TLV.
Figure 1-2500 Adj-SID sub-TLV format
Table 1-1020 Fields in the Adj-SID sub-TLV

Field

Length

Description

Type

8 bits

Unassigned. The recommended value is 31.

Length

8 bits

Packet length.

Flags

8 bits

Flags field. Figure 1-2501 shows the format of this field.
Figure 1-2501 Flags field
In this field:
  • F: address family flag.
    • 0: IPv4
    • 1: IPv6
  • V: value flag. If the flag is set, the Adj-SID carries a label value. By default, the flag is set.
  • L: local flag. If the flag is set, the value or index carried by the Adj-SID has local significance. By default, the flag is set.
  • S: set flag. If the flag is set, the Adj-SID refers to a set of adjacencies.
  • P: persistent flag. If the flag is set, the Adj-SID is persistently allocated, that is, the Adj-SID remains unchanged even after a device restart or interface flapping.

Weight

8 bits

Weight of the Adj-SID for the purpose of load balancing.

SID/Index/Label (variable)

Variable

This field contains either of the following information based on the V and L flags:
  • 3-octet local label where the 20 rightmost bits are used for encoding the label value. In this case, the V and L flags must be set.
  • 4-octet index defining the offset in the SID/label space. In this case, the V and L flags cannot be set.

On a LAN, a designated intermediate system (DIS) is elected to act as an intermediary for IS-IS communication. Each node only needs to advertise one adjacency to the DIS and obtains all adjacency information from the DIS, without exchanging adjacency information with every other node.

When SR is used, each node needs to advertise the Adj-SID of each of its neighbors. On the LAN, each node advertises only an IS-IS Extended IS reachability TLV-22 to the DIS and encapsulates the set of Adj-SIDs (for each of its neighbors) inside a newly defined sub-TLV: LAN-Adj-SID sub-TLV. This sub-TLV contains the set of Adj-SIDs assigned by a node to each of its LAN neighbors.

Figure 1-2502 shows the format of the LAN-Adj-SID sub-TLV.
Figure 1-2502 LAN-Adj-SID sub-TLV format

SID/Label Sub-TLV

The SID/Label sub-TLV contains a SID or an MPLS label. It is a part of the SR-Capabilities sub-TLV.

Figure 1-2503 shows the format of the SID/Label sub-TLV.
Figure 1-2503 SID/Label sub-TLV format
Table 1-1021 Fields in the SID/Label sub-TLV

Field

Length

Description

Type

8 bits

Unassigned. The recommended value is 1.

Length

8 bits

Packet length.

SID/Label (variable)

Variable

If the Length field value is set to 3, the 20 rightmost bits indicate an MPLS label. If the Length field value is set to 4, the field indicates a 32-bit SID.
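The length-dependent decoding of the SID/Label field can be sketched as follows (assuming the common encoding: a 3-octet value carries a 20-bit MPLS label, a 4-octet value carries a 32-bit SID):

```python
def parse_sid_label(value: bytes):
    """Decode a SID/Label field based on its length: 3 octets carry an
    MPLS label in the 20 rightmost bits; 4 octets carry a 32-bit SID."""
    if len(value) == 3:
        return "label", int.from_bytes(value, "big") & 0xFFFFF
    if len(value) == 4:
        return "sid", int.from_bytes(value, "big")
    raise ValueError("unexpected SID/Label length: %d" % len(value))
```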

SID/Label Binding TLV

The SID/Label Binding TLV, which applies to SR and LDP interworking scenarios, can be used to advertise prefix-to-SID mappings.

Figure 1-2504 shows the format of the SID/Label Binding TLV.
Figure 1-2504 SID/Label Binding TLV format
Table 1-1022 Fields in the SID/Label Binding TLV

Field

Length

Description

Type

8 bits

Unassigned. The recommended value is 149.

Length

8 bits

Packet length.

Flags

8 bits

Flags field.
+-+-+-+-+-+-+-+-+
|F|M|S|D|A|     |
+-+-+-+-+-+-+-+-+

Range

16 bits

Range of addresses and their associated prefix SIDs.

Prefix Length

8 bits

Prefix length.

Prefix

Variable

Prefix.

SubTLVs

Variable

Sub-TLVs, for example, the SID/Label sub-TLV.

IS-IS TLV Extensions for SR Capabilities

SR-Capabilities Sub-TLV

SR requires each node to advertise its SR capabilities and the range of global SIDs (or global indexes). To meet this requirement, the SR-Capabilities sub-TLV is defined and inserted into the IS-IS Router Capability TLV-242 for transmission. The SR-Capabilities sub-TLV can be propagated only within the same IS-IS level and must not be propagated across IS-IS levels.

Figure 1-2505 shows the format of the SR-Capabilities sub-TLV.
Figure 1-2505 SR-Capabilities sub-TLV format
Table 1-1023 Fields in the SR-Capabilities sub-TLV

Field

Length

Description

Type

8 bits

Unassigned. The recommended value is 2.

Length

8 bits

Packet length.

Flags

8 bits

Flags field. Figure 1-2506 shows the format of this field.
Figure 1-2506 Flags field
In this field:
  • I: MPLS IPv4 flag. If the flag is set, SR-MPLS IPv4 packets received by all interfaces can be processed.
  • V: MPLS IPv6 flag. If the flag is set, SR-MPLS IPv6 packets received by all interfaces can be processed.

Range

24 bits

SRGB range.

For example, the originating node advertises SR-Capabilities of the following ranges:

SR-Capability 1: range: 100; SID value: 100
SR-Capability 2: range: 100; SID value: 1000
SR-Capability 3: range: 100; SID value: 500

The receiving nodes concatenate the ranges in the received order and build the SRGB as follows:

SRGB = [100, 199]
       [1000, 1099]
       [500, 599]

Indexes are mapped into the concatenated ranges in order and may therefore span multiple ranges.

Index 0 indicates label 100.
...
Index 99 indicates label 199.
Index 100 indicates label 1000.
Index 199 indicates label 1099.
...
Index 200 indicates label 500.
...

SID/Label Sub-TLV (variable)

Variable

For details, see SID/Label Sub-TLV. The SID/Label sub-TLV contains the first value of the involved SRGB. When multiple SRGBs are configured, ensure that the SRGB sequence is correct and the SRGBs do not overlap.
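The index-to-label mapping over concatenated SRGB ranges, as in the example above, can be sketched as:

```python
def index_to_label(srgb_ranges, index):
    """Map a SID index to a label across SRGB ranges, which are
    concatenated in the order in which they were advertised."""
    for base, size in srgb_ranges:
        if index < size:
            return base + index
        index -= size    # skip this range and carry the remainder forward
    raise ValueError("index outside the concatenated SRGB")

# The ranges from the example above: [100, 199], [1000, 1099], [500, 599].
ranges = [(100, 100), (1000, 100), (500, 100)]
```

With these ranges, index 0 maps to label 100, index 100 to label 1000, and index 200 to label 500, as in the example.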

SR Local Block Sub-TLV

The SR Local Block sub-TLV contains the range of labels that a node has reserved for local SIDs. Local SIDs are used for adjacency SIDs, and may also be allocated by components other than the IS-IS protocol. For example, an application or a controller may instruct the node to allocate a specific local SID. Therefore, in order for such applications or controllers to know what local SIDs are available on the node, it is required that the node advertises its SR local block (SRLB).

Figure 1-2507 shows the format of the SR Local Block sub-TLV.
Figure 1-2507 SR Local Block sub-TLV format
Table 1-1024 Fields in the SR Local Block sub-TLV

Field

Length

Description

Type

8 bits

Unassigned. The recommended value is 22.

Length

8 bits

Packet length.

Flags

8 bits

Flags field. This field is not defined currently.

Range

24 bits

SRLB range.

SID/Label Sub-TLV (variable)

Variable

For details, see SID/Label Sub-TLV. The SID/Label sub-TLV contains the first value of the involved SRLB. When multiple SRLBs are configured, ensure that the SRLB sequence is correct and the SRLBs do not overlap.

A node advertising the SR Local Block sub-TLV may also have other label ranges, outside the SRLB, for its local allocation purposes that are not advertised in the SRLB. For example, it is possible that an adjacency SID is allocated using a local label, which is not part of the SRLB.

OSPF for SR-MPLS

SR-MPLS uses an IGP to advertise topology, prefix, Segment Routing global block (SRGB), and label information. This is achieved by extending the TLVs of protocol packets for the IGP. OSPF mainly defines TLVs and sub-TLVs for SIDs and NE SR-MPLS capabilities. These TLVs are carried in OSPFv2 Opaque LSAs.

Opaque LSA Header Format
To support SR, the OSPFv2 Extended Prefix Opaque LSA and OSPFv2 Extended Link Opaque LSA are added to OSPFv2 Opaque LSAs. In addition, LSA-related TLVs are added to the originally supported OSPFv2 Router Information (RI) Opaque LSAs.
  • OSPFv2 Router Information (RI) Opaque LSA: advertises whether SR is enabled on an OSPF device.
  • OSPFv2 Extended Prefix Opaque LSA: advertises additional OSPF prefix information. It can carry the OSPFv2 Extended Prefix TLV and OSPFv2 Extended Prefix Range TLV.
  • OSPFv2 Extended Link Opaque LSA: advertises additional OSPF link information. It can carry the OSPFv2 Extended Link TLV.
Figure 1-2508 Opaque LSA header format
Table 1-1025 Fields in the Opaque LSA header

Field

Length

Description

LS age

16 bits

Time elapsed after the LSA was generated, in seconds. The value of this field continually increases regardless of whether the LSA is transmitted over a link or saved in an LSDB.

Options

8 bits

Available options:

  • DN: DN bit
  • L: LLS data block
  • MC: whether IP multicast packets are forwarded
  • E: external route
  • MT: multi-topology route

LS Type

8 bits

LSA type.

Opaque Type

8 bits

Opaque type.

Opaque ID

24 bits

Opaque ID.

Advertising Router

32 bits

ID of the router that generates the LSA.

LS sequence number

32 bits

Sequence number of the LSA. Other devices can use this field to identify the latest LSA.

LS checksum

16 bits

Checksum of all fields except the LS age field.

Length

16 bits

Length of the LSA including the LSA header, in bytes.

TLVs

Variable

TLV content, including the OSPFv2 Extended Prefix TLV and OSPFv2 Extended Link TLV.
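The fixed 20-byte header laid out in the table can be decoded as sketched below (field names are illustrative):

```python
import struct

def parse_opaque_lsa_header(data: bytes):
    """Unpack the fixed 20-byte OSPFv2 Opaque LSA header.

    The 32-bit Link State ID carries the Opaque Type in its high-order
    8 bits and the Opaque ID in its low-order 24 bits."""
    (ls_age, options, ls_type, link_state_id,
     adv_router, seq, checksum, length) = struct.unpack("!HBBIIIHH", data[:20])
    return {
        "ls_age": ls_age,
        "options": options,
        "ls_type": ls_type,
        "opaque_type": link_state_id >> 24,
        "opaque_id": link_state_id & 0xFFFFFF,
        "advertising_router": adv_router,
        "sequence": seq,
        "checksum": checksum,
        "length": length,
    }
```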

Figure 1-2509 shows the format of the OSPFv2 Extended Prefix TLV.
Figure 1-2509 Format of the OSPFv2 Extended Prefix TLV
Table 1-1026 Fields in the OSPFv2 Extended Prefix TLV

Field

Length

Description

Type

16 bits

TLV type. The value is 1.

Length

16 bits

TLV length.

Route Type

8 bits

Route type:

  • 1: Intra-Area
  • 3: Inter-Area
  • 5: AS External

Prefix Length

8 bits

Prefix length.

AF

8 bits

Address family.

Flags

8 bits

Prefix flag.

Address Prefix

Variable

Address prefix.

Sub-TLVs

Variable

Sub-TLVs that support SR, such as the Prefix SID Sub-TLV or SID/Label Sub-TLV.

Figure 1-2510 shows the format of the OSPFv2 Extended Link TLV.

Figure 1-2510 Format of the OSPFv2 Extended Link TLV
Table 1-1027 Fields in the OSPFv2 Extended Link TLV

Field

Length

Description

Type

16 bits

TLV type. The value is 1.

Length

16 bits

TLV length.

Link Type

8 bits

Link type.

Link ID

32 bits

Link ID.

Link Data

32 bits

Link data.

Sub-TLVs

Variable

Sub-TLVs that support SR, such as the Adj-SID Sub-TLV, LAN Adj-SID Sub-TLV, or SID/Label Sub-TLV.

Table 1-1028 describes OSPFv2 extensions for SR.

Table 1-1028 OSPF TLV extensions for SR

Type

Supported TLV

Function

Supported Sub-TLVs (Only Some Examples Provided)

OSPFv2 Router Information (RI) Opaque LSA

SR-Algorithm TLV

Advertises the algorithm that is used.

-

SID/Label Range TLV

Advertises the SR SID or MPLS label range.

SID/Label Sub-TLV

SR Local Block TLV

Advertises the range of labels reserved for local SIDs.

SID/Label Sub-TLV

SRMS Preference TLV

Advertises the preference of an NE functioning as an SR mapping server.

-

OSPFv2 Extended Prefix Opaque LSA TLVs

OSPFv2 Extended Prefix TLV

Advertises additional OSPF prefix information.

  • SID/Label Sub-TLV
  • Prefix SID Sub-TLV

OSPF Extended Prefix Range TLV

Advertises the prefix range.

SID/Label Sub-TLV

OSPFv2 Extended Link Opaque LSA TLVs

OSPFv2 Extended Link TLV

Advertises additional OSPF link information.

  • SID/Label Sub-TLV
  • Adj-SID Sub-TLV
  • LAN Adj-SID Sub-TLV
SR-Algorithm TLV

A node may use different algorithms to calculate reachability to other nodes or to prefixes attached to these nodes. Examples of these algorithms are SPF and various SPF variant algorithms. The SR-Algorithm TLV allows the node to advertise the algorithms that the node is currently using.

Figure 1-2511 shows the format of the SR-Algorithm TLV.
Figure 1-2511 SR-Algorithm TLV format
Table 1-1029 Fields in the SR-Algorithm TLV

Field

Length

Description

Type

16 bits

TLV type value.

Length

16 bits

Packet length.

Algorithm

8 bits

Algorithm that is used.

SID/Label Range TLV

The SID/Label Range TLV is used to advertise multiple SIDs or labels at a time, or a SID or label range.

Figure 1-2512 shows the format of the SID/Label Range TLV.
Figure 1-2512 SID/Label Range TLV format
Table 1-1030 Fields in the SID/Label Range TLV

Field

Length

Description

Type

16 bits

TLV type value.

Length

16 bits

Packet length.

Range Size

24 bits

SRGB range.

Reserved

8 bits

Reserved field.

Sub-TLV (variable)

Variable length

The SID/Label Sub-TLV is mainly involved, containing the first value of the corresponding SID/label range.

This field and the Range Size field jointly determine a SID or label range.

SR Local Block TLV

The SR Local Block TLV contains the range of labels reserved by a node for local SIDs. Local SIDs are used for adjacency SIDs, and may also be allocated by other components. For example, applications or controllers may instruct a node to allocate a special local SID. The node must therefore advertise its SR local block (SRLB) so that these applications or controllers can learn what local SIDs are available on the node.

Figure 1-2513 shows the format of the SR Local Block TLV.
Figure 1-2513 Format of the SR Local Block TLV
Table 1-1031 Fields in the SR Local Block TLV

Field

Length

Description

Type

16 bits

TLV type value.

Length

16 bits

Packet length.

Range Size

24 bits

SRLB range.

Reserved

8 bits

Reserved field.

Sub-TLV (variable)

Variable length

The SID/Label Sub-TLV is mainly involved, containing the first value of the corresponding SRLB. When multiple SRLBs are configured, ensure that the SRLB sequence is correct and the SRLBs do not overlap.

SRMS Preference TLV
The SRMS Preference TLV advertises the preference of the local node when it functions as an SR mapping server (SRMS). This preference is used in SRMS election. Figure 1-2514 shows the format of the SRMS Preference TLV.
Figure 1-2514 SRMS Preference TLV format
Table 1-1032 Fields in the SRMS Preference TLV

Field

Length

Description

Type

16 bits

TLV type value.

Length

16 bits

Packet length.

Preference

8 bits

Priority of the SR mapping server.

Reserved

8 bits

Reserved field.

SID/Label Sub-TLV
The SID/Label Sub-TLV contains a SID or an MPLS label. Figure 1-2515 shows the format of the SID/Label Sub-TLV.
Figure 1-2515 SID/Label Sub-TLV format
Table 1-1033 Fields in the SID/Label Sub-TLV

Field

Length

Description

Type

16 bits

TLV type value.

Length

16 bits

Packet length.

SID/Label (variable)

Variable length

If the Length field value is set to 3, the 20 rightmost bits indicate an MPLS label.

If the Length field value is set to 4, the field indicates a 32-bit SID.

Prefix SID Sub-TLV
The Prefix-SID Sub-TLV carries IGP-Prefix-SID information in the format shown in Figure 1-2516.
Figure 1-2516 Prefix-SID Sub-TLV format
Table 1-1034 Fields in the Prefix-SID Sub-TLV

Field

Length

Description

Type

16 bits

TLV type value.

Length

16 bits

Packet length.

Flags

8 bits

Flags field. Figure 1-2517 shows its format.
Figure 1-2517 Flags field
The meaning of each flag is as follows:
  • NP: no-PHP flag. If this flag is set, PHP is disabled so that the penultimate node sends a labeled packet to the egress.
  • M: Mapping server flag. If the flag is set, a SID is advertised by a mapping server.
  • E: explicit-null flag. If this flag is set, the explicit null label function is enabled. An upstream neighbor must replace an existing label with an explicit null label before forwarding a packet.
  • V: value flag. If this flag is set, a prefix SID carries a value, instead of an index. By default, the flag is not set.
  • L: local flag. If this flag is set, the value or index carried in a prefix SID is of local significance. By default, the flag is not set.

When computing the outgoing label for a packet destined for a prefix, a node must consider the NP and E flags in the prefix SID advertised by the next hop, regardless of whether the optimal path to the prefix passes through that next hop.

The following behavior is related to the NP and E flags:
  • If the NP flag is not set, any upstream node of the prefix SID producer must strip off the prefix SID, which is similar to PHP in MPLS forwarding. The MPLS EXP bit is also cleared. In this case, the received E flag is ignored.
  • If the NP flag is set, the following situations occur:
    • If the E flag is not set, any upstream node of the prefix SID producer must reserve the prefix SID on the top of the label stack. This method is used in path stitching. For example, a prefix SID producer may use this label to forward a packet to another MPLS LSP.
    • If the E flag is set, any upstream node of the prefix SID producer must replace the prefix SID label with an explicit null label. In this mode, the MPLS EXP flag is retained. If the prefix SID producer is the destination, the node can receive the original MPLS EXP field value. The MPLS EXP flag can be used in QoS services.

Reserved

8 bits

Reserved field.

MT-ID

8 bits

Multi-topology ID.

Algorithm

8 bits

Algorithm:
  • 0: shortest path first (SPF) algorithm
  • 1: strict SPF algorithm, which is not supported currently

SID/Index/Label (variable)

Variable length

This field contains either of the following information based on the V and L flags:
  • 4-byte label offset value within the SID/label range. In this case, V and L flags are not set.
  • 3-byte local label: The 20 rightmost bits are a label value. In this case, the V and L flags must be set.
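The V/L-flag-dependent interpretation of the SID/Index/Label field can be sketched in Python as follows. This sketch is illustrative only (the function name and parameters are ours); the SRGB-offset formula matches the one used in the SR LSP creation examples later in this document:

```python
def outgoing_label(v_flag, l_flag, value, nexthop_srgb_base=None):
    """Derive the outgoing label from a prefix SID (sketch).

    If neither V nor L is set, 'value' is a 4-byte index and the label
    is that offset into the SRGB advertised by the next hop. If both
    flags are set, 'value' is a local label carried in the 20
    rightmost bits of a 3-byte field.
    """
    if not v_flag and not l_flag:
        # Index case: label = start of next hop's SRGB + index.
        return nexthop_srgb_base + value
    if v_flag and l_flag:
        # Local label case: keep only the 20 rightmost bits.
        return value & 0xFFFFF
    raise ValueError("unsupported V/L flag combination")
```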
Adj-SID Sub-TLV
The Adj-SID Sub-TLV is an optional sub-TLV carrying IGP adjacency SID information. Figure 1-2518 shows the format of this sub-TLV.
Figure 1-2518 Adj-SID Sub-TLV format
Table 1-1035 Fields in the Adj-SID Sub-TLV

Field

Length

Description

Type

16 bits

TLV type value.

Length

16 bits

Packet length.

Flags

8 bits

Flags field. Figure 1-2519 shows its format.
Figure 1-2519 Flags field
The meaning of each flag is as follows:
  • V: Value/Index flag. If this flag is set, an Adj-SID carries a label value. If this flag is not set, an Adj-SID carries a relative index.
  • L: Local/Global flag. If this flag is set, the Adj-SID value or index is of local significance. If this flag is not set, the Adj-SID value or index is of global significance.
  • G: group flag. If this flag is set, an Adj-SID is an adjacency group.
  • P: persistent flag. If this flag is set, an Adj-SID is persistently allocated and remains unchanged across device restarts and interface flapping.

Reserved

8 bits

Reserved field.

MT-ID

8 bits

Multi-topology ID.

Weight

8 bits

Weight. The Adj-SID weight is used for load balancing.

SID/Index/Label (variable)

Variable length

This field contains either of the following information based on the V and L flags:
  • 3-byte local label: The 20 rightmost bits are a label value. In this case, the V and L flags must be set.
  • 4-byte label offset value within the SID/label range. In this case, V and L flags are not set.
LAN Adj-SID Sub-TLV

In SR scenarios, each node needs to advertise the Adj-SID of each of its neighbors. On a broadcast, NBMA, or mixed network, the LAN-Adj-SID Sub-TLV is used to send SID/label information to non-DR devices.

Figure 1-2520 shows the format of the LAN-Adj-SID Sub-TLV. Compared with the Adj-SID Sub-TLV, the LAN Adj-SID Sub-TLV has an additional Neighbor ID field, which represents the router ID of the device that advertises the LAN Adj-SID Sub-TLV.
Figure 1-2520 LAN-Adj-SID Sub-TLV format

BGP for SR-MPLS

IGP for SR-MPLS allocates SIDs only within an autonomous system (AS). Proper SID orchestration in an AS facilitates the planning of an optimal path in the AS. However, IGP for SR-MPLS does not work if paths cross multiple ASs on a large-scale network. BGP for SR-MPLS extends BGP to support Segment Routing, enabling a device to allocate BGP peer SIDs based on BGP peer information and report the SID information to a controller. SR-MPLS TE uses BGP peer SIDs for path orchestration, thereby obtaining the optimal inter-AS E2E SR-MPLS TE path. BGP for SR-MPLS involves the BGP egress peer engineering (EPE) extension and the BGP-LS extension.

BGP EPE
BGP EPE allocates BGP peer SIDs to inter-AS paths. BGP-LS advertises the BGP peer SIDs to the controller. A forwarder that does not establish a BGP-LS peer relationship with the controller can run BGP-LS to advertise a peer SID to a BGP peer that has established a BGP-LS peer relationship with the controller. The BGP peer then runs BGP-LS to advertise the peer SID to the network controller. As shown in Figure 1-2521, BGP EPE can allocate peer node segments (Peer-Node SIDs), peer adjacency segments (Peer-Adj SIDs), and Peer-Set SIDs to peers.
  • A Peer-Node SID identifies a peer node. The peers at both ends of each BGP session are assigned Peer-Node SIDs. An EBGP peer relationship established based on loopback interfaces may traverse multiple physical links. In this case, the Peer-Node SID of a peer is mapped to multiple outbound interfaces.

  • A Peer-Adj SID identifies an adjacency to a peer. An EBGP peer relationship established based on loopback interfaces may traverse multiple physical links. In this case, each adjacency is assigned a Peer-Adj SID. Only a specified link (mapped to a specified outbound interface) can be used for forwarding.

  • A Peer-Set SID identifies a group of peers that are planned as a set. BGP allocates a Peer-Set SID to the set. Each Peer-Set SID can be mapped to multiple outbound interfaces during forwarding. Because one peer set consists of multiple peer nodes and peer adjacencies, the SID allocated to a peer set is mapped to multiple Peer-Node SIDs and Peer-Adj SIDs.
Figure 1-2521 BGP EPE networking

On the network shown in Figure 1-2521, ASBR1 and ASBR3 are directly connected through two physical links. An EBGP peer relationship is established between ASBR1 and ASBR3 through loopback interfaces. ASBR1 runs BGP EPE to assign the Peer-Node SID 28001 to its peer (ASBR3) and to assign the Peer-Adj SIDs 18001 and 18002 to the physical links. For an EBGP peer relationship established between directly connected physical interfaces, BGP EPE allocates a Peer-Node SID rather than a Peer-Adj SID. For example, on the network shown in Figure 1-2521, BGP EPE allocates only Peer-Node SIDs 28002, 28003, and 28004 to the ASBR1-ASBR5, ASBR2-ASBR4, and ASBR2-ASBR5 peer relationships, respectively.

Peer-Node SIDs and Peer-Adj SIDs are effective only on local devices. Different devices can have the same Peer-Node SID or Peer-Adj SID. Currently, BGP EPE supports IPv4 EBGP peers, IPv4 IBGP peers, and BGP confederations.

BGP EPE allocates SIDs to BGP peers and links, but cannot be used to construct a forwarding tunnel. BGP peer SIDs must be used with IGP SIDs to establish an E2E tunnel. Currently, IPv4 SR mainly involves SR LSPs (SR-MPLS BE) and SR-MPLS TE tunnels.
  • SR LSPs are dynamically computed by forwarders based on intra-AS IGP SIDs. Because peer SIDs allocated by BGP EPE cannot be used by an IGP, inter-AS SR LSPs are not supported.

  • E2E SR-MPLS TE tunnels can be established by manually specifying an explicit path. They can also be orchestrated by a network controller provided that the specified path contains inter-AS link information.

Traffic Statistics Collection

On the network shown in Figure 1-2521, ASBR1 has two Peer-Adj SIDs (18001 and 18002) to ASBR3, one Peer-Node SID (28001) to ASBR3, and one Peer-Node SID (28002) to ASBR5. You can collect statistics about the traffic forwarded over the segment list containing the specified EPE SID. For example, statistics about the traffic forwarded over the outbound interface to which SID 18001 (from ASBR1 to ASBR3) points can be collected. You can also collect statistics about the traffic forwarded to a specified peer. For example, statistics about all traffic forwarded through EPE SIDs from ASBR1 to ASBR3 can be collected, including statistics about all traffic forwarded through BGP EPE SIDs 18001, 18002, and 28001.

BGP EPE is typically used to orchestrate SIDs at inter-AS egresses. Collecting traffic statistics using BGP EPE facilitates monitoring egress traffic in an AS.

Fault Association

On the network shown in Figure 1-2522, intra-AS SR-MPLS TE tunnels are deployed for AS 65001, and SBFD for SR-MPLS TE tunnel is configured. BGP EPE is configured among ASBR1, ASBR2, and ASBR3.

Two inter-AS SR-MPLS TE tunnels are deployed between CSG1 and ASBR3. The tunnels are orchestrated using intra-AS SIDs and inter-AS BGP SIDs. If a BGP EPE link between ASBRs fails, CSG1, the ingress of the SR-MPLS TE tunnel, cannot detect the failure, and therefore a traffic black hole may occur.

Currently, two methods are available for you to solve the preceding problem:

  1. If a link between ASBR1 and ASBR3 fails, BGP EPE triggers SBFD for SR-MPLS TE tunnel in the local AS to go down. This enables the ingress of the SR-MPLS TE tunnel to rapidly detect the failure and switch traffic to another normal tunnel, such as tunnel 2.
  2. If a link between ASBR1 and ASBR3 fails, ASBR1 pops the BGP EPE label from the received SR packet and searches the IP routing table based on the destination address of the packet. In this way, the packet may be forwarded to ASBR3 through the IP link ASBR1-ASBR2-ASBR3. This method applies when no backup inter-AS SR-MPLS TE tunnels exist on the network.
  • You can select either method as needed. Selecting both methods at the same time is not allowed.
  • The preceding methods are also supported if all the links corresponding to a BGP Peer-Set SID fail.
Figure 1-2522 BGP EPE fault association
BGP-LS
Inter-AS E2E SR-MPLS TE tunnels can be established by manually specifying explicit paths, or they can be orchestrated by a controller. In a controller-based orchestration scenario, intra- and inter-AS SIDs are reported to the controller using BGP-LS. Inter-AS links must support TE link attribute configuration and reporting to the controller. This enables the controller to compute primary and backup paths based on link attributes. BGP EPE discovers network topology information and allocates SID information. BGP-LS then packages this information into the Link network layer reachability information (NLRI) field and reports it to the controller. Figure 1-2523 and Figure 1-2524 show Link NLRI formats.
Figure 1-2523 Link NLRI format for Peer-Node and Peer-Adj SIDs advertised by BGP-LS
Figure 1-2524 Link NLRI format for Peer-Set SIDs advertised by BGP-LS
Table 1-1036 Link NLRI fields

Field

Description

NLRI

Network layer reachability information. It consists of the following parts:
  • LocalDescriptor: local descriptor, which contains a local router ID, a local AS number, and a BGP-LS ID
  • RemoteDescriptor: remote descriptor, which contains a peer router ID and a peer AS number
  • LinkDescriptor: link descriptor, which contains addresses used by a BGP session

LinkAttribute

Link information. It is a part of the Link NLRI.
  • Peer-Node SID: Peer-Node SID TLV
  • Peer-Adj SID: Peer-Adj SID TLV
  • Peer-Set SID: Peer-Set SID TLV
  • Administrative Group: link management group attribute
  • Max Link BW: maximum link bandwidth
  • Max Reservable Link BW: maximum reservable link bandwidth
  • Unreserved BW: remaining link bandwidth
  • Shared Risk Link Group: SRLG
A Peer-Node SID TLV and a Peer-Adj SID TLV have the same format. Figure 1-2525 shows the format of the Peer-Node SID TLV and Peer-Adj SID TLV.
Figure 1-2525 Peer-Node SID TLV and Peer-Adj SID TLV format
Table 1-1037 Peer-Node SID TLV and Peer-Adj SID TLV fields

Field

Length

Description

Type

16 bits

TLV type.

Length

16 bits

Packet length.

Flags

8 bits

Flags field used in a Peer-Adj SID TLV. Figure 1-2526 shows the format of this field.
Figure 1-2526 Flags field
In this field:
  • V: value flag. If this flag is set, an Adj-SID carries a label value. By default, the flag is set.
  • L: local flag. If this flag is set, the value or index carried by the Adj-SID has local significance. By default, the flag is set.

Weight

8 bits

Weight of the Peer-Adj SID, used for load balancing purposes.

Reserved

16 bits

Reserved field.

SID/Label/Index (variable)

Variable length

This field contains either of the following information based on the V and L flags:
  • A 3-octet local label where the 20 rightmost bits are used for encoding the label value. In this case, the V and L flags must be set.
  • A 4-octet index defining the offset in the Segment Routing global block (SRGB).

SR-MPLS BE

A Segment Routing (SR) label switched path (LSP) is a label forwarding path that is established using SR and guides data packet forwarding through a prefix or node SID. Segment Routing-MPLS best effort (SR-MPLS BE) refers to the mode in which an IGP runs the shortest path first (SPF) algorithm to compute an optimal SR LSP.

The establishment and data forwarding of SR LSPs are similar to those of LDP LSPs. SR LSPs have no tunnel interfaces.

SR LSP Creation

SR LSP creation involves the following operations:

  • Devices report topology information to a controller (if the controller is used for LSP creation) and are allocated with labels.

  • The devices compute paths.

SR LSPs are created primarily based on prefix labels. Specifically, the destination node runs an IGP to advertise a prefix SID. After receiving a packet carrying the SID, forwarders parse the packet to obtain the SID and compute label values based on their own SRGBs. Then, based on the IGP-collected topology information, each node runs the SPF algorithm to compute a label forwarding path, and delivers the computed next hop and OuterLabel information to the forwarding table to guide data packet forwarding.
Figure 1-2527 Prefix label-based LSP creation

Table 1-1038 describes the process of prefix label-based LSP creation on the network shown in Figure 1-2527.

Table 1-1038 LSP creation process

Step

Device

Operation

1

D

An SRGB is configured on device D and a prefix SID is configured on the loopback interface of device D for forwarding entry generation and delivery. Device D then encapsulates the SRGB and prefix SID into a Link State packet (LSP), for example, IS-IS Router Capability TLV-242 that contains the SR-Capabilities sub-TLV, and floods the LSP across the network through an IGP.

After receiving the LSP, other devices on the network parse the LSP to obtain the prefix SID advertised by device D, and calculate label values based on their own SRGBs as well as OuterLabel values based on the SRGBs advertised by the next hops. They then, based on IGP-collected topology information, calculate label forwarding paths and generate forwarding entries accordingly.

2

C

Device C parses the LSP to obtain the prefix SID advertised by device D and calculates a label value based on its local SRGB [36000–65535] through the following formula: Label = Start value of the SRGB + Prefix SID value. Here, the start value of the SRGB is 36000, and the prefix SID value is 100. Therefore, the label value is 36100 (36000 + 100).

Based on IS-IS topology information, device C calculates the OuterLabel value through the following formula: OuterLabel = Start value of the SRGB advertised by the next hop + Prefix SID value. Here, the next hop is device D, which advertises the SRGB [16000–65535]. Therefore, the OuterLabel value is 16100 (16000 + 100).

3

B

The calculation process on device B is similar to that on device C. In this example, the label value is 26100 (26000 + 100), and the OuterLabel value is 36100 (36000 + 100).

4

A

The calculation process on device A is also similar to that on device C. In this example, the label value is 20100 (20000 + 100), and the OuterLabel value is 26100 (26000 + 100).
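The per-node calculations in Table 1-1038 can be reproduced with a short Python sketch. This is illustrative only and not part of the product; it simply applies the formula Label = SRGB start value + prefix SID along the path A-B-C-D from Figure 1-2527:

```python
# SRGB start values advertised by each node, as in Figure 1-2527.
srgb_start = {"A": 20000, "B": 26000, "C": 36000, "D": 16000}
prefix_sid = 100                      # prefix SID advertised by device D
path = ["A", "B", "C", "D"]           # SPF-computed path toward D

entries = {}
for i, node in enumerate(path[:-1]):
    label = srgb_start[node] + prefix_sid          # local (incoming) label
    outer = srgb_start[path[i + 1]] + prefix_sid   # OuterLabel from next hop's SRGB
    entries[node] = (label, outer)

print(entries)  # A: (20100, 26100), B: (26100, 36100), C: (36100, 16100)
```

The printed values match steps 2 to 4 of Table 1-1038: each node's OuterLabel equals the next node's local label, which is what allows hop-by-hop label swapping along the LSP.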

SR LSP Creation Across IGP Areas

The following uses IS-IS as an example to describe how to create an SR LSP across IGP areas. IS-IS packets can be flooded only within an area. Therefore, to create an SR LSP across IGP areas on the network shown in Figure 1-2528, inter-area prefix SID advertisement is required.

Figure 1-2528 SR LSP creation across IGP areas

DeviceA and DeviceD are deployed in different areas, all devices run IS-IS, and SR is deployed. It is required that an SR LSP be established from DeviceA to DeviceD.

An SRGB is configured on DeviceD, and a prefix SID is configured on DeviceD's loopback interface. DeviceD generates and delivers forwarding entries, encapsulates the SRGB and prefix SID into an LSP (for example, IS-IS Router Capability TLV-242 that contains the SR-Capabilities sub-TLV), and floods the LSP across the network. After receiving the LSP, DeviceC parses the LSP to obtain the prefix SID, calculates and delivers forwarding entries, and leaks the prefix SID and prefix to the Level-2 area. Similarly, DeviceB parses the LSP to obtain the prefix SID, calculates and delivers forwarding entries, and leaks the prefix SID and prefix to the Level-1 area. DeviceA also parses the LSP to obtain the prefix SID, uses IS-IS topology information and the Dijkstra algorithm to compute an LSP, and generates an LSP forwarding entry.

Data Forwarding

Similar to MPLS, SR involves three types of label operations: push, swap, and pop.

  • Push: When a packet enters an SR LSP, the ingress adds a label between the Layer 2 and IP headers of the packet or adds a new label on top of the existing label stack.

  • Swap: When a packet is forwarded within the SR domain, the corresponding node uses the label allocated by the next hop to replace the top label according to the label forwarding table.

  • Pop: When a packet leaves the SR domain, the corresponding node searches for the outbound interface according to the top label in the packet and then removes the top label.

Figure 1-2529 Prefix label-based data forwarding

Table 1-1039 describes the process of data forwarding on the network shown in Figure 1-2529.

Table 1-1039 Data forwarding process

Step

Device

Operation

1

A

After receiving a data packet, node A adds label 26100 to the packet and then forwards the packet.

2

B

After receiving the labeled packet, node B swaps label 26100 with label 36100 and then forwards the packet.

3

C

After receiving the labeled packet, node C swaps label 36100 with label 16100 and then forwards the packet.

4

D

Node D removes label 16100 and then forwards the packet along the corresponding route.
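The push/swap/pop sequence in Table 1-1039 can be modeled with a small Python sketch. The dictionary below is an illustrative stand-in for each node's label forwarding table (PHP disabled, labels taken from Figure 1-2529), not a product data structure:

```python
# (node, incoming top label) -> (action, new top label)
lfib = {
    ("A", None):  ("push", 26100),   # ingress pushes the label toward D
    ("B", 26100): ("swap", 36100),   # swap with the label allocated by C
    ("C", 36100): ("swap", 16100),   # swap with the label allocated by D
    ("D", 16100): ("pop",  None),    # egress pops and forwards by route
}

def traverse(path):
    """Walk a packet along the path, recording each label operation."""
    label, trace = None, []
    for node in path:
        action, new_label = lfib[(node, label)]
        label = new_label
        trace.append((node, action, label))
    return trace

print(traverse(["A", "B", "C", "D"]))
```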

PHP, MPLS QoS, and TTL

Because the outermost label in a packet is useless for the egress, penultimate hop popping (PHP) can be enabled to remove the label on the penultimate node, thereby relieving the burden on the egress. After receiving the packet, the egress directly forwards it over IP or based on the next label.

PHP is configured on the egress. On the network shown in Figure 1-2529, PHP is not enabled. Therefore, although node C is the penultimate hop of the involved SR tunnel, the packet sent from node C to node D still carries an SR label that is used to reach node D. However, if PHP is enabled, the packet sent from node C to node D does not carry any SR label.

Enabling PHP affects both the MPLS QoS and TTL functions on the egress. For details, see Table 1-1040.

Table 1-1040 PHP, MPLS QoS, and TTL

Label Type

Description

MPLS EXP (QoS) on the Egress

MPLS TTL on the Egress

Remarks

Explicit-null label

PHP is not supported, and the egress assigns an explicit-null label to the penultimate hop. In an IPv4 scenario, the explicit-null label value is 0.

The MPLS EXP field is reserved, so that QoS is supported.

MPLS TTL processing is normal.

Label resources on the egress are saved. If E2E services carry QoS attributes in the EXP field of a label, an explicit-null label can be used.

Implicit-null label

PHP is supported, and the egress assigns an implicit-null label to the penultimate hop. The implicit-null label value is 3.

If an implicit-null label is assigned to a node, the node pops the label in the received packet, instead of replacing the top label with this label. After receiving the packet, the egress directly forwards it over IP or based on the next label.

There is no MPLS EXP field in the packet received by the egress, so QoS is not supported.

There is no MPLS TTL field in the packet received by the egress, so copying the MPLS TTL value to the IP TTL field is not supported.

The forwarding burden on the egress is reduced, and forwarding efficiency is improved.

Non-null label

PHP is not supported, and the egress assigns a common label to the penultimate hop.

The MPLS EXP field is reserved, so that QoS is supported.

MPLS TTL processing is normal.

Using a non-null label consumes a large number of resources on the egress and is therefore not recommended. This type of label can be used to differentiate services by label on the egress.
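The behavior summarized in Table 1-1040 can be condensed into a small Python sketch. The table and function below are illustrative only (names are ours, not product CLI); the label values 0 and 3 are the standard IPv4 explicit-null and implicit-null label values:

```python
# mode: (label advertised to the penultimate hop,
#        EXP/QoS usable on egress, MPLS TTL usable on egress)
PHP_MODES = {
    "explicit-null": (0,    True,  True),
    "implicit-null": (3,    False, False),
    "non-null":      (None, True,  True),   # a common label; value is not fixed
}

def penultimate_action(mode):
    """What the penultimate hop does with the top label (sketch)."""
    # Only implicit-null makes the penultimate hop pop the label;
    # otherwise it swaps in the label assigned by the egress.
    return "pop" if mode == "implicit-null" else "swap"
```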

SR-MPLS BE and LDP Communication

SR-MPLS is an emerging technology that has received increasing attention. It is introduced to simplify network deployment and management and reduce capital expenditure (CAPEX) and operating expenditure (OPEX). MPLS LDP is a mainstream tunneling technique that is widely used on bearer networks. LDP and SR-MPLS BE will coexist for a long time before LDP is completely replaced. Interworking between an LDP network and an SR-MPLS BE network therefore becomes a problem that must be addressed.

The SR-MPLS BE and LDP interworking technique allows both segment routing and LDP to work within the same network. This technique connects an SR-MPLS BE network to an LDP network to implement MPLS forwarding.

To implement the interworking between the LDP and SR-MPLS BE networks, the SR-MPLS BE network must have devices that replace SR-incapable LDP devices to advertise SIDs. Such devices are mapping servers.
  • Mapping server: supports mapping between prefixes and SIDs and advertises the mapping to a mapping client.
  • Mapping client: receives mapping between prefixes and SIDs sent by the mapping server and creates mapping entries.

Since LSPs are unidirectional, SR-MPLS BE and LDP interworking involves two directions: SR to LDP and LDP to SR.

SR-MPLS to LDP
Figure 1-2530 describes the process of creating an E2E SR-MPLS-to-LDP LSP.
Figure 1-2530 Process of creating an E2E SR-MPLS-to-LDP LSP
  1. On PE2, an IP address prefix is configured. LDP assigns a label to the prefix. PE2 sends a Label Mapping message upstream to P3.
  2. Upon receipt of the message, P3 assigns a label to the prefix and sends a Label Mapping message upstream to P2.
  3. Upon receipt of the message, P2 creates an LDP LSP to PE2.
  4. On P2, the mapping server function is enabled so that P2 maps an LDP label carried in the IP address prefix to a SID.
  5. P2 advertises a Mapping TLV upstream to P1.
  6. P1 advertises a Mapping TLV upstream to PE1.
  7. PE1 parses the Mapping TLV and creates an SR LSP to P2.
  8. P2 creates mapping between the SR and LDP LSPs.

During data forwarding, P2 has no SR-MPLS label destined for PE2, so it swaps the SR-MPLS label for an LDP label based on the mapping between the prefix and SID.

LDP to SR-MPLS
Figure 1-2531 describes the process of creating an E2E LDP-to-SR-MPLS LSP.
Figure 1-2531 Process of creating an E2E LDP-to-SR-MPLS LSP
  1. An IP address prefix is assigned to PE1, and a SID is set for the prefix. PE1 advertises the prefix and SID to P1 using an IGP.
  2. Upon receipt of the information, P1 advertises the prefix and SID to P2 using an IGP.
  3. Upon receipt of the prefix and SID, P2 creates an SR LSP to PE1.
  4. On P2, proxy LDP egress is configured so that P2 maps the SID carried in the IP address prefix to an LDP label. Once proxy LDP egress is configured and the route to the peer is reachable, the local node sends a Label Mapping message upstream.
  5. P2 sends a Label Mapping message upstream to P3, and P3 sends a Label Mapping message upstream to PE2.
  6. PE2 parses the received Label Mapping message and creates an LDP LSP to P2.
  7. P2 creates mapping between the SR and LDP LSPs.

During data forwarding, P2 has no LDP label destined for PE1, so it swaps the LDP label for an SR-MPLS label based on the mapping between the prefix and SID.
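The label swapping that P2 performs in both interworking directions can be sketched in Python. This is an illustration of the stitching logic only; the function and dictionary names are ours and do not correspond to any product interface:

```python
def stitch(prefix, incoming, sr_label_for, ldp_label_for):
    """Swap labels on the SR/LDP border node (sketch).

    sr_label_for / ldp_label_for map a prefix to the label learned
    for it via SR-MPLS or LDP, respectively.
    """
    if incoming == sr_label_for.get(prefix):
        # SR-MPLS to LDP: no SR label downstream, use the LDP label.
        return ldp_label_for[prefix]
    if incoming == ldp_label_for.get(prefix):
        # LDP to SR-MPLS: no LDP label downstream, use the SR label.
        return sr_label_for[prefix]
    raise KeyError("no SID/label mapping for incoming label")

# Hypothetical labels on P2 for the prefix of the remote PE.
sr_labels = {"10.2.2.2/32": 16100}
ldp_labels = {"10.2.2.2/32": 1024}
```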

SR-MPLS BE over RSVP-TE

Fundamentals

RSVP-TE is an MPLS tunneling technique used to generate LSPs through which data packets can be transparently transmitted. In contrast, SR-MPLS BE is an MPLS tunneling technology used to generate SR LSPs. SR-MPLS BE over RSVP-TE enables an SR LSP to traverse an RSVP-TE area, so that an RSVP-TE tunnel functions as a hop of the SR LSP.

After an RSVP-TE tunnel is established, an IGP (such as IS-IS) performs local computation or advertises LSAs to select a TE tunnel interface as the outbound interface. The source router can be considered as being directly connected to the destination router of the RSVP-TE tunnel through RSVP-TE tunnel interfaces (logical interfaces), and packets are transparently transmitted through the RSVP-TE tunnel.

On the network shown in Figure 1-2532, P1, P2, and P3 all belong to an RSVP-TE area. PE1 and PE2 are PEs in the same VPN. SR-MPLS runs between PE1 and P1 and between P3 and PE2. This example uses the establishment of an SR LSP from PE1 to PE2 to describe how to establish an SR LSP over RSVP-TE.
  1. An RSVP-TE tunnel from P1 to P3 is established. P3 assigns RSVP-Label-1 to P2, and P2 assigns RSVP-Label-2 to P1.

  2. SR LSP establishment is triggered on PE2. In this case, PE2 uses IGP to send a specified prefix SID and its own SRGB to P3, which then calculates incoming and outgoing labels.

  3. P3 uses IGP to send the prefix SID and its own SRGB to P1, which then calculates incoming and outgoing labels.

  4. P1 uses IGP to send the prefix SID and its own SRGB to PE1, which then calculates the outgoing label. Finally, the SR LSP from PE1 to PE2 is established.

Figure 1-2532 SR-MPLS BE over RSVP-TE label allocation
Usage Scenario

SR-MPLS BE over RSVP-TE is typically used for VPN services. On the network shown in Figure 1-2533, deploying TE on the entire network in order to implement MPLS traffic engineering would be a difficult task for carriers. To address this issue, carriers can plan a core area for TE deployment, and deploy SR-MPLS BE outside this area.

Figure 1-2533 SR-MPLS BE over RSVP-TE networking topology

SR LSP deployment outperforms RSVP-TE tunnel deployment in terms of operations and maintenance, and using SR-MPLS BE consumes fewer system resources than deploying soft-state RSVP. In addition to these advantages, SR-MPLS BE over RSVP-TE requires RSVP-TE tunnels to be deployed only in the core area, eliminating the need to establish full-mesh RSVP-TE tunnels between edge PEs. This simplifies network deployment and maintenance and reduces the pressure on PEs. Furthermore, the advantages of RSVP-TE tunnels in protection switching, path planning, bandwidth protection, and other aspects can be fully utilized in the core area of the network.

SR-MPLS Flex-Algo

Background of SR-MPLS Flex-Algo

Traditionally, IGPs can use only the SPF algorithm to calculate the shortest paths to destination addresses based on link costs. As the SPF algorithm is fixed and cannot be adjusted by users, the optimal paths cannot be calculated according to users' diverse requirements, such as the requirement for traffic forwarding along the lowest-delay path or without passing through certain links.

On a network, constraints used for path calculation may be different. For example, as autonomous driving requires an ultra-low delay, an IGP needs to use delay as the constraint to calculate paths on such a network. Another constraint that needs to be considered is cost, so some links with high costs need to be excluded in path calculation. These constraints may also be combined.

To make path calculation more flexible, users may want to customize IGP route calculation algorithms to meet their varying requirements. They can define an algorithm value to identify a fixed algorithm. When all devices on a network use the same algorithm, their calculation results are also the same, preventing loops. Since users, not standards organizations, are the ones to define these algorithms, they are called Flex-Algos.

With Flex-Algo, an IGP can automatically calculate eligible paths based on the link cost, delay, or TE constraint to flexibly meet TE requirements. This means that when SR-MPLS uses an IGP to calculate paths, prefix SIDs can be associated with Flex-Algos to calculate SR-MPLS BE paths that meet different requirements.

SR-MPLS Flex-Algo Advertisement

Each device can use an IGP to advertise its supported Flex-Algos and the related calculation rules through the Flex-Algo Definition (FAD) sub-TLV.

These Flex-Algos can also be associated with prefix SIDs during prefix SID configuration. The IGP then advertises the Flex-Algos and prefix SIDs through the Prefix-SID Sub-TLV.

The Flex-Algo value occupies 8 bits in the FAD Sub-TLV and Prefix-SID Sub-TLV. Values 128 to 255 are reserved for users to customize the Flex-Algos represented by these values.

To ensure that the forwarding path calculated by an IGP is loop free, the same FAD must be used in an IGP domain.

SR-MPLS Flex-Algo Advertisement by IS-IS

IS-IS advertises its FAD through the IS-IS FAD Sub-TLV.

IS-IS FAD Sub-TLV

IS-IS uses the IS-IS Router Capability TLV-242 to carry the IS-IS FAD Sub-TLV and advertises the sub-TLV to neighbors. Figure 1-2534 shows the format of the IS-IS FAD Sub-TLV.

Figure 1-2534 Format of the IS-IS FAD Sub-TLV

Table 1-1041 describes the fields in the IS-IS FAD Sub-TLV.

Table 1-1041 Fields in the IS-IS FAD Sub-TLV

Field Name

Length

Description

Type

8 bits

Type of the sub-TLV.

Length

8 bits

Total length of the sub-TLV (excluding the Type and Length fields).

Flex-Algo

8 bits

Flex-Algo ID, which is an integer ranging from 128 to 255.

Metric-Type

8 bits

Metric type used during calculation, which can be IGP metric, minimum one-way link delay, or TE metric.

Calc-Type

8 bits

Calculation type. Currently, only SPF is supported, so this field does not need to be set.

Priority

8 bits

Priority of the sub-TLV.

Sub-TLVs

Variable length

Optional sub-TLVs, which can define some constraints.

A Flex-Algo is defined by users and generally represented by a 3-tuple: Metric-Type, Calc-Type, and Constraints. Metric-Type and Calc-Type have been described in Table 1-1041. Constraints include the following:
  • Exclude Admin Group: A link is excluded from path calculation if any bit in the link's administrative group matches an affinity bit referenced by this constraint.
  • Include-Any Admin Group: A link can be used in path calculation as long as at least one bit in the link's administrative group matches an affinity bit referenced by this constraint.
  • Include-All Admin Group: A link can be used in path calculation only if all affinity bits referenced by this constraint are matched by bits in the link's administrative group.

These constraints are described by the Sub-TLVs field in the FAD Sub-TLV. The Exclude Admin Group Sub-TLV, Include-Any Admin Group Sub-TLV, and Include-All Admin Group Sub-TLV share the same format, as shown in Figure 1-2535.

Figure 1-2535 Format of the Exclude/Include-Any/Include-All Admin Group Sub-TLV
In the same IGP domain, different devices may define Flex-Algos that have the same value but different meanings. When FADs are different, the devices select a FAD according to the following rules:
  • The FAD with the highest priority is preferentially selected.
  • If the priorities of the FADs advertised by devices are the same, the FAD advertised by the device with the largest system ID is selected.
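These selection rules amount to choosing the advertisement with the largest (priority, system ID) pair. A minimal sketch, with a hypothetical tuple encoding of each advertisement:

```python
def select_fad(fads):
    """Pick the winning FAD advertisement for one Flex-Algo ID.

    fads: list of (priority, system_id) tuples, one per advertising device.
    Highest priority wins; on a priority tie, the largest system ID wins.
    System IDs are zero-padded strings, so string comparison works here.
    """
    return max(fads, key=lambda f: (f[0], f[1]))

# Three devices advertise a definition for the same Flex-Algo:
ads = [(100, "0000.0000.0001"), (200, "0000.0000.0002"), (200, "0000.0000.0003")]
print(select_fad(ads))  # (200, '0000.0000.0003'): priority tie, larger system ID
```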

SR-Algorithm Sub-TLV

A node may use different algorithms to calculate reachability to other nodes or to prefixes attached to these nodes. Examples of these algorithms are SPF and various SPF variant algorithms. The SR-Algorithm Sub-TLV allows the node to advertise the algorithms that the node is currently using. It is also inserted into the IS-IS Router Capability TLV-242 for transmission. The SR-Algorithm Sub-TLV can be propagated only within the same IS-IS level and must not be propagated across IS-IS levels.

Figure 1-2536 shows the format of the SR-Algorithm Sub-TLV.
Figure 1-2536 Format of the SR-Algorithm Sub-TLV carried in IS-IS packets
Table 1-1042 Fields in the SR-Algorithm Sub-TLV carried in IS-IS packets

Field Name

Length

Description

Type

8 bits

Unassigned. The recommended value is 2.

Length

8 bits

Packet length.

Algorithm

8 bits

Algorithm that is used.

SR-MPLS Flex-Algo Advertisement by OSPF

OSPF advertises its FAD through the OSPF FAD TLV.

OSPF FAD TLV

OSPF uses an RI LSA to carry the OSPF FAD TLV and advertises the TLV to neighbors. Figure 1-2537 shows the format of the OSPF FAD TLV.

Figure 1-2537 Format of the OSPF FAD TLV

Table 1-1043 describes the fields in the OSPF FAD TLV.

Table 1-1043 Fields in the OSPF FAD TLV

Field Name

Length

Description

Type

16 bits

Type of the TLV. The value is 16.

Length

16 bits

Total length of the TLV (excluding the Type and Length fields).

Flex-Algo

8 bits

Flex-Algo ID, which is an integer ranging from 128 to 255.

Metric-Type

8 bits

Metric type used during calculation, which can be IGP metric, link delay, or TE metric.

Calc-Type

8 bits

Calculation type. Currently, only SPF is supported, and this field does not need to be set.

Priority

8 bits

Priority of the TLV.

Sub-TLVs

Variable length

Optional sub-TLVs, which can define some constraints.

In OSPF, a Flex-Algo is also defined by users and represented by a 3-tuple: Metric-Type, Calc-Type, and Constraints. For details, see the part about IS-IS.

SR-Algorithm Sub-TLV

A node may use different algorithms to calculate reachability to other nodes or to prefixes attached to these nodes. Examples of these algorithms are SPF and various SPF variant algorithms. The SR-Algorithm Sub-TLV allows the node to advertise the algorithms that the node is currently using. It is also transmitted through an RI LSA. Figure 1-2538 shows the format of the SR-Algorithm Sub-TLV.

Figure 1-2538 Format of the SR-Algorithm Sub-TLV carried in OSPF packets

Table 1-1044 describes the fields in the SR-Algorithm Sub-TLV.

Table 1-1044 Fields in the SR-Algorithm Sub-TLV carried in OSPF packets

Field Name

Length

Description

Type

16 bits

Sub-TLV type value.

Length

16 bits

Packet length.

Algorithm

8 bits

Algorithm that is used.

Application-Specific Link Attributes (ASLA) Sub-TLV

The Application-Specific Link Attributes (ASLA) Sub-TLV describes link attributes of a specific application. OSPFv2 uses OSPFv2 Extended Link Opaque LSAs to carry the Application-Specific Link Attributes (ASLA) Sub-TLV of links and advertises it to neighbors. Figure 1-2539 shows the format of the Application-Specific Link Attributes (ASLA) Sub-TLV in an OSPFv2 Flex-Algo scenario.

Figure 1-2539 Format of the Application-Specific Link Attributes (ASLA) Sub-TLV

Table 1-1045 describes the fields in the Application-Specific Link Attributes (ASLA) Sub-TLV.

Table 1-1045 Fields in the Application-Specific Link Attributes (ASLA) Sub-TLV

Field Name

Length

Description

Type

16 bits

TLV type value

Length

16 bits

Packet length

SABM Length

4 bits

Standard application identifier bit mask length

UDABM Length

4 bits

User-defined application identifier bit mask length

Reserved

8 bits

Reserved field

Standard Application Identifier Bit Mask

Variable length

Standard application identifier bit mask

User-Defined Application Identifier Bit Mask

Variable length

User-defined application identifier bit mask

Link Attribute sub-sub-TLVs

Variable length

Specific link attributes, including:

  • Min/Max Unidirectional Link Delay
  • TE Metric
  • Extended Administrative Group
  • Administrative Group

SR-MPLS Flex-Algo Inter-area Route Advertisement

As shown in Figure 1-2540, a domain is divided into two areas for OSPF.

  • Nodes 5, 6, and 7 belong to area 1.
  • Nodes 3 and 4 are ABRs and belong to both area 0 and area 1.
  • Nodes 0, 1, and 2 belong to area 0.

On the nodes in area 0 and area 1, Flex-Algo 128 is configured, with metric-type set to the minimum link delay.

Figure 1-2540 SR-MPLS Flex-Algo inter-area route advertisement

OSPF SR-MPLS Flex-Algo inter-area route advertisement is the default behavior.

As ABRs, node 3 and node 4 advertise the Prefix-SID and metric of node 7's Flex-Algo to area 0 and flood them throughout area 0. The metric is carried in the OSPF Flexible Algorithm Prefix Metric (FAPM) Sub-TLV. Figure 1-2541 shows the format of the OSPF FAPM Sub-TLV.

Figure 1-2541 Format of the OSPF FAPM Sub-TLV

Table 1-1046 describes the fields in the OSPF FAPM Sub-TLV.

Table 1-1046 Fields in the OSPF FAPM Sub-TLV

Field Name

Length

Description

Type

16 bits

Sub-TLV type value

Length

16 bits

Packet length

Flex-Algo

8 bits

Flexible algorithm

Flags

8 bits

Flags field

Reserved

16 bits

Reserved field

Metric

32 bits

Metric value

After node 0 in area 0 collects the topology information, it uses the FAD to calculate the route to node 7's Flex-Algo Prefix-SID advertised into area 0. The metric calculation method is similar to that of the OSPFv2 default algorithm, except that the Flex-Algo metric from the ABR to the inter-area Prefix-SID is the metric carried in the FAPM Sub-TLV.
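Assuming the metric composes additively as in default OSPFv2 SPF, the end-to-end Flex-Algo metric is the intra-area metric to an ABR plus the metric that ABR advertises in its FAPM Sub-TLV, minimized over ABRs. A sketch with made-up values:

```python
def inter_area_metric(abr_metrics, fapm_metrics):
    """End-to-end Flex-Algo metric via the best ABR (illustrative assumption).

    abr_metrics: intra-area Flex-Algo metric from the calculating node to
    each ABR. fapm_metrics: metric each ABR advertises in its FAPM Sub-TLV
    for the remote prefix. All values here are made up for the demo.
    """
    return min(abr_metrics[abr] + fapm_metrics[abr] for abr in abr_metrics)

# Node 0 reaches node 7's prefix through ABR node 3 or ABR node 4:
print(inter_area_metric({"node3": 10, "node4": 20}, {"node3": 30, "node4": 15}))  # 35
```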

SR-MPLS Flex-Algo Route Import

As shown in Figure 1-2542, a domain is divided into two areas for OSPF process 1.

  • Nodes 0, 1, and 2 belong to area 1 of OSPF process 1. Nodes 3 and 4 belong to area 0 of OSPF process 1. Nodes 1 and 2 are ABRs.
  • Nodes 3, 4, 5, 6, and 7 belong to area 0 of OSPF process 2. Nodes 3 and 4 are ASBRs.
Figure 1-2542 SR-MPLS Flex-Algo route import

An OSPF process can import Flex-Algo Prefix-SID routes from another OSPF process or from an IS-IS process.

As an ASBR, node 3 advertises the Prefix-SID and metric of node 7's Flex-Algo to OSPF process 1 and floods them throughout the AS.

As an ABR, node 1 advertises the ABR-to-ASBR Flex-Algo metric to area 1 of OSPF process 1 and floods it throughout area 1.

The specific bearing relationship is as follows:

  • OSPFv2 Extended Inter-Area ASBR LSA
    • OSPFv2 Extended Inter-Area ASBR TLV
      • OSPF Flexible Algorithm ASBR Metric Sub-TLV

After node 0 in area 1 of OSPF process 1 collects the topology information, it uses the FAD to calculate the route to node 7's Flex-Algo Prefix-SID. The metric calculation method is similar to that of the OSPFv2 default algorithm, except that the Flex-Algo metric from the ABR to the ASBR is the metric carried in the OSPF Flexible Algorithm ASBR Metric Sub-TLV, and the Flex-Algo metric from the ASBR to the remote Prefix-SID is the metric carried in the FAPM Sub-TLV.

OSPFv2 Extended Inter-Area ASBR LSA

Figure 1-2543 shows the format of the OSPFv2 Extended Inter-Area ASBR LSA.

Figure 1-2543 Format of the OSPFv2 Extended Inter-Area ASBR LSA
Table 1-1047 Fields in the OSPFv2 Extended Inter-Area ASBR LSA

Field Name

Length

Description

Opaque Type

8 bits

Opaque type. The type value of the OSPFv2 Extended Inter-Area ASBR LSA is 11.

Opaque ID

24 bits

Opaque ID.

TLVs

Variable length

TLVs that can be carried.

All OSPF LSAs have the same header. For details about other fields, see OSPF LSA Format.

OSPFv2 Extended Inter-Area ASBR TLV

Figure 1-2544 shows the format of the OSPFv2 Extended Inter-Area ASBR TLV.

Figure 1-2544 Format of the OSPFv2 Extended Inter-Area ASBR TLV

Table 1-1048 describes the fields in the OSPFv2 Extended Inter-Area ASBR TLV.

Table 1-1048 Fields in the OSPFv2 Extended Inter-Area ASBR TLV

Field Name

Length

Description

Type

16 bits

TLV type value

Length

16 bits

Packet length

ASBR Router ID

32 bits

Router ID of the ASBR

Sub-TLVs

Variable length

Sub-TLVs that can be carried

OSPF Flexible Algorithm ASBR Metric Sub-TLV

Figure 1-2545 shows the format of the OSPF Flexible Algorithm ASBR Metric Sub-TLV.

Figure 1-2545 Format of the OSPF Flexible Algorithm ASBR Metric Sub-TLV

Table 1-1049 describes the fields in the OSPF Flexible Algorithm ASBR Metric Sub-TLV.

Table 1-1049 Fields in the OSPF Flexible Algorithm ASBR Metric Sub-TLV

Field Name

Length

Description

Type

16 bits

Sub-TLV type value

Length

16 bits

Packet length

Flex-Algo

8 bits

Flexible algorithm

Reserved

24 bits

Reserved field

Metric

32 bits

Metric value

SR-MPLS Flex-Algo Implementation

Figure 1-2546 is used as an example to describe how SR-MPLS Flex-Algo is implemented.

A device using Flex-Algos for path calculation must advertise its Flex-Algos to other devices.

  • PE1 and PE2 use both Flex-Algos 128 and 129, which are advertised to all the other nodes on the network through an IGP's SR-Algorithm sub-TLVs.
  • P1 to P4 use Flex-Algo 128, which is also advertised through the IGP.
  • P5 to P8 use Flex-Algo 129, which is also advertised through the IGP.

Besides Flex-Algos 128 and 129, all devices support the most basic IGP cost-based algorithm, which has value 0. This algorithm is not advertised through the SR-Algorithm Sub-TLV.

Figure 1-2546 SR-MPLS Flex-Algo implementation

After a prefix SID is associated with a Flex-Algo on PE2, other devices on the network calculate a route to the prefix SID based on the Flex-Algo. By repeating this configuration with different prefix SIDs and Flex-Algos, different prefix SID routes representing paths calculated using different Flex-Algos can be generated, meeting diverse service requirements.

For example, in Figure 1-2546, Flex-Algo 128 is defined as follows: calculate a path based on the IGP cost and link affinity, with the constraint being Exclude Admin Group (excluding the links between P2 and P3 as well as between P2 and P4). Figure 1-2547 shows the possible path obtained on PE1 with this Flex-Algo.

Figure 1-2547 PE1 calculating a path to PE2 based on Flex-Algo 128

BGP-LS Extensions for SR-MPLS Flex-Algo

In scenarios where a controller dynamically orchestrates an SR-MPLS TE Policy based on Flex-Algo, forwarders need to report information such as node attributes, link attributes, and Prefix-SIDs to the controller through BGP-LS. BGP-LS information is carried in Node NLRI/Link NLRI information and advertised to the controller.

BGP-LS is extended to implement the preceding functions, as described in Table 1-1050.

Table 1-1050 BGP-LS extensions for SR-MPLS Flex-Algo

Attribute Category

TLV

Sub-TLV

Node attributes

FAD

Flex-Algo Exclude Any Affinity

Flex-Algo Include Any Affinity

Flex-Algo Include All Affinity

Flex-Algo Definition Flags

SR-Algorithm TLV

None

Link attributes

Application Specific Link Attributes TLV

TE Metric

Min/Max Unidirectional Link Delay

Extended Administrative Group (color)

Administrative Group (color)

Unidirectional Link Loss TLV

Link SID TLV

None

Prefix attributes

Prefix SID TLV

None

Flexible Algorithm Prefix Metric TLV

None

FAD TLV

The FAD TLV is carried in the Node NLRI. Figure 1-2548 shows the format of the FAD TLV.

Figure 1-2548 FAD TLV format

Table 1-1051 describes the fields in the FAD TLV.

Table 1-1051 Fields in the FAD TLV

Field

Length

Description

Type

16 bits

Type of the TLV.

Length

16 bits

Total length of the TLV (excluding the Type and Length fields).

Flex-Algo

8 bits

Flex-Algo ID, which is an integer ranging from 128 to 255.

Metric-Type

8 bits

Metric type used during calculation.

This field can be set to an IGP metric, minimum unidirectional link delay, or TE metric value.

Calc-Type

8 bits

Calculation type. Currently, only SPF is supported, and this field does not need to be set.

Priority

8 bits

Priority of the TLV.

Sub-TLVs

Variable

Optional sub-TLVs, which can define some constraints.

The sub-TLVs of the FAD TLV include Flex-Algo Exclude Any Affinity, Flex-Algo Include Any Affinity, Flex-Algo Include All Affinity, and Flex-Algo Definition Flags. Figure 1-2549 shows the sub-TLV format.

Figure 1-2549 Affinity sub-TLV format

Figure 1-2550 shows the format of the Flex-Algo Definition Flags sub-TLV.

Figure 1-2550 Format of the Flex-Algo Definition Flags sub-TLV
SR-Algorithm TLV
The SR-Algorithm TLV is used to advertise the Flex-Algos supported by a node. Figure 1-2551 shows the format of the SR-Algorithm TLV.
Figure 1-2551 SR-Algorithm TLV format

Table 1-1052 describes the fields in the SR-Algorithm TLV.

Table 1-1052 Fields in the SR-Algorithm TLV

Field

Length

Description

Type

16 bits

Type of the TLV.

Length

16 bits

Total length of the TLV (excluding the Type and Length fields).

Algorithm

8 bits

Algorithm that is used.

Application Specific Link Attributes TLV

Figure 1-2552 shows the format of the Application Specific Link Attributes TLV.

Figure 1-2552 Format of the Application Specific Link Attributes TLV

Table 1-1053 describes the fields in the Application Specific Link Attributes TLV.

Table 1-1053 Fields in the Application Specific Link Attributes TLV

Field

Length

Description

Type

16 bits

Type of the TLV.

Length

16 bits

Total length of the TLV (excluding the Type and Length fields).

SABML

8 bits

Standard application identifier bit mask length, in octets. The value must be 0, 4, or 8. If the Standard Application Identifier Bit Mask field does not exist, the SABML field must be set to 0.

UDABML

8 bits

User-defined application identifier bit mask length, in octets. The value must be 0, 4, or 8. If the User-Defined Application Identifier Bit Mask field does not exist, the UDABML field must be set to 0.

Standard Application Identifier Bit Mask

Variable

Standard application identifier bit mask, where each bit represents a standard application.

User-Defined Application Identifier Bit Mask

Variable

User-defined application identifier bit mask, where each bit represents a user-defined application.

Link Attribute Sub-TLVs

Variable

Sub-TLVs contained in the TLV.

The Application Specific Link Attributes TLV carries sub-TLVs such as TE Metric, Min/Max Unidirectional Link Delay, Extended Administrative Group (color), Administrative Group (color), and Unidirectional Link Loss.
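Following the field layout in Table 1-1053, a parser for the fixed part of the TLV body might look like the hypothetical sketch below (the helper name and error handling are assumptions, not part of any product):

```python
def parse_asla_header(data: bytes):
    """Parse an ASLA TLV body per Table 1-1053 (illustrative sketch).

    Layout after the 16-bit Type and Length fields: 8-bit SABML, 8-bit
    UDABML, then the two bit masks. Each mask length must be 0, 4, or 8
    octets; a mask length of 0 means the corresponding mask is absent.
    """
    sabml, udabml = data[0], data[1]
    if sabml not in (0, 4, 8) or udabml not in (0, 4, 8):
        raise ValueError("bit mask lengths must be 0, 4, or 8 octets")
    sabm = data[2:2 + sabml]                 # standard application bit mask
    udabm = data[2 + sabml:2 + sabml + udabml]  # user-defined bit mask
    rest = data[2 + sabml + udabml:]         # link attribute sub-TLVs
    return sabm, udabm, rest

# SABML=4, UDABML=0, followed by a 4-octet mask and sub-TLV bytes:
sabm, udabm, rest = parse_asla_header(bytes([4, 0]) + b"\x01\x02\x03\x04" + b"\xaa\xbb")
print(sabm.hex(), rest.hex())  # 01020304 aabb
```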

Figure 1-2553 shows the format of the TE Metric sub-TLV.

Figure 1-2553 Format of the TE Metric sub-TLV

Table 1-1054 describes the fields in the TE Metric sub-TLV.

Table 1-1054 Fields in the TE Metric sub-TLV

Field

Length

Description

Type

16 bits

Type of the sub-TLV.

Length

16 bits

Total length of the sub-TLV (excluding the Type and Length fields).

TE Default Link Metric

32 bits

Default TE link metric.

Figure 1-2554 shows the format of the Min/Max Unidirectional Link Delay sub-TLV.

Figure 1-2554 Format of the Min/Max Unidirectional Link Delay sub-TLV

Table 1-1055 describes the fields in the Min/Max Unidirectional Link Delay sub-TLV.

Table 1-1055 Fields in the Min/Max Unidirectional Link Delay sub-TLV

Field

Length

Description

Type

16 bits

Type of the sub-TLV.

Length

16 bits

Total length of the sub-TLV (excluding the Type and Length fields).

A

1 bit

Anomalous (A) bit. It is set when the delay value exceeds 16777215.

Min Delay

24 bits

Minimum delay.

Max Delay

24 bits

Maximum delay.

Figure 1-2555 shows the format of the Extended Administrative Group (color) sub-TLV.

Figure 1-2555 Format of the Extended Administrative Group (color) sub-TLV

Table 1-1056 describes the fields in the Extended Administrative Group (color) sub-TLV.

Table 1-1056 Fields in the Extended Administrative Group (color) sub-TLV

Field

Length

Description

Type

16 bits

Type of the sub-TLV.

Length

16 bits

Total length of the sub-TLV (excluding the Type and Length fields).

Extended Administrative Groups

Variable

Extended administrative groups.

Figure 1-2556 shows the format of the Administrative Group (color) sub-TLV.

Figure 1-2556 Format of the Administrative Group (color) sub-TLV

Table 1-1057 describes the fields in the Administrative Group (color) sub-TLV.

Table 1-1057 Fields in the Administrative Group (color) sub-TLV

Field

Length

Description

Type

16 bits

Type of the sub-TLV.

Length

16 bits

Total length of the sub-TLV (excluding the Type and Length fields).

Administrative Group

32 bits

Administrative group.

Figure 1-2557 shows the format of the Unidirectional Link Loss sub-TLV.

Figure 1-2557 Format of the Unidirectional Link Loss sub-TLV

Table 1-1058 describes the fields in the Unidirectional Link Loss sub-TLV.

Table 1-1058 Fields in the Unidirectional Link Loss sub-TLV

Field

Length

Description

Type

16 bits

Type of the sub-TLV.

Length

16 bits

Total length of the sub-TLV (excluding the Type and Length fields).

A

1 bit

Anomalous (A) bit. It is set when the packet loss rate exceeds 16777215.

Link Loss

24 bits

Packet loss rate of a link.

Conflict Handling of Flex-Algo-Associated Prefix SIDs

Prefix SIDs are manually set and therefore may conflict on different devices. Prefix SID conflicts can occur in either the prefix or SID. A prefix conflict indicates that the same prefix with the same Flex-Algo is associated with different SIDs, whereas a SID conflict indicates that the same SID is associated with different prefixes.

If prefix SIDs conflict, handle prefix conflicts before SID conflicts by matching the following rules to preferentially select a route:

  1. The route with the largest prefix mask is preferred.
  2. The route with the smallest prefix is preferred.
  3. The route with the smallest SID is preferred.
  4. The route with the smallest algorithm value is preferred.

For example, there are prefix SID conflicts in the following seven routes, expressed in the format of prefix/mask-length SID Flex-Algo:

  • a. 1.1.1.1/32 1 128
  • b. 1.1.1.1/32 2 128
  • c. 2.2.2.2/32 3 0
  • d. 3.3.3.3/32 1 0
  • e. 1.1.1.1/32 4 129
  • f. 1.1.1.1/32 1 0
  • g. 1.1.1.1/32 5 128

The process for handling the prefix SID conflicts is as follows:

  1. Prefix conflicts are handled first. Routes a, b, and g encounter a prefix conflict because they share the same prefix and algorithm but contain different SIDs. Route a has the smallest SID, and therefore is preferred. After routes b and g are excluded, the following routes are left:
    • a. 1.1.1.1/32 1 128
    • c. 2.2.2.2/32 3 0
    • d. 3.3.3.3/32 1 0
    • e. 1.1.1.1/32 4 129
    • f. 1.1.1.1/32 1 0
  2. SID conflicts are then handled. Routes a, d, and f encounter a SID conflict. Routes a and f have a smaller prefix than route d, and therefore a and f are preferred to d. As f has a smaller algorithm value than a, f is preferred. After the SID conflict is removed, the following three routes are left:
    • c. 2.2.2.2/32 3 0
    • e. 1.1.1.1/32 4 129
    • f. 1.1.1.1/32 1 0
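The four preference rules and the worked example above can be sketched in Python as follows (the helper names are illustrative, not part of any product):

```python
from ipaddress import ip_address

def resolve_conflicts(routes):
    """Resolve prefix SID conflicts per the rules above (illustrative sketch).

    routes: list of (prefix, mask_len, sid, flex_algo) tuples.
    """
    # Step 1: prefix conflicts - same prefix/mask/algo but different SIDs.
    # The smallest SID wins.
    best = {}
    for prefix, mask, sid, algo in routes:
        key = (prefix, mask, algo)
        if key not in best or sid < best[key][2]:
            best[key] = (prefix, mask, sid, algo)
    # Step 2: SID conflicts - same SID associated with different prefixes.
    # Prefer the largest mask, then the smallest prefix, then the smallest algo.
    winners = {}
    for r in best.values():
        prefix, mask, sid, algo = r
        rank = (-mask, int(ip_address(prefix)), algo)
        if sid not in winners or rank < winners[sid][0]:
            winners[sid] = (rank, r)
    return sorted(r for _, r in winners.values())

routes = [
    ("1.1.1.1", 32, 1, 128),  # a
    ("1.1.1.1", 32, 2, 128),  # b
    ("2.2.2.2", 32, 3, 0),    # c
    ("3.3.3.3", 32, 1, 0),    # d
    ("1.1.1.1", 32, 4, 129),  # e
    ("1.1.1.1", 32, 1, 0),    # f
    ("1.1.1.1", 32, 5, 128),  # g
]
print(resolve_conflicts(routes))
# Routes c, e, and f survive, matching the worked example.
```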

Service Traffic Steering into an SR-MPLS BE Path Based on Flex-Algo

Figure 1-2558 shows how service traffic is steered into an SR-MPLS BE path based on Flex-Algo. The specific process is as follows:

  1. Configure PE1 and PE2 to use both Flex-Algos 128 and 129; configure P1 to P4 to use Flex-Algo 128, and P5 to P8 to use Flex-Algo 129.
  2. On PE2, configure the prefix SID with index 200 to use Flex-Algo 128. The prefix SID with index 100 uses the default algorithm 0, that is, the IGP cost-based SPF algorithm.
  3. Configure BGP on PE2 to advertise a VPNv4 route with prefix 10.1.1.0/24 and next hop 10.1.1.1. The extended community attribute color 100 is added to the route based on an export route-policy.
  4. Configure the mapping between the color and Flex-Algo on PE1. IGP uses algorithm 0 to calculate a common SR-MPLS BE path for the prefix SID with index 100 and uses Flex-Algo 128 to calculate a Flex-Algo-based SR-MPLS BE path for the prefix SID with index 200.
  5. Configure a tunnel policy on PE1 and apply the tunnel policy to the VPN instance.

    After the configuration is complete, the VPN route is recursed based on the tunnel policy. Specifically, if the tunnel policy specifies a preferential use of Flex-Algo-based SR-MPLS BE paths, the VPN route is recursed to the target Flex-Algo-based SR-MPLS BE path based on the next hop address and color attribute of the route; if the tunnel policy specifies a preferential use of common SR-MPLS tunnels, the VPN route is recursed to the target common SR-MPLS BE path based on the next hop address of the route.

Figure 1-2558 Service traffic steering into an SR-MPLS BE path based on Flex-Algo (IS-IS is used as an example)

SR-MPLS TE

Segment Routing MPLS traffic engineering (SR-MPLS TE) is a TE tunneling technology that uses SR as its control protocol. The controller calculates a path for an SR-MPLS TE tunnel and delivers the computed label stack to the tunnel's ingress on a forwarder. The ingress uses the label stack to generate an LSP in the SR-MPLS TE tunnel. The label stack therefore controls the path along which packets are transmitted on the network.

SR-MPLS TE Advantages

SR-MPLS TE tunnels can meet the requirements of rapidly developing software-defined networking (SDN), which Resource Reservation Protocol-TE (RSVP-TE) tunnels cannot. Table 1-1059 compares SR-MPLS TE with RSVP-TE.

Table 1-1059 Comparison between SR-MPLS TE and RSVP-TE tunnels

Item

SR-MPLS TE

RSVP-TE

Label allocation

Labels are allocated and propagated using IGP extensions. Each link is assigned only a single label, and all LSPs share the label, which reduces resource consumption and maintenance workload of label forwarding tables.

MPLS allocates and distributes labels. Each LSP is assigned a label, which consumes a large number of label resources and results in a heavy workload of maintaining label forwarding tables.

Control plane

IGP extensions are used for signaling control, without requiring any dedicated MPLS control protocol. This reduces the number of required protocols.

RSVP-TE needs to be used as the MPLS control protocol, complicating the control plane.

Scalability

Because transit nodes are unaware of tunnels and use packets to carry tunnel information, they only need to maintain forwarding entries instead of tunnel state information, enhancing scalability.

Tunnel state information and forwarding entries need to be maintained, resulting in poor scalability.

Path adjustment and control

A service path can be controlled through label operations performed only on the ingress. Configurations do not need to be delivered to each node, which improves programmability.

When a node on the path fails, the controller recalculates the path and updates the label stack on the ingress to complete the path adjustment.

Whether the adjustment is a normal service adjustment or a passive adjustment in a fault scenario, configurations must be delivered to each node.

Related Concepts

Label Stack

A label stack is an ordered set of labels used to identify a complete LSP. Each adjacency label in the label stack identifies a specific adjacency, and the entire label stack identifies all adjacencies along the LSP. During packet forwarding, a node searches for the corresponding adjacency according to each adjacency label in the label stack. The node then removes the label before forwarding the packet. After all the adjacency labels in the label stack are removed, the packet traverses the entire LSP and reaches the destination node of the involved SR-MPLS TE tunnel.

Stitching Label and Stitching Node

If a label stack exceeds the maximum depth supported by forwarders, the label stack is unable to carry the adjacency labels of the entire LSP. In this case, the controller needs to allocate multiple label stacks to the forwarders and a special label to an appropriate node to stitch these label stacks, thereby implementing segment-by-segment forwarding. The special label is called a stitching label, and the appropriate node is called a stitching node.

The controller allocates a stitching label to the stitching node and places the label at the bottom of the label stack. After a packet is forwarded to the stitching node, according to the association between the stitching label and label stack, the node replaces the stitching label with a new label stack to further guide forwarding.
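The stitching behavior can be illustrated with a small simulation: when the top label of the stack is a stitching label, the node swaps it for the next label stack segment instead of popping it. All label values below are made up for the demo:

```python
def forward(stack, stitch_table):
    """Simulate label-stack forwarding with stitching (illustrative sketch).

    stack: list of labels, top of stack first. stitch_table maps each
    stitching label to the label stack segment that replaces it.
    Returns the sequence of adjacencies traversed.
    """
    path = []
    while stack:
        label = stack[0]
        if label in stitch_table:
            # Stitching node: replace the stitching label with a new stack.
            stack = list(stitch_table[label])
            continue
        path.append(label)   # adjacency identified by the top label
        stack = stack[1:]    # pop the label before forwarding
    return path

# The first stack ends with stitching label 100, which expands to a
# second stack on the stitching node:
print(forward([9001, 9002, 100], {100: [9003, 9004]}))  # [9001, 9002, 9003, 9004]
```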

Topology Collection and Label Allocation

Network Topology Collection

Network topology information is mainly collected through BGP-LS. Specifically, after a BGP-LS peer relationship is established between a controller and a forwarder, the forwarder collects network topology information through an IGP and then reports the information to the controller through BGP-LS.

Label Allocation

A forwarder allocates labels through an IGP and then reports them to a controller through BGP-LS. SR-MPLS TE mainly uses adjacency labels (adjacency segments), which are allocated by the node at the local end of each adjacency and are locally valid and unidirectional. SR-MPLS TE can also use node labels, which are manually configured and globally valid. Both adjacency and node labels can be propagated through an IGP. Use the network shown in Figure 1-2559 as an example. Adjacency label 9003 identifies the PE1-to-P3 adjacency and is allocated by PE1. Adjacency label 9004 identifies the P3-to-PE1 adjacency and is allocated by P3.

Figure 1-2559 IGP-based label allocation

IGP SR is enabled on PE1, P1, P2, P3, P4, and PE2, and an IGP neighbor relationship is established between each pair of directly connected nodes. For SR-capable IGP instances, SR adjacency labels are allocated to all IGP-enabled outbound interfaces and flooded to the entire network through the IGP SR extension. Taking P3 on the network shown in Figure 1-2559 as an example, the process of label allocation through an IGP is as follows:

  1. P3 allocates a local dynamic label to an adjacency through an IGP. For example, P3 allocates the adjacency label 9002 to the P3-to-P4 adjacency.
  2. P3 propagates the adjacency label to the entire network through the IGP.
  3. P3 generates a label forwarding entry based on the adjacency label.
  4. Other nodes on the network learn the P3-propagated adjacency label through the IGP, but do not generate label forwarding entries.

PE1, P1, P2, P3, and P4 all allocate and propagate adjacency labels and generate label forwarding entries in the same way as P3 does. After establishing a BGP-LS peer relationship with the controller, the nodes report collected topology information (including SR labels) to the controller through BGP-LS.

SR-MPLS TE Tunnel Attributes

SR-MPLS TE tunnels can be established based on configured link resources. SR-MPLS TE uses the following items to define link resources:
  • Link status information: IGP-collected information, such as interface IP addresses, link types, and link costs.

  • Bandwidth information: includes the maximum physical link bandwidth, maximum reservable bandwidth, and available bandwidth corresponding to each priority.

  • TE metric: link's TE metric value, which is the same as the IGP metric by default.

  • Shared risk link group (SRLG): a group of links that share a common physical resource, such as an optical fiber. Links in an SRLG are at the same risk of faults. If one link fails, the other links in the SRLG may also fail.
  • TE affinity attributes: include the tunnel affinity attribute and the interface administrative group attribute (AdminGroup). Each attribute is a 32-bit value, usually specified as an 8-digit hexadecimal number. The interface administrative group attribute is advertised by the IGP (IS-IS/OSPF) to other devices in the IGP domain as a TE link attribute. Before establishing an LSP, MPLS TE checks whether the affinity attribute meets the requirements; specifically, before path computation, CSPF on the ingress checks whether the link administrative group attribute matches the tunnel affinity attribute.

    Hexadecimal calculation is complex, and tunnels established in this way are difficult to maintain and query. To address this issue, the NetEngine 8100 X, NetEngine 8000 X, and NetEngine 8000E X allow you to assign names (such as colors) to the 32 bits in the affinity attribute. Naming affinity bits makes it easier to check whether the tunnel affinity attribute matches the link administrative group attribute, facilitating network planning and deployment.

An SR-MPLS TE tunnel can be established through controller-based path computation or CSPF path computation. CSPF is derived from the SPF algorithm and includes constraints. During path computation, the controller or tunnel ingress considers the preceding link resource information.

Link Administrative Group
An affinity name template can be configured to manage the mapping between affinity bits and names. On an MPLS network, you are advised to configure the same template for all nodes, because inconsistent configuration may cause a service deployment failure. As shown in Figure 1-2560, the affinity bits are named using colors. For example, bit 1 is named "red", bit 4 is "blue", and bit 6 is "brown." You can name each of the 32 affinity bits differently.
Figure 1-2560 Affinity naming example

Bits in a link administrative group must also be configured with the same names as the affinity bits.

After naming affinity bits, you can determine which links a CR-LSP can include or exclude on the ingress. Rules for selecting links for path calculation are as follows:
  • IncludeAny: CSPF includes a link in path calculation if at least one bit in the link administrative group has the same name as an affinity bit.
  • ExcludeAny: CSPF excludes a link from path calculation if any bit in the link administrative group has the same name as an affinity bit.
  • IncludeAll: CSPF includes a link in path calculation only if every affinity bit has a same-named bit in the link administrative group.
Affinity

An affinity is a 32-bit vector that describes the links to be used by a TE tunnel. It is configured and implemented on the tunnel ingress, and used together with a link administrative group attribute to manage link selection.

After a tunnel is assigned an affinity, a device compares the affinity with the administrative group attribute during link selection. Based on the comparison result, the device determines whether to select a link with specified attributes. The link selection criteria are as follows:
  • The result of performing an AND operation between the IncludeAny affinity and the link administrative group attribute is not 0.

  • The result of performing an AND operation between the ExcludeAny affinity and the link administrative group attribute is 0.

IncludeAny = the affinity attribute value ANDed with the mask value; ExcludeAny = (NOT the affinity attribute value) ANDed with the mask value; the administrative group value to be compared = the administrative group value ANDed with the mask value.

The following rules apply:
  • If some bits in the mask are 1s, at least one of the administrative group bits whose corresponding affinity bits are 1 must also be 1, and the administrative group bits whose corresponding affinity bits are 0 must not be 1.

  • If some bits in a mask are 0s, the corresponding bits in an administrative group attribute are not compared with the affinity bits.

    Figure 1-2561 uses a 16-digit attribute value as example to describe how the affinity works.

    Figure 1-2561 Attribute value

    The mask of the affinity determines the link attributes to be checked by the device. In this example, the bits with the mask of 1 are bits 11, 13, 14, and 16, indicating that these bits need to be checked. The value of bit 11 in both the affinity and the administrative group attribute of the link is 0 (not 1). In addition, the values of bits 13 and 16 in both the affinity and the administrative group attribute of the link are 1. Therefore, the link matches the affinity of the tunnel and can be selected for the tunnel.

Comparison rules vary with vendors. Understand each vendor's specific rules before deploying devices from different vendors.

A network administrator can use the link administrative group and tunnel affinities to control the paths over which MPLS TE tunnels are established.
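The AND-based link selection criteria above can be sketched as a bitwise check. This is a minimal illustration only: the 4-bit values are hypothetical (real devices use 32-bit vectors), and the sketch assumes a nonzero IncludeAny value.

```python
def link_matches_affinity(affinity: int, mask: int, admin_group: int) -> bool:
    """Return True if a link's administrative group satisfies the affinity."""
    include_any = affinity & mask        # masked affinity bits that must appear
    exclude_any = (~affinity) & mask     # masked bits that must NOT appear
    group = admin_group & mask           # bits outside the mask are ignored
    return (include_any & group) != 0 and (exclude_any & group) == 0

# Hypothetical 4-bit example: the mask checks all four bits; the affinity
# requires bit 0 or bit 2 to be set and bits 1 and 3 to be clear.
affinity, mask = 0b0101, 0b1111
print(link_matches_affinity(affinity, mask, 0b0001))  # True: included bit set
print(link_matches_affinity(affinity, mask, 0b0011))  # False: excluded bit 1 set
print(link_matches_affinity(affinity, mask, 0b0000))  # False: no included bit set
```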

SR-MPLS TE Tunnel Creation

SR-MPLS TE Tunnel

Segment Routing MPLS traffic engineering (SR-MPLS TE) tunnels are created using the SR protocol based on TE constraints.

Figure 1-2562 SR-MPLS TE tunnel

On the network shown in Figure 1-2562, two LSPs are deployed. The primary LSP traverses the PE1->P1->P2->PE2 path, and the backup LSP traverses the PE1->P3->P4->PE2 path. The two LSPs correspond to SR-MPLS TE tunnels with the same ID. Each LSP originates from the ingress, passes through transit nodes, and is terminated at the egress.

SR-MPLS TE tunnel creation involves tunnel configuration and establishment. Before SR-MPLS TE tunnels are created, IS-IS/OSPF neighbor relationships must be established between forwarders to implement network layer connectivity, allocate labels, and collect network topology information. In addition, the forwarders need to send label and network topology information to a controller for path computation. If no controller is available, constrained shortest path first (CSPF) can be enabled on the ingress of the SR-MPLS TE tunnel to allow forwarders to compute paths using CSPF.

SR-MPLS TE Tunnel Configuration

SR-MPLS TE tunnel attributes must be configured before tunnel establishment. An SR-MPLS TE tunnel can be configured on a controller or a forwarder.

  • Tunnel configuration on a controller

    After an SR-MPLS TE tunnel is configured on the controller, the controller delivers tunnel attributes to a forwarder through NETCONF (as shown in Figure 1-2563). The forwarder then delegates the tunnel to the controller through PCEP for management.

  • Tunnel configuration on a forwarder

    After an SR-MPLS TE tunnel is configured on a forwarder, the forwarder delegates the tunnel to the controller for management.

SR-MPLS TE Tunnel Establishment

If a service (for example, VPN) needs to be bound to an SR-MPLS TE tunnel, the tunnel can be established through the following process, as shown in Figure 1-2563.

Figure 1-2563 Networking for SR-MPLS TE tunnel establishment using configurations that a controller delivers to a forwarder through NETCONF
The tunnel establishment process is as follows:
  1. The controller uses SR-MPLS TE tunnel constraints and Path Computation Element (PCE) to compute a path that is similar to a common TE path. Based on the topology and adjacency labels, the controller combines the adjacency labels of the entire path into a label stack (that is, the path computation result).

    If a path requires more labels than the maximum stack depth supported by forwarders, a single label stack cannot carry all the adjacency labels of the path. In this case, the controller needs to allocate multiple label stacks to carry the labels.

    On the network shown in Figure 1-2563, the controller computes the PE1->P3->P1->P2->P4->PE2 path for an SR-MPLS TE tunnel. The path involves two label stacks <1003, 1006, 100> and <1005, 1009, 1010>, where label 100 is a stitching label and the others are all adjacency labels.

  2. The controller delivers tunnel configurations and label stacks to forwarders through NETCONF and PCEP, respectively.

    In Figure 1-2563, the controller delivers label stacks as follows:
    1. The controller delivers the label stack <1005, 1009, 1010> to the stitching node P1, allocates the stitching label 100, and uses it as the bottom label in the label stack to be delivered to the ingress PE1.
    2. The controller delivers the label stack <1003, 1006, 100> to the ingress PE1.
  3. The forwarders establish an SR-MPLS TE tunnel with a specific LSP based on the tunnel configurations and label stacks delivered by the controller.
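The stack-splitting step the controller performs can be sketched as follows. This is an illustrative algorithm only, assuming one stitching label per split point; the label values are taken from the example above.

```python
def split_label_stacks(adj_labels, max_depth, stitch_labels):
    """Sketch: split a path's adjacency labels into stacks of at most
    max_depth labels. Every stack except the last reserves its bottom
    slot for a stitching label pointing at the next stack."""
    stacks, i = [], 0
    while adj_labels:
        if len(adj_labels) <= max_depth:     # remainder fits in one stack
            stacks.append(adj_labels)
            break
        head, adj_labels = adj_labels[:max_depth - 1], adj_labels[max_depth - 1:]
        head.append(stitch_labels[i])        # bottom label stitches to next stack
        i += 1
        stacks.append(head)
    return stacks

# Adjacency labels of the path PE1->P3->P1->P2->P4->PE2, depth limit of 3:
print(split_label_stacks([1003, 1006, 1005, 1009, 1010], 3, [100]))
# → [[1003, 1006, 100], [1005, 1009, 1010]]
```

The first stack is delivered to the ingress and the second to the stitching node, matching the example above.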

Because an SR-MPLS TE tunnel does not support MTU negotiation, the MTUs configured on the nodes along the tunnel must be the same. For a manually configured SR-MPLS TE tunnel, you can run the mtu command to set the tunnel MTU; if you do not, the default value of 1500 bytes is used. The smallest value among the tunnel MTU, the outbound interface MTU, and the outbound interface MPLS MTU takes effect.

SR-MPLS TE Data Forwarding

Forwarders perform label operations on packets according to the label stacks corresponding to a specific SR-MPLS TE tunnel's LSP. At each hop, a forwarder searches for the outbound interface based on the top label, guiding the packet to the destination address of the tunnel.

SR-MPLS TE Data Forwarding (Based on Adjacency Labels)
Figure 1-2564 provides an example of SR-MPLS TE data forwarding based on manually specified adjacency labels.
Figure 1-2564 SR-MPLS TE data forwarding based on adjacency labels
In Figure 1-2564, the SR-MPLS TE path computed by the controller is A->B->C->D->E->F. It corresponds to two label stacks <1003, 1006, 100> and <1005, 1009, 1010>, which are delivered to the ingress A and stitching node C, respectively. Label 100 functions as a stitching label and is associated with the label stack <1005, 1009, 1010>. The other labels are all adjacency labels. In this case, a data packet can be forwarded over the SR-MPLS TE path through the following process:
  1. The ingress A adds the label stack <1003, 1006, 100> to the packet, searches for an adjacency matching the top label 1003, finds that the corresponding outbound interface is on the A->B adjacency, and removes the label 1003. Then, the node forwards the packet carrying the label stack <1006, 100> to downstream node B over the A->B adjacency.

  2. After receiving the packet, transit node B searches for an adjacency matching the top label 1006, finds that the corresponding outbound interface is on the B->C adjacency, and removes the label 1006. Then, the node forwards the packet carrying the label stack <100> to downstream node C over the B->C adjacency.
  3. After the stitching node C receives the packet, it searches stitching label entries, determines that the top label 100 is a stitching label, and swaps the stitching label 100 with the associated label stack <1005, 1009, 1010>. After that, the node searches for an adjacency matching the new top label 1005, finds that the corresponding outbound interface is on the C->D adjacency, and removes the label 1005. Then, the node forwards the packet carrying the label stack <1009, 1010> to downstream node D over the C->D adjacency. For details about stitching nodes and stitching labels, see SR-MPLS TE.
  4. After nodes D and E receive the packet, they process it in the same way as transit node B. Node E removes the last label 1010 and forwards the packet to the egress F.
  5. As the packet received by the egress F does not contain any label, the egress F searches the routing table for further packet forwarding.

According to the preceding process, manually specifying adjacency labels enables devices to forward a packet hop by hop along the explicit path specified in the label stack of the packet. As such, this forwarding method is also called strict path-based SR-MPLS TE.
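The hop-by-hop walk above can be simulated with a small sketch. The adjacency and stitching tables here are hypothetical stand-ins for the label forwarding tables on the devices:

```python
# Adjacency label -> (node, next hop); stitching label -> associated stack.
ADJ = {1003: ("A", "B"), 1006: ("B", "C"), 1005: ("C", "D"),
       1009: ("D", "E"), 1010: ("E", "F")}
STITCH = {100: [1005, 1009, 1010]}

def forward(stack):
    hops = []
    while stack:
        top, stack = stack[0], stack[1:]
        if top in STITCH:                  # swap stitching label for its stack
            stack = STITCH[top] + stack
            continue
        hops.append(ADJ[top])              # pop top label, follow the adjacency
    return hops

print(forward([1003, 1006, 100]))
# → [('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'E'), ('E', 'F')]
```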

SR-MPLS TE Data Forwarding (Based on Node and Adjacency Labels)

If strict path-based SR-MPLS TE is used in a scenario where equal-cost paths exist, load balancing cannot be implemented. To address this issue, node labels are introduced to SR-MPLS TE paths.

When manually specifying a mixed label stack with both node and adjacency labels, you can specify inter-node labels. After the controller delivers a mixed label stack to the ingress forwarder through PCEP or NETCONF, the forwarder searches for the outbound interface based on the label stack, removes the top label, and forwards the packet to the next hop. The packet is forwarded hop by hop until it reaches the destination address of the involved tunnel.
Figure 1-2565 SR-MPLS TE data forwarding based on node and adjacency labels
On the network shown in Figure 1-2565, a mixed label stack <1003, 1006, 1005, 101> is manually specified for a packet to enter the specified tunnel from node A. In the label stack, labels 1003, 1006, and 1005 are adjacency labels, and label 101 is a node label.
  1. According to the top adjacency label 1003, node A finds that the corresponding outbound interface is on the A->B adjacency. Then, the node removes the label 1003 and forwards the packet to the next-hop node B.

  2. Similar to node A, node B searches for the corresponding outbound interface according to the top label 1006, removes the label, and then forwards the packet to the next-hop node C.
  3. Similar to node A, node C searches for the corresponding outbound interface according to the top label 1005, removes the label, and then forwards the packet to the next-hop node D.
  4. Node D processes the top node label 101, which indicates that load balancing needs to be performed for the traffic to node F. In this case, the traffic is hashed to corresponding links based on IP 5-tuple information.
  5. Nodes E and G process the packet according to the node label 101, which would normally be swapped with the next-hop node label. Because the two nodes are penultimate hops, they instead remove the label and forward the packet to node F, completing E2E traffic forwarding.

According to the preceding process, manually specifying both node and adjacency labels enables devices to forward traffic either in shortest-path mode or in load-balancing mode. The forwarding paths are not fixed, and therefore, this forwarding method is also called loose path-based SR-MPLS TE.
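The per-flow hashing performed at node D can be sketched as follows. The hash function and link names are illustrative assumptions; real devices use their own hardware hash algorithms.

```python
import hashlib

LINKS = ["D->E->F", "D->G->F"]               # hypothetical equal-cost paths to F

def pick_link(src_ip, dst_ip, proto, sport, dport):
    """Hash the IP 5-tuple so every packet of a flow uses the same link."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    digest = hashlib.sha256(key).digest()
    return LINKS[digest[0] % len(LINKS)]

# The same flow always hashes to the same link:
a = pick_link("10.0.0.1", "10.0.9.9", "tcp", 40000, 443)
b = pick_link("10.0.0.1", "10.0.9.9", "tcp", 40000, 443)
print(a == b)  # → True
```

Keeping a flow on one link preserves packet ordering while different flows spread across the equal-cost links.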

SR-MPLS TE Tunnel Reliability

In addition to TI-LFA FRR, the hot standby technology is also used to improve SR-MPLS TE tunnel reliability.

SR-MPLS TE Hot-Standby Protection

In hot standby mode, a hot-standby LSP is created immediately after the primary LSP is created and remains in the hot-standby state, providing end-to-end protection for traffic on the entire LSP.

In Figure 1-2566, hot-standby protection is configured on the ingress node A, so that a hot-standby LSP is created immediately after the primary LSP is created on node A. In this way, two LSPs exist in the same SR-MPLS TE tunnel. When the ingress detects that the primary LSP fails, traffic is switched to the hot-standby LSP; when the primary LSP recovers, traffic is switched back to it. During this process, the SR-MPLS TE tunnel is always up.

Figure 1-2566 SR-MPLS TE hot-standby protection

BFD for SR-MPLS TE

An SR-MPLS TE LSP is established as soon as a label stack is delivered, and it does not go down at the protocol level unless the label stack is withdrawn. Therefore, bidirectional forwarding detection (BFD) needs to be deployed to detect SR-MPLS TE LSP faults, which, if detected, trigger a primary/backup SR-MPLS TE LSP switchover. BFD for SR-MPLS TE is an E2E detection mechanism that rapidly detects SR-MPLS TE LSP/tunnel faults.

BFD for SR-MPLS TE

BFD for SR-MPLS TE provides the following functions:

BFD for SR-MPLS TE LSP

BFD for SR-MPLS TE LSP enables traffic to be quickly switched to a backup SR-MPLS TE LSP if the primary SR-MPLS TE LSP fails. This function supports both static and dynamic BFD sessions.
  • Static BFD session: established by manually specifying local and remote discriminators. The local discriminator of the local node must be equal to the remote discriminator of the remote node, and vice versa. After such a session is established, the intervals at which BFD packets are received and sent can be modified.
  • Dynamic BFD session: The local and remote discriminators do not need to be manually specified. After an SR-MPLS TE tunnel goes up, a BFD session is triggered. The devices on both ends of the BFD session negotiate the local discriminator, remote discriminator, and intervals at which BFD packets are received and sent.
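For a static session, the cross-matching rule for discriminators can be sketched as follows (the discriminator values are hypothetical):

```python
def discriminators_match(ingress, egress):
    """Each end's local discriminator must equal the peer's remote one."""
    return (ingress["local"] == egress["remote"] and
            egress["local"] == ingress["remote"])

print(discriminators_match({"local": 11, "remote": 22},
                           {"local": 22, "remote": 11}))  # → True
print(discriminators_match({"local": 11, "remote": 22},
                           {"local": 22, "remote": 33}))  # → False
```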

A BFD session is bound to an SR-MPLS TE LSP, meaning that the session is established between the ingress and egress. In this session, the ingress sends a BFD packet to the egress through the LSP. After receiving the packet, the egress replies, enabling the ingress to quickly detect the link status of the LSP.

If a link fault is detected, BFD notifies the forwarding plane of the fault. The forwarding plane then searches for a backup LSP and switches traffic to it.

BFD for SR-MPLS TE Tunnel

BFD for SR-MPLS TE tunnel works with BFD for SR-MPLS TE LSP to achieve SR-MPLS TE tunnel status detection.
  • BFD for SR-MPLS TE LSP determines whether a primary/backup LSP switchover needs to be performed, whereas BFD for SR-MPLS TE tunnel checks the actual tunnel status.
    • If BFD for SR-MPLS TE tunnel is not configured, a device cannot detect the actual tunnel status because the tunnel status is up by default.

    • If BFD for SR-MPLS TE tunnel is configured but BFD is administratively down, the tunnel interface status is unknown because BFD is not working in this case.

    • If BFD for SR-MPLS TE tunnel is configured and BFD is not administratively down, the tunnel interface status is the same as the BFD status.

  • The interface status of an SR-MPLS TE tunnel is consistent with the status of BFD for SR-MPLS TE tunnel. As BFD negotiation needs to be performed, it takes time for BFD to go up. In general, if a new label stack is delivered for a tunnel in the down state, it takes approximately 10 to 20 seconds for BFD to go up. This consequently slows down hard convergence when other protection functions are not configured for the tunnel.
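The three status rules above can be condensed into a small sketch (the encoding of the states is an assumption for illustration):

```python
def tunnel_interface_status(bfd_configured, bfd_admin_down, bfd_status):
    """Derive the SR-MPLS TE tunnel interface status from the BFD state."""
    if not bfd_configured:
        return "up"          # default status; actual tunnel state is unknown
    if bfd_admin_down:
        return "unknown"     # BFD is not working
    return bfd_status        # interface status follows the BFD status

print(tunnel_interface_status(False, False, "down"))  # → up
print(tunnel_interface_status(True, True, "up"))      # → unknown
print(tunnel_interface_status(True, False, "down"))   # → down
```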

BFD for SR-MPLS TE (Unaffiliated Mode)

If the egress does not support BFD for SR-MPLS TE, BFD sessions cannot be created. To address this issue, configure unaffiliated BFD (U-BFD) for SR-MPLS TE.

On the ingress, enable BFD and specify the unaffiliated mode to establish a BFD session. After the BFD session is established, the ingress sends BFD packets to the egress through transit nodes along an SR-MPLS TE tunnel. After receiving the BFD packets, the forwarding plane of the egress removes MPLS labels, searches for a route based on the destination IP address of the BFD packets, and then directly loops back the BFD packets to the ingress for processing, thereby achieving detection in unaffiliated mode.

BFD for SR-MPLS TE packets are forwarded along a path specified using label stacks in the forward direction, whereas their return packets are forwarded along the shortest IP path in the reverse direction. This means that the return path of the packets is not fixed. Furthermore, if the route of BFD return packets is unreachable, the involved tunnel may go down. Therefore, you need to ensure that the route of BFD return packets is reachable. Additionally, BFD does not need to be configured for the backup SR-MPLS TE LSP. This is because the involved route re-recurses to the primary LSP after the primary LSP recovers and the tunnel goes up again.

Application of BFD for SR-MPLS TE

The following uses the network shown in Figure 1-2567 as an example to describe how BFD for SR-MPLS TE LSP is implemented in a scenario where VPN traffic recurses to an SR-MPLS TE LSP.

Figure 1-2567 BFD for SR-MPLS TE

Devices A, CE1, CE2, and E are deployed on the same VPN. CE2 advertises a route to E. PE2 assigns a VPN label to E and advertises the route to PE1. PE1 installs the route to E and the VPN label. The SR-MPLS TE LSP from PE1 to PE2 is PE1 -> P4 -> P3 -> PE2, which is the primary LSP with the label stack <9004, 9003, 9005>.

After PE1 receives a packet sent from A to E, it finds the outbound interface of the packet based on label 9004 and adds label 9003, label 9005, and the innermost VPN label to the packet. If BFD is configured and a link or P device on the primary LSP fails, PE1 can rapidly detect the fault and switch traffic to the backup LSP PE1 -> P1 -> P2 -> PE2.
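The encapsulation PE1 performs can be sketched as follows. The TE labels come from the example above; the VPN label value 50000 is a hypothetical stand-in for the label PE2 assigned.

```python
def encapsulate(payload, vpn_label, te_stack):
    """Push the VPN label first (innermost), then the TE labels on top.
    List index 0 represents the top of the label stack."""
    return {"labels": list(te_stack) + [vpn_label], "payload": payload}

pkt = encapsulate("IP packet A->E", vpn_label=50000, te_stack=[9004, 9003, 9005])
print(pkt["labels"])  # → [9004, 9003, 9005, 50000]
# PE1 consumes the top label 9004 to pick the outbound interface, so the
# packet sent toward P4 carries [9003, 9005, 50000].
```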

SR-MPLS TE Load Balancing

SR-MPLS TE Load Balancing

SR-MPLS TE guides data packet forwarding based on the label stack information encapsulated by the ingress into data packets. By default, each adjacency label identifies a specific adjacency, meaning that load balancing cannot be performed even if equal-cost links exist. To address this issue, SR-MPLS TE uses parallel adjacency labels to identify multiple equal-cost links.

In Figure 1-2568, there are three equal-cost links from nodes B to E. Adjacency SIDs with the same value, such as 1001 in Figure 1-2568, can be configured for the links. Such SIDs are called parallel adjacency labels, which are also used for path computation like common adjacency labels.

When receiving data packets carrying parallel adjacency labels, node B parses the labels and uses the hash algorithm to load-balance the traffic over the three equal-cost links, improving network resource utilization and avoiding network congestion.

Figure 1-2568 SR-MPLS TE parallel adjacency labels

Configuring parallel adjacency labels does not affect the allocation of common adjacency labels between IGP neighbors. After parallel adjacency labels are configured, the involved device advertises multiple adjacency labels for the same adjacency.

If BFD for SR-MPLS TE is enabled and SR-MPLS TE parallel adjacency labels are used, BFD packets are load-balanced like data traffic, but each BFD session's packets are hashed to a single link. If that link fails, BFD may report a link-down event even though the other links keep working properly, resulting in a false alarm.

DSCP-based IP Packet Steering into SR-MPLS TE Tunnels

Background

Differentiated services code point (DSCP)-based IP packet steering is a TE tunnel selection mode. Unlike the traditional TE load-balancing mode, this mode enables services to be forwarded over a specific tunnel based on the service priority, offering higher service quality to higher-priority services.

Existing networks face a challenge that they may fail to provide exclusive high-quality transmission resources for higher-priority services. This is because SR-MPLS TE tunnels are selected based on public network routes or VPN routes, which causes a device to select the same SR-MPLS TE tunnel for services with the same destination address or belonging to the same VPN instance but with different priorities.

A PE that supports DSCP-based IP packet steering can steer IP packets to corresponding tunnels based on the DSCP values in the packets. Compared with class-of-service-based tunnel selection (CBTS) which supports only eight priorities (BE, AF1, AF2, AF3, AF4, EF, CS6, and CS7) and requires service traffic to be matched against the eight priorities based on a configured multi-field classification policy, DSCP-based IP packet steering directly maps IP packets' DSCP values to SR-MPLS TE tunnels and supports more refined priority management (0 to 63), allowing SR-MPLS TE to be more flexibly deployed according to service requirements.

Implementation

The DSCP attribute can be configured for tunnels to which services recurse in order to carry services of one or more priorities. Services with specified priorities can only be transmitted over corresponding tunnels, rather than being load-balanced among all tunnels to which the services may recurse. The default DSCP attribute can also be configured for tunnels to carry services of non-specified priorities.

On the network shown in Figure 1-2569, multiple SR-MPLS TE tunnels are established between PE1 and PE2. Currently, there are high-priority video services, medium-priority voice services, and common Ethernet data services. The following operations are performed to use different SR-MPLS TE tunnels to carry these services:
  • DSCP values, for example, 15–20, 5–10, and default, are configured for SR-MPLS TE tunnels.
  • According to service characteristics (DSCP values in IP packets), PE1 maps the video services to SR-MPLS TE tunnel 1, voice services to SR-MPLS TE tunnel 2, and Ethernet data services to SR-MPLS TE tunnel 3 that is configured with the default DSCP value.

    Configuring the default attribute is not mandatory for tunnels without DSCP values. If the default attribute is not configured, services that do not match the tunnels with specific DSCP attribute values will be transmitted along a tunnel that is assigned no specific DSCP attribute value. If such a tunnel does not exist, these services will be transmitted along a tunnel that is assigned the smallest DSCP attribute value.
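The selection rules above can be sketched as follows. The tunnel names and DSCP ranges follow the example; the fallback order (default-attribute tunnel, else a tunnel without a DSCP attribute, else the tunnel with the smallest DSCP value) mirrors the text.

```python
TUNNELS = [
    {"name": "tunnel1", "dscp": range(15, 21)},   # video services
    {"name": "tunnel2", "dscp": range(5, 11)},    # voice services
    {"name": "tunnel3", "dscp": "default"},       # everything else
]

def select_tunnel(dscp, tunnels):
    for t in tunnels:                             # exact DSCP-range match first
        if isinstance(t["dscp"], range) and dscp in t["dscp"]:
            return t["name"]
    for t in tunnels:                             # then the default-attribute tunnel
        if t["dscp"] == "default":
            return t["name"]
    no_dscp = [t for t in tunnels if t["dscp"] is None]
    if no_dscp:                                   # then a tunnel without any DSCP value
        return no_dscp[0]["name"]
    ranged = [t for t in tunnels if isinstance(t["dscp"], range)]
    return min(ranged, key=lambda t: t["dscp"].start)["name"]  # smallest DSCP

print(select_tunnel(18, TUNNELS))  # → tunnel1 (video)
print(select_tunnel(7, TUNNELS))   # → tunnel2 (voice)
print(select_tunnel(0, TUNNELS))   # → tunnel3 (default)
```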

Figure 1-2569 DSCP-based IP packet steering into SR-MPLS TE tunnels
Usage Scenario
  • When a PE functions as the ingress and the public network is configured with SR-MPLS TE or LDP over SR-MPLS TE (in the LDP over SR-MPLS TE scenario, the ingress of the LDP LSP must be the same as that of the SR-MPLS TE tunnel), IP (IPv4 and IPv6) and L3VPN services can be carried.
  • VLL, VPLS, and BGP LSP over SR-MPLS TE scenarios do not support DSCP-based IP packet steering into tunnels. P nodes do not support this function either.

Inter-AS E2E SR-MPLS TE

Binding SID

Similar to RSVP-TE tunnels, SR-MPLS TE tunnels can function as forwarding adjacencies. If an adjacency SID is allocated to an SR-MPLS TE tunnel functioning as a forwarding adjacency, the SID can identify the SR-MPLS TE tunnel. According to the SID, data traffic can be steered into the SR-MPLS TE tunnel, thereby implementing a TE policy. The adjacency SID of the SR-MPLS TE tunnel is called a binding SID. Traffic that uses the binding SID is bound to an SR-MPLS TE tunnel or a TE policy.

Binding SIDs are set on forwarders within an AS domain. Each binding SID represents an intra-AS SR-MPLS TE tunnel. In Figure 1-2570, a binding SID is set on the ingress within an AS domain.
  • Set binding SIDs to 6000 and 7000 on CSG1, representing label stacks {102, 203} and {110, 112, 213}, respectively.
  • Set a binding SID to 8000 on ASBR3 to represent a label stack {405, 506}.
  • Set a binding SID to 9000 on ASBR4 to represent a label stack {415, 516, 660}.

After binding SIDs are generated, the controller can calculate an inter-AS E2E SR-MPLS TE tunnel using the binding SIDs and BGP peer SIDs. A static explicit path can also be configured so that an inter-AS E2E SR-MPLS TE tunnel is established over the path. In Figure 1-2570, the label stacks of the primary and backup LSPs in an inter-AS E2E SR-MPLS TE tunnel are {6000, 3040, 8000} and {7000, 3140, 9000}, respectively. The complete label stacks are {102, 203, 3040, 405, 506} and {110, 112, 213, 3140, 415, 516, 660}.
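The role of the binding SIDs in these stacks can be illustrated by expanding them. In practice, each binding SID is expanded at the node that owns it (CSG1 or an ASBR), not at a single point; the flattening below only shows how the complete label stacks are obtained.

```python
# Binding SID -> intra-AS label stack it represents (values from the figure).
BINDING = {6000: [102, 203], 7000: [110, 112, 213],
           8000: [405, 506], 9000: [415, 516, 660]}

def expand(stack):
    out = []
    for label in stack:
        out.extend(BINDING.get(label, [label]))  # BGP peer SIDs pass through
    return out

print(expand([6000, 3040, 8000]))  # → [102, 203, 3040, 405, 506]
print(expand([7000, 3140, 9000]))  # → [110, 112, 213, 3140, 415, 516, 660]
```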

Figure 1-2570 Binding SID schematic diagram

A binding SID is associated with a local forwarding path by specifying a local label, and is used for NE forwarding and encapsulation. Using binding SIDs reduces the number of labels in a label stack on an NE, which helps build an inter-AS E2E SR-MPLS TE network.

E2E SR-MPLS TE Tunnel Creation

E2E SR-MPLS TE Tunnel Configuration

E2E SR-MPLS TE tunnel attributes must be configured before tunnel establishment. An E2E SR-MPLS TE tunnel can be configured on a controller or a forwarder.

  • Configure tunnels on the controller.

    After an E2E SR-MPLS TE tunnel is configured on the controller, the controller runs NETCONF to deliver tunnel attributes to a forwarder (as shown in Figure 1-2571). The forwarder runs PCEP to delegate the tunnel to the controller for management.

  • Configure tunnels on a forwarder.

    On the forwarder, you can specify a tunnel label stack based on an explicit path to manually establish an E2E SR-MPLS TE tunnel. Manual E2E SR-MPLS TE tunnel configuration is complex and cannot be automatically adjusted based on the network status. You are advised to configure a tunnel on the controller.

E2E SR-MPLS TE Tunnel Establishment
If a service (for example, VPN) needs to be bound to an SR-MPLS TE tunnel, a device establishes an E2E SR-MPLS TE tunnel based on the following process, as shown in Figure 1-2571. In the following example, the tunnel configuration on the controller is described.
Figure 1-2571 E2E SR-MPLS TE Tunnel Establishment
The process of creating a tunnel is as follows:
  1. Before creating an E2E SR-MPLS TE tunnel, the controller needs to create an SR-MPLS TE tunnel within each AS domain and specify a binding SID for each intra-AS tunnel. BGP EPE must be configured between AS domains to generate BGP peer SIDs. Each ASBR then reports its BGP EPE labels and network topology information through BGP-LS.
  2. The controller uses SR-MPLS TE tunnel constraints and Path Computation Element (PCE) to calculate paths that are similar to a TE path. Based on the topology and Adj-SIDs, the controller combines labels of the entire path into a label stack. The label stack is the calculation result.

    In Figure 1-2571, the controller calculates an SR-MPLS TE tunnel path CSG1->AGG1->ASBR1->ASBR3->P1->PE1. The label stack of the path is {6000, 3040, 8000}, where 6000 and 8000 are binding SID labels, and 3040 is a BGP peer SID.

  3. The controller runs NETCONF and PCEP to deliver tunnel configurations and the label stack to the forwarder.

    In Figure 1-2571, the process of delivering label stacks on the controller is as follows:
    1. The controller delivers the label stacks {102, 203} and {405, 506} within the AS domain to the ingress CSG1 and ASBR3, respectively.
    2. The controller delivers the E2E SR-MPLS TE tunnel label stack {6000, 3040, 8000} to CSG1 that is the ingress of an inter-AS E2E SR-MPLS TE tunnel.
  4. CSG1 establishes an inter-AS E2E SR-MPLS TE tunnel based on the tunnel configuration and label stack information delivered by the controller.

Data Forwarding on an E2E SR-MPLS TE Tunnel

A forwarder performs label operations on a packet based on the label stack mapped to the SR-MPLS TE LSP, searches for the outbound interface hop by hop based on the top label of the label stack, and thereby guides the packet to the tunnel destination address.

In Figure 1-2572, the controller calculates an SR-MPLS TE tunnel path CSG1->AGG1->ASBR1->ASBR3->P1->PE1. The label stack of the path is {6000, 3040, 8000}, where 6000 and 8000 are binding SID labels, and 3040 is a BGP peer SID.
Figure 1-2572 Data forwarding on an E2E SR-MPLS TE tunnel
The E2E SR-MPLS TE data packet forwarding process is as follows:
  1. The ingress CSG1 adds a label stack {6000, 3040, 8000} to a data packet and searches the My Local SID table based on label 6000 on the top of the label stack. 6000 is a binding SID label and mapped to a label stack {102, 203}. CSG1 searches for an outbound interface based on label 102, maps the label to the CSG1->AGG1 adjacency, and then removes label 102. The packet carries the label stack {203, 3040, 8000} and passes through the CSG1->AGG1 adjacency to the downstream AGG1.

  2. After receiving the packet, AGG1 matches an adjacency against label 203 on the top of the label stack, finds an outbound interface as the AGG1->ASBR1 adjacency, and removes label 203. The packet carries a label stack {3040, 8000} and passes through the AGG1->ASBR1 adjacency to the downstream ASBR1.
  3. After receiving the packet, ASBR1 matches an adjacency against label 3040 on the top of the label stack, finds an outbound interface (ASBR1->ASBR3 adjacency), and removes label 3040. The packet carries a label stack {8000} and passes through the ASBR1->ASBR3 adjacency to the downstream ASBR3.
  4. After receiving the packet, ASBR3 searches the My Local SID table based on label 8000 on the top of the label stack. 8000 is a binding SID label and mapped to a label stack {405, 506}. ASBR3 searches for an outbound interface based on label 405, maps the label to the ASBR3->P1 adjacency, and then removes label 405. The packet carries a label stack {506} and passes through the ASBR3->P1 adjacency to the downstream P1.
  5. After receiving the packet, P1 matches an adjacency against label 506 on the top of the label stack, finds an outbound interface as the P1->PE1 adjacency, and removes label 506. The packet without a label is forwarded to the destination PE1 through the P1->PE1 adjacency.

Reliability of E2E SR-MPLS TE Tunnels

E2E SR-MPLS TE Hot-Standby

Hot standby (HSB) is supported by E2E SR-MPLS TE tunnels. With HSB enabled, a device creates an HSB LSP immediately after creating the primary LSP. The HSB LSP remains in the hot standby state, protects the entire LSP, and provides an E2E traffic protection measure.

In Figure 1-2573, HSB is configured on the ingress CSG1. After CSG1 creates the primary LSP, it immediately creates an HSB LSP, so the SR-MPLS TE tunnel contains two LSPs. If the ingress detects a primary LSP fault, it switches traffic to the HSB LSP. After the primary LSP recovers, the ingress switches traffic back to the primary LSP. During this process, the SR-MPLS TE tunnel remains Up.

Figure 1-2573 E2E SR-MPLS TE HSB networking

In Figure 1-2573, the controller calculates a path CSG1->AGG1->ASBR1->ASBR3->P1->PE1 for the primary LSP of an E2E SR-MPLS TE tunnel. The path is mapped to a label stack {6000, 3040, 8000}, where 6000 and 8000 are binding SID labels, and 3040 is a BGP peer SID. The HSB LSP of the E2E SR-MPLS TE tunnel is established over the path CSG1->CSG2->AGG2->ASBR2->ASBR4->P2->PE2->PE1. This path is mapped to a label stack {7000, 3140, 9000}, where 7000 and 9000 are binding SID labels, and 3140 is a BGP peer SID.

E2E SR-MPLS TE Tunnel Protection
In Figure 1-2574, E2E SR-MPLS TE tunnel protection functions are as follows:
  1. Within an AS domain: If an E2E SR-MPLS TE LSP fails within an AS domain, intra-AS SR-MPLS TE tunnel protection functions, such as intra-AS SR-MPLS TE hot standby and SR-MPLS TE FRR, take precedence. For details, see SR-MPLS TE Tunnel Reliability.
  2. E2E LSP level: Within an E2E SR-MPLS TE tunnel, an HSB LSP protects the primary LSP to ensure that the E2E SR-MPLS TE tunnel status remains Up. The primary LSP is monitored using one-arm BFD for E2E SR-MPLS TE LSP that can rapidly detect faults.
  3. E2E tunnel level protection: If both the primary and HSB LSPs in an E2E SR-MPLS TE tunnel fail, one-arm BFD for E2E SR-MPLS TE tunnel quickly detects the faults and instructs the system to set the E2E SR-MPLS TE tunnel to Down. In this case, upper-layer applications, for example, VPN, can be switched to another E2E SR-MPLS TE tunnel using VPN FRR.
Figure 1-2574 E2E SR-MPLS TE tunnel protection networking

U-BFD for E2E SR-MPLS TE

E2E SR-MPLS TE does not use a protocol to establish tunnels. Once a label stack is delivered to an SR-MPLS TE node, the node establishes an SR-MPLS TE LSP. The LSP does not go down at the protocol level unless the label stack is withdrawn. Therefore, bidirectional forwarding detection (BFD) needs to be deployed to detect E2E SR-MPLS TE LSP failures, which, if detected, trigger a primary/backup SR-MPLS TE LSP switchover. BFD for E2E SR-MPLS TE is a mechanism that rapidly detects faults in the links of an SR-MPLS TE tunnel. It provides the following functions:
  • Unaffiliated BFD (U-BFD) for E2E SR-MPLS TE LSP: If BFD fails to go up through negotiation, the E2E SR-MPLS TE LSP consequently cannot go up. U-BFD for E2E SR-MPLS TE LSP enables traffic to be quickly switched to a hot-standby SR-MPLS TE LSP if the primary SR-MPLS TE LSP fails.
  • U-BFD for E2E SR-MPLS TE tunnel: BFD for E2E SR-MPLS TE tunnel works with BFD for E2E SR-MPLS TE LSP to achieve E2E SR-MPLS TE tunnel status detection.

    • BFD for E2E SR-MPLS TE LSP controls the primary/HSB LSP switchover and switchback status, whereas BFD for E2E SR-MPLS TE tunnel controls the effective status of a tunnel. If BFD for E2E SR-MPLS TE tunnel is not configured, the tunnel status remains Up by default, and the effective status cannot be determined.

    • The interface status of an E2E SR-MPLS TE tunnel remains consistent with the status of BFD for E2E SR-MPLS TE tunnel. The BFD session goes Up slowly because of BFD negotiation. In general, if a new label stack is delivered for a tunnel in the Down state, it takes approximately 10 to 20 seconds for BFD to go Up. This slows down hard convergence when no other protection functions are configured for the tunnel.

The network shown in Figure 1-2575 is used as an example to describe how U-BFD for E2E SR-MPLS TE is implemented.
  1. On the ingress, enable BFD and specify the unaffiliated mode to establish a BFD session.
  2. Establish a reverse E2E SR-MPLS TE tunnel on the egress and configure a binding SID for the tunnel.
  3. Bind the BFD session to the binding SID of the reverse E2E SR-MPLS TE tunnel on the ingress.
Figure 1-2575 U-BFD for E2E SR-MPLS TE (forwarding of BFD loopback packets over a tunnel)

After the U-BFD session is established, the ingress sends a U-BFD packet carrying the binding SID of the reverse tunnel. When the U-BFD packet arrives at the egress through transit nodes along the SR-MPLS TE tunnel, the forwarding plane of the egress removes MPLS labels from the packet, finds the associated reverse SR-MPLS TE tunnel based on the binding SID carried in the packet, and encapsulates an E2E SR-MPLS TE tunnel label stack into the return BFD packet. After the return BFD packet is looped back to the ingress through transit nodes along the SR-MPLS TE tunnel, the ingress processes the packet to implement unaffiliated loopback detection.

If the egress does not have a reverse E2E SR-MPLS TE tunnel, the egress searches for an IP route based on the destination address of the BFD packet to loop back the packet, as shown in Figure 1-2576.
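The loopback behavior on the egress can be sketched as follows. This is a minimal illustration under an assumed data model (the dictionary-based packet and the binding SID value are hypothetical, not the device implementation): the egress pops the MPLS labels, looks up the reverse tunnel by the binding SID carried in the U-BFD packet, and falls back to an IP route lookup if no reverse tunnel exists.

```python
# Hypothetical table mapping binding SIDs to reverse-tunnel label stacks.
REVERSE_TUNNELS = {330: [9004, 9003, 9005]}  # binding SID -> segment list

def loop_back_ubfd(packet):
    """Build the looped-back U-BFD packet on the egress (simplified sketch)."""
    packet["mpls_labels"] = []            # egress pops the MPLS labels
    bsid = packet.get("binding_sid")
    reply = {"payload": packet["payload"]}
    stack = REVERSE_TUNNELS.get(bsid)
    if stack is not None:
        # Encapsulate the reverse E2E SR-MPLS TE tunnel's label stack.
        reply["mpls_labels"] = list(stack)
    else:
        # No reverse tunnel: loop the packet back over an IP route
        # selected by the BFD packet's destination (initiator) address.
        reply["ip_route_lookup"] = packet["src_ip"]
    return reply
```

The two branches correspond to Figure 1-2575 (loopback over a tunnel) and Figure 1-2576 (loopback over an IP link), respectively.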

Figure 1-2576 U-BFD for E2E SR-MPLS TE (forwarding of BFD loopback packets over an IP link)

Cross-Multi-AS E2E SR-MPLS TE

Theoretically, binding SIDs and BGP peer SIDs can be used to establish an explicit path across multiple (three or more) AS domains. AS domains, however, are managed by different organizations. A path that crosses multiple AS domains therefore crosses the networks of multiple management organizations, which may hinder network deployment.

In Figure 1-2577, the network consists of three AS domains. If PE1 establishes an E2E SR-MPLS TE network over a multi-hop explicit path to PE3, the traffic path from AS y to AS z is determined in AS x. AS y and AS z, however, may belong to management organizations different from that of AS x. In this situation, the traffic path may not be accepted by AS y or AS z, and the great depth of the label stack decreases forwarding efficiency. In addition, AS y and AS z may be connected to a controller different from that of AS x, making it difficult to establish an E2E SR-MPLS TE network from PE1 to PE3.

To tackle the preceding problem, a restriction is implemented on a device: if the first hop of an explicit path is a binding SID, the explicit path supports a maximum of three hops. In this way, PE1 can establish an inter-AS E2E SR-MPLS TE explicit path at most to ASBR5 or ASBR6, not into AS z. Only the hierarchical mode can be used to establish an inter-AS E2E SR-MPLS TE tunnel from AS x to AS z.
Figure 1-2577 Cross-Multi-AS E2E SR-MPLS TE
The process of hierarchically establishing a cross-multi-AS E2E SR-MPLS TE tunnel is as follows:
  1. Layer 1: Establish an E2E SR-MPLS TE tunnel from AS y to AS z. Create an SR-MPLS TE tunnel within each of AS y and AS z. Set binding SIDs for the intra-AS tunnels, that is, binding SID3 and binding SID5, respectively. Configure BGP EPE between AS y and AS z to generate BGP peer SID4. The controller establishes an inter-AS E2E SR-MPLS TE Tunnel1 from AS y to AS z using the preceding SIDs. Set binding SID6 for this tunnel.
  2. Layer 2: Establish an E2E SR-MPLS TE tunnel from AS x to AS z. Create an SR-MPLS TE tunnel within AS x. Set a binding SID for the intra-AS tunnel, that is, binding SID1. Configure BGP EPE between AS x and AS y to generate BGP peer SID2. The controller establishes a unidirectional inter-AS E2E SR-MPLS TE Tunnel2 from AS x to AS z using binding SID1, peer SID2, and binding SID6.

An E2E SR-MPLS TE tunnel across three AS domains is established. If there are more than three AS domains, a new binding SID can be allocated to E2E SR-MPLS TE Tunnel2, and the SID participates in path computation. Repeat the preceding process to construct an E2E SR-MPLS TE tunnel that spans more AS domains.
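The hierarchical composition above can be sketched as follows. The SID values are illustrative assumptions (binding SID3/SID5 for the intra-AS tunnels, peer SID4 and peer SID2 for the BGP EPE links, binding SID6 for Tunnel1); the point is that each inter-AS segment list stays within the three-hop limit because a single binding SID stands in for a whole downstream tunnel.

```python
BINDING = {}  # binding SID -> the segment list it summarizes

def new_tunnel(bsid, segment_list):
    """Register a tunnel's segment list under its binding SID."""
    assert len(segment_list) <= 3, "explicit path limited to three hops"
    BINDING[bsid] = segment_list
    return bsid

# Layer 1: AS y -> AS z. Intra-AS tunnels have binding SID3 and SID5;
# BGP EPE between AS y and AS z yields peer SID4. Tunnel1 gets binding SID6.
t1 = new_tunnel(6, [3, 4, 5])

# Layer 2: AS x -> AS z. Intra-AS tunnel has binding SID1; BGP EPE between
# AS x and AS y yields peer SID2; Tunnel1 is referenced via binding SID6.
# Tunnel2 itself gets a new binding SID (7) for further recursion.
t2 = new_tunnel(7, [1, 2, t1])

def expand(sid):
    """Recursively expand binding SIDs into the full hop sequence."""
    if sid in BINDING:
        return [s for inner in BINDING[sid] for s in expand(inner)]
    return [sid]
```

Expanding Tunnel2's binding SID yields the full hop sequence [1, 2, 3, 4, 5], and allocating a new binding SID to Tunnel2 lets the process repeat across further AS domains.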

Traffic Steering

After an SR LSP (SR-MPLS BE) or SR-MPLS TE tunnel is established, service traffic needs to be steered into the SR LSP or SR-MPLS TE tunnel, which is usually achieved by using different methods, such as configuring static routes, tunnel policies, and auto routes. Common services involved include EVPN, L2VPN, L3VPN, and public network services.

Table 1-1060 Traffic steering modes supported

Traffic Steering Mode

SR LSP

SR-MPLS TE Tunnel

Static route

SR LSPs do not have tunnel interfaces. As such, you can configure a static route with a specified next hop, based on which the route recurses to an SR LSP.

SR-MPLS TE tunnels have tunnel interfaces. This allows traffic to be steered into an SR-MPLS TE tunnel through a static route.

Tunnel policy

The tunnel select-seq policy can be used, whereas the tunnel binding policy cannot be used.

Either the tunnel select-seq policy or the tunnel binding policy can be used.

Auto route

SR LSPs do not have tunnel interfaces. Therefore, traffic cannot be steered into an SR LSP through an auto route.

SR-MPLS TE tunnels have tunnel interfaces. Therefore, traffic can be steered into an SR-MPLS TE tunnel through an auto route.

Policy-based routing (PBR)

SR LSPs do not have tunnel interfaces. Therefore, traffic cannot be steered into an SR LSP through PBR.

SR-MPLS TE tunnels have tunnel interfaces. Therefore, traffic can be steered into an SR-MPLS TE tunnel through PBR.

Static Route

SR LSPs do not have tunnel interfaces. As such, you can configure a static route with a specified next hop, based on which the route recurses to an SR LSP.

The static routes of SR-MPLS TE tunnels work in the same way as common static routes. When configuring a static route, set the outbound interface of the route to an SR-MPLS TE tunnel interface so that traffic transmitted over the route can be steered into the SR-MPLS TE tunnel.

Tunnel Policy

By default, VPN traffic is forwarded over LDP LSPs, not SR LSPs or SR-MPLS TE tunnels. If forwarding VPN traffic over LDP LSPs does not meet VPN traffic requirements, a tunnel policy needs to be used to steer the VPN traffic into an SR LSP or SR-MPLS TE tunnel.

Currently, the supported tunnel policies are tunnel select-seq and tunnel binding. Select either of them as needed.

  • Tunnel select-seq: This policy can change the type of tunnel selected for VPN traffic. An SR LSP or SR-MPLS TE tunnel is selected as a public tunnel for VPN traffic based on the prioritized tunnel types. If no LDP LSPs are available, SR LSPs are selected by default.

  • Tunnel binding: This policy binds a specific destination address to an SR-MPLS TE tunnel for VPN traffic to guarantee QoS.
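The difference between the two policies can be sketched as follows. The tunnel table, type names, and destination addresses are hypothetical; the sketch only contrasts selection by prioritized tunnel type (select-seq) with pinning a destination to a specific SR-MPLS TE tunnel (tunnel binding).

```python
# Hypothetical tunnel table: all tunnels lead to the same destination.
TUNNELS = [
    {"name": "lsp1",  "type": "ldp",    "dest": "3.3.3.3", "up": True},
    {"name": "srbe1", "type": "sr-lsp", "dest": "3.3.3.3", "up": True},
    {"name": "srte1", "type": "sr-te",  "dest": "3.3.3.3", "up": True},
]

def select_seq(dest, type_order):
    """Pick the first available tunnel following the prioritized type order."""
    for t_type in type_order:                 # e.g. ["sr-te", "sr-lsp"]
        for tun in TUNNELS:
            if tun["type"] == t_type and tun["dest"] == dest and tun["up"]:
                return tun["name"]
    return None

# Tunnel binding policy: a destination is bound to one SR-MPLS TE tunnel.
BINDINGS = {"3.3.3.3": "srte1"}

def tunnel_binding(dest):
    return BINDINGS.get(dest)
```

With select-seq, changing the type order changes which tunnel carries the VPN traffic; with tunnel binding, the bound SR-MPLS TE tunnel is always used for the specified destination.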

Auto Route

In auto route mode, an SR-MPLS TE tunnel participates in IGP route computation as a logical link, and the tunnel interface is used as an outbound interface of the involved route. Currently, the forwarding adjacency mode is supported, enabling a node to advertise an SR-MPLS TE tunnel to its neighboring nodes so that the SR-MPLS TE tunnel will be used in global route calculation and can be used by both the local node and other nodes.

PBR

PBR allows nodes to select routes based on user-defined policies, which improves traffic security and balances traffic. On an SR network, PBR enables IP packets that meet filter criteria to be forwarded over specific LSPs.

Similar to IP unicast PBR, SR-MPLS TE PBR is also implemented by defining a set of matching rules and behaviors through if-match clauses and apply clauses (the outbound interface is set to the involved SR-MPLS TE tunnel interface), respectively. If packets do not match PBR rules, they are forwarded in IP forwarding mode; if they match PBR rules, they are forwarded over specific tunnels.
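The if-match/apply structure can be sketched as follows. The rule predicates and tunnel interface names are assumptions for illustration; the sketch only shows that matching packets are steered to an SR-MPLS TE tunnel interface while non-matching packets fall back to IP forwarding.

```python
# Hypothetical PBR rules: (if-match predicate, apply: outbound tunnel interface).
PBR_RULES = [
    (lambda pkt: pkt["dst_ip"].startswith("10.1."), "Tunnel1"),
    (lambda pkt: pkt["dscp"] == 46,                 "Tunnel2"),
]

def forward(pkt):
    """Apply PBR: first matching rule wins; otherwise ordinary IP forwarding."""
    for match, tunnel_if in PBR_RULES:
        if match(pkt):
            return ("tunnel", tunnel_if)   # steered into the SR-MPLS TE tunnel
    return ("ip", pkt["dst_ip"])           # forwarded in IP forwarding mode
```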

Public Network IP Route Recursion to an SR Tunnel

Public Network BGP Route Recursion to an SR Tunnel

When users access the Internet, if IP forwarding is implemented for packets, core devices between the users and the Internet are forced to learn a large number of Internet routes. This puts huge strain on the core devices and negatively impacts their performance. To address this issue, non-labeled public network BGP or static route recursion to an SR tunnel can be configured on the user access device. Packets are then forwarded over the SR tunnel when users access the Internet, preventing the problems caused by insufficient performance and heavy service load on core devices.

On the network shown in Figure 1-2578, public network BGP route recursion to an SR LSP can be achieved through the following process:
  • An IGP and SR are deployed on each PE and P in E2E mode, and an SR LSP is established.
  • A BGP peer relationship is established between the PEs for them to learn the routes of each other.
  • The BGP route recurses to the SR LSP on PE1.
Figure 1-2578 Public network BGP route recursion to an SR LSP
On the network shown in Figure 1-2579, public network BGP route recursion to an SR-MPLS TE tunnel can be achieved through the following process:
  • An IGP and SR are deployed on each PE and P in E2E mode, and an SR-MPLS TE tunnel is established.
  • A BGP peer relationship is established between the PEs for them to learn the routes of each other.
  • A tunnel policy is configured on the PEs so that the BGP route recurses to the SR-MPLS TE tunnel on PE1.
Figure 1-2579 Public network BGP route recursion to an SR-MPLS TE tunnel
Static Route Recursion to an SR Tunnel

The next hops of static routes may not be directly reachable. In this case, recursion is required so that such routes can be used for traffic forwarding. If static routes are allowed to recurse to SR tunnels, label-based forwarding can be performed.

On the network shown in Figure 1-2580, static route recursion to an SR LSP can be achieved through the following process:
  • An IGP and SR are deployed on each PE and P in E2E mode, and an SR LSP destined for PE2's loopback interface address is established on PE1.
  • A static route is configured on PE1, and PE2's loopback interface address is specified as the next hop of the route.
  • After receiving an IP packet, PE1 encapsulates a label into the packet and forwards the packet over the SR LSP.
Figure 1-2580 Static route recursion to an SR LSP
On the network shown in Figure 1-2581, static route recursion to an SR-MPLS TE tunnel can be achieved through the following process:
  • An IGP and SR are deployed on each PE and P in E2E mode, and an SR-MPLS TE tunnel destined for PE2's loopback interface address is established on PE1.
  • A static route is configured on PE1, and PE2's loopback interface address is specified as the next hop of the route.
  • A tunnel policy is configured on the PEs so that the static route recurses to the SR-MPLS TE tunnel. After receiving an IP packet in this case, PE1 encapsulates labels into the packet and forwards the packet over the SR-MPLS TE tunnel.
Figure 1-2581 Static route recursion to an SR-MPLS TE tunnel

L3VPN Route Recursion to an SR Tunnel

Basic VPN Route Recursion to an SR Tunnel

When users access the Internet, if IP forwarding is implemented for packets, core devices between the users and the Internet are forced to learn a large number of Internet routes. This puts huge strain on the core devices and negatively impacts their performance. To solve this problem, VPN route recursion to an SR tunnel can be configured so that the users can access the Internet through the tunnel.

The network shown in Figure 1-2582 consists of discontinuous L3VPNs that span a backbone network. An SR LSP is established between PEs to forward L3VPN packets. The PEs use BGP to learn VPN routes. The network is deployed as follows:
  • An IS-IS neighbor relationship is established between the PEs to implement route reachability.
  • A BGP peer relationship is established between the PEs for them to learn the VPN routes of each other.
  • An IS-IS SR tunnel is established between the PEs to assign public network labels and compute an LSP.
  • BGP is used to assign a VPN label, for example, label Z, to a VPN instance.
  • The VPN route recurses to the SR LSP.
  • After receiving an IP packet, PE1 encapsulates a VPN label and an SR public network label into the packet, and forwards the packet along the LSP.
Figure 1-2582 Basic VPN route recursion to an SR LSP
The network shown in Figure 1-2583 consists of discontinuous L3VPNs that span a backbone network. An SR-MPLS TE tunnel is established between PEs to forward L3VPN packets. The PEs use BGP to learn VPN routes. The network is deployed as follows:
  • An IS-IS neighbor relationship is established between the PEs to implement route reachability.
  • A BGP peer relationship is established between the PEs for them to learn the VPN routes of each other.
  • An IS-IS SR-MPLS TE tunnel is established between the PEs to assign public network labels and compute an LSP.
  • BGP is used to assign a VPN label, for example, label Z, to a VPN instance.
  • A tunnel policy is configured on the PEs to allow for VPN route recursion to SR-MPLS TE tunnels.
  • After receiving an IP packet, PE1 encapsulates a VPN label and an SR public network label into the packet, and forwards the packet along the LSP.
Figure 1-2583 Basic VPN route recursion to an SR-MPLS TE tunnel
HVPN Route Recursion to an SR Tunnel

On a growing network with increasing types of services, PEs encounter scalability problems, such as insufficient access or routing capabilities, which reduces network performance and scalability. In this situation, VPNs cannot be deployed on a large scale. As shown in Figure 1-2584, HVPN is a hierarchical VPN where multiple PEs function as different roles to provide functions that a single PE provides, thereby lowering the performance requirements for PEs.

Figure 1-2584 HVPN route recursion to an SR LSP
The network shown in Figure 1-2584 is deployed as follows:
  • BGP peer relationships are established between the UPE and SPE and between the SPE and NPE. An SR LSP is established.
  • A VPNv4 route recurses to the SR LSP on the UPE.
The process of forwarding HVPN packets from CE1 to CE2 is as follows:
  1. CE1 sends a VPN packet to the UPE.

  2. After receiving the packet, the UPE searches its VPN forwarding table for an LSP to forward the packet based on the destination address of the packet. Then, the UPE adds an inner VPN label L2 and an outer SR public network label Lv to the packet and sends the packet to the SPE over the corresponding LSP. The label stack is Lv/L2.

  3. After receiving the packet, the SPE replaces the outer SR public network label Lv with Lu and the inner label L2 with L3. Then, the SPE sends the packet to the NPE over the same LSP.

  4. After receiving the packet, the NPE removes the outer SR public network label Lu, searches for the corresponding VPN instance based on the inner label L3, and searches the VPN forwarding table of this VPN instance for the outbound interface based on the destination address of the packet. Then, the NPE sends the packet (pure IP packet) through the outbound interface to CE2.
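The label operations in the three forwarding steps above can be sketched as follows. The label names (Lv, L2, Lu, L3) follow the walkthrough, the dictionary packet model is an assumption, and the stack is listed outer label first.

```python
def upe_forward(ip_packet):
    # UPE: push inner VPN label L2, then outer SR public network label Lv.
    return {"labels": ["Lv", "L2"], "payload": ip_packet}

def spe_forward(packet):
    # SPE: swap outer Lv -> Lu and inner L2 -> L3, keep forwarding over the LSP.
    assert packet["labels"] == ["Lv", "L2"]
    return {"labels": ["Lu", "L3"], "payload": packet["payload"]}

def npe_forward(packet):
    # NPE: pop outer Lu, find the VPN instance by inner L3, then send the
    # pure IP packet out of the VPN outbound interface toward CE2.
    assert packet["labels"][0] == "Lu"
    vpn_label = packet["labels"][1]
    return {"vpn_instance": vpn_label, "payload": packet["payload"]}
```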

VPN FRR

As shown in Figure 1-2585, PE1 receives routes advertised by both PE3 and PE4 and preferentially selects the route advertised by PE3. The route advertised by PE4 is used as the sub-optimal route. PE1 adds both the optimal route and sub-optimal route to the forwarding table, with the optimal route being used to guide traffic forwarding and the sub-optimal route being used for backup.

Figure 1-2585 VPN FRR networking
Table 1-1061 Typical fault-triggered switching scenarios

Failure Point

Protection Switching

P1-to-P3 link failure

TI-LFA local protection takes effect, and traffic is switched to LSP2 shown in Figure 1-2585.

PE3 node failure

BFD for SR-MPLS TE or SBFD for SR-MPLS can detect the failure, and BFD/SBFD triggers VPN FRR switching to LSP3 shown in Figure 1-2585.

L2VPN Route Recursion to an SR Tunnel

Figure 1-2586 shows a typical VPLS network. On the network, users in different geographical regions communicate with each other through different PEs. From the perspective of users, a VPLS network is a Layer 2 switched network that allows them to communicate with each other in a way similar to communication over a LAN. After VPLS route recursion to an SR tunnel is configured, virtual connections are established between the sites of each VPN, and public SR tunnels are established. Layer 2 packets can then be forwarded over the SR tunnels.

VPLS Route Recursion to an SR LSP
As shown in Figure 1-2586, the process of VPLS route recursion to an SR LSP is as follows:
  • CE1 sends a packet with Layer 2 encapsulation to PE1.
  • PE1 establishes an E2E SR LSP with PE2.
  • A tunnel policy is configured on PE1 for SR LSP selection, and VSI forwarding entries are associated with SR forwarding entries.
  • After receiving the packet, PE1 searches for a VSI entry and selects a tunnel and PW based on the entry. Then, PE1 adds an outer LSP label and inner VC label to the packet based on the selected tunnel and PW, performs Layer 2 encapsulation, and forwards the packet to PE2.
  • PE2 receives the packet from PE1 and decapsulates it.
  • PE2 forwards the original packet to CE2.
Figure 1-2586 VPLS route recursion to an SR LSP

The processes of HVPLS route recursion to an SR LSP and VLL route recursion to an SR LSP are similar to the process of VPLS route recursion to an SR LSP and therefore are not described.

VPLS Route Recursion to an SR-MPLS TE Tunnel
As shown in Figure 1-2587, the process of VPLS route recursion to an SR-MPLS TE tunnel is as follows:
  • CE1 sends a packet with Layer 2 encapsulation to PE1.
  • PE1 establishes an E2E SR-MPLS TE tunnel with PE2.
  • A tunnel policy is configured on PE1 for SR-MPLS TE tunnel selection, and VSI forwarding entries are associated with SR forwarding entries.
  • After receiving the packet, PE1 searches for a VSI entry and selects a tunnel and PW based on the entry. Then, PE1 adds an outer SR-MPLS TE tunnel label and inner VC label to the packet based on the selected tunnel and PW, performs Layer 2 encapsulation, and forwards the packet to PE2.
  • PE2 receives the packet from PE1 and decapsulates it.
  • PE2 forwards the original packet to CE2.
Figure 1-2587 VPLS route recursion to an SR-MPLS TE tunnel

The processes of HVPLS route recursion to an SR-MPLS TE tunnel and VLL route recursion to an SR-MPLS TE tunnel are similar to the process of VPLS route recursion to an SR-MPLS TE tunnel and therefore are not described.

EVPN Route Recursion to an SR Tunnel

Ethernet Virtual Private Network (EVPN) is a VPN technology used for Layer 2 or Layer 3 interworking. Similar to BGP/MPLS IP VPN, EVPN extends BGP with new reachability information so that MAC address learning and advertisement between the Layer 2 networks of different sites are implemented on the control plane rather than the data plane. Compared with VPLS, EVPN tackles the load imbalance and high network resource consumption problems occurring on VPLS networks.

EVPN can be used for both L2VPN and L3VPN. This section uses EVPN L2VPN as an example.

EVPN Route Recursion to an SR LSP
As shown in Figure 1-2588, after a PE learns a MAC address from another site and successfully establishes an SR LSP with the site over the public network, the PE can transmit unicast packets to the site. The detailed transmission process is as follows:
  1. CE1 sends a unicast packet to PE1 in Layer 2 forwarding mode.
  2. After receiving the packet, PE1 encapsulates a VPN label carried in a MAC entry and a public SR LSP label in sequence and then sends the encapsulated packet to PE2.

    For an EVPN L2VPN, a VPN label usually refers to the label of an EVPN instance. If an EVPN instance consists of multiple broadcast domains, the VPN label can be the label of a specific EVPN broadcast domain. After receiving a packet carrying such a label, the remote PE identifies the broadcast domain based on the label and searches the MAC address table in the broadcast domain for packet forwarding.

  3. After receiving the unicast packet, PE2 decapsulates the packet, removes the VPN label, and searches the VPN MAC address table for a matching outbound interface.
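The lookup performed by the remote PE in steps 2 and 3 can be sketched as follows. The label value, broadcast domain name, MAC address, and interface name are hypothetical; the sketch shows only the two-stage lookup: VPN label to EVPN instance or broadcast domain, then destination MAC to outbound interface.

```python
# Hypothetical tables on the remote PE.
BD_BY_LABEL = {4001: "bd10"}            # VPN/BD label -> broadcast domain
MAC_TABLE = {"bd10": {"00:11:22:33:44:55": "GE0/1/0"}}

def remote_pe_forward(vpn_label, dst_mac):
    """Resolve the outbound interface from the VPN label and destination MAC."""
    bd = BD_BY_LABEL.get(vpn_label)
    if bd is None:
        return None                      # unknown label: packet is dropped
    # MAC hit yields the outbound interface; a miss would trigger flooding
    # within the broadcast domain (not modeled here).
    return MAC_TABLE[bd].get(dst_mac)
```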
Figure 1-2588 EVPN route recursion to an SR LSP for unicast traffic forwarding
EVPN Route Recursion to an SR-MPLS TE Tunnel
As shown in Figure 1-2589, after a PE learns a MAC address from another site and successfully establishes an SR-MPLS TE tunnel with the site over the public network, the PE can transmit unicast packets to the site. The detailed transmission process is as follows:
  1. CE1 sends a unicast packet to PE1 in Layer 2 forwarding mode.
  2. After receiving the packet, PE1 encapsulates a VPN label carried in a MAC entry and a public SR-MPLS TE tunnel label in sequence and then sends the encapsulated packet to PE2.
  3. After receiving the unicast packet, PE2 decapsulates the packet, removes the VPN label, and searches the VPN MAC address table for a matching outbound interface.
Figure 1-2589 EVPN route recursion to an SR-MPLS TE tunnel for unicast traffic forwarding

SBFD for SR-MPLS

Bidirectional forwarding detection (BFD) techniques are mature. However, when a large number of BFD sessions are configured to monitor links, the negotiation time of the existing BFD state machine is lengthened. In this situation, seamless bidirectional forwarding detection (SBFD) can be configured to monitor SR tunnels. SBFD uses a simplified state machine that shortens the negotiation time and improves network-wide flexibility.

SBFD Principles
Figure 1-2590 shows SBFD principles. Before link detection, an initiator and a reflector exchange SBFD control packets to notify each other of SBFD parameters, for example, discriminators. During link detection, the initiator proactively sends an SBFD Echo packet, and the reflector loops this packet back. The initiator then determines the local state based on the looped-back packet.
  • The initiator is responsible for detection and runs both an SBFD state machine and a detection mechanism. Because the state machine has only up and down states, the initiator can send packets carrying only the up or down state and receive packets carrying only the up or AdminDown state.

    The initiator starts by sending an SBFD packet carrying the down state to the reflector. The destination and source port numbers of the packet are 7784 and 4784, respectively; the destination IP address is 127.0.0.1; the source IP address is the locally configured LSR ID.

  • The reflector does not have any SBFD state machine or detection mechanism. For this reason, it does not proactively send SBFD Echo packets, but rather, it only reflects SBFD packets.

    After receiving an SBFD packet from the initiator, the reflector checks whether the SBFD discriminator carried in the packet matches the locally configured global SBFD discriminator. If they do not match, the packet is discarded. If they match and the reflector is in the working state, the reflector reflects back the packet. If they match but the reflector is not in the working state, the reflector sets the state to AdminDown in the packet.

    The destination and source port numbers in the looped-back packet are 4784 and 7784, respectively; the source IP address is the locally configured LSR ID; the destination IP address is the source IP address of the initiator.
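The reflector behavior described above can be sketched as follows. The packet is modeled as a dictionary with assumed field names (not the device implementation); the port numbers (7784/4784), the discriminator check, and the address swap follow the text.

```python
LOCAL_DISCRIMINATOR = 200   # reflector's globally configured SBFD discriminator
LOCAL_LSR_ID = "2.2.2.2"    # reflector's locally configured LSR ID

def reflect(packet, working=True):
    """Reflect an SBFD packet, or return None to model a discard."""
    # Discard the packet if its discriminator does not match the local one.
    if packet["your_discriminator"] != LOCAL_DISCRIMINATOR:
        return None
    # Working reflector loops the packet back with Up; otherwise AdminDown.
    state = "up" if working else "admin-down"
    return {
        "state": state,
        "your_discriminator": packet["my_discriminator"],
        "src_port": 7784, "dst_port": 4784,   # swapped relative to the request
        "src_ip": LOCAL_LSR_ID,               # reflector's LSR ID
        "dst_ip": packet["src_ip"],           # back to the initiator
    }

# Initiator's first packet: Down state, dst port 7784, src port 4784,
# destination IP 127.0.0.1, source IP set to the initiator's LSR ID.
request = {"state": "down", "my_discriminator": 100, "your_discriminator": 200,
           "src_port": 4784, "dst_port": 7784,
           "src_ip": "1.1.1.1", "dst_ip": "127.0.0.1"}
```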

Figure 1-2590 SBFD Principles

SBFD Return Packet Forwarding over a Tunnel

In an SBFD for SR-MPLS scenario, the packet from an SBFD initiator is forwarded to an SBFD reflector along an SR-MPLS LSP, and the return packet from the SBFD reflector is forwarded to the SBFD initiator along a multi-hop IP path or an SR-MPLS tunnel. If the return packet is forwarded along the shortest multi-hop IP path, multiple SR-MPLS LSPs may share the same SBFD return path. In this case, if the SBFD return path fails, all SBFD sessions go down, causing service interruptions. Services can recover only after the SBFD return path converges and SBFD goes up again.

If SBFD return packet forwarding over a tunnel is supported:

  • The SBFD packet sent by the initiator carries the binding SID of the SR-MPLS TE tunnel on the reflector. If the SR-MPLS TE tunnel has primary and backup LSPs, the SBFD packet also carries the Primary/Backup LSP flag.
  • When constructing a loopback packet, the reflector adds the binding SID carried in the SBFD packet sent by the initiator to the loopback SBFD Echo packet. In addition, depending on the Primary/Backup LSP flag carried in the SBFD packet, the reflector determines whether to steer the loopback SBFD Echo packet to the primary or backup LSP of the SR-MPLS TE tunnel. This ensures that the SBFD session status reflects the actual link status. In real-world deployment, make sure that the forward and reverse tunnels share the same LSP.
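The loopback construction for return packets forwarded over a tunnel can be sketched as follows. The binding SID value and label stacks are assumptions; the sketch shows only that the binding SID selects the reverse SR-MPLS TE tunnel and the Primary/Backup LSP flag selects which of its LSPs carries the looped-back packet.

```python
# Hypothetical table: binding SID -> label stacks of the reverse tunnel's LSPs.
REVERSE_TUNNEL_LSPS = {
    330: {"primary": [9004, 9003, 9005], "backup": [9001, 9002, 9005]},
}

def build_loopback(binding_sid, lsp_flag):
    """Steer the loopback SBFD Echo packet onto the flagged LSP."""
    tunnel = REVERSE_TUNNEL_LSPS.get(binding_sid)
    if tunnel is None:
        return None                      # no reverse tunnel: IP forwarding
    return {"labels": tunnel[lsp_flag], "binding_sid": binding_sid}
```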

In an inter-AS SR-MPLS TE tunnel scenario, if SBFD return packets are forwarded over the IP route by default, the inter-AS IP route may be unreachable, causing SBFD to go down. In this case, you can configure the SBFD return packets to be forwarded over the SR-MPLS TE tunnel.

SBFD State Machine on the Initiator
The initiator's SBFD state machine has only two states (up and down) and therefore can only switch between these two states. Figure 1-2591 shows how the SBFD state machine works.
Figure 1-2591 SBFD state machine on the initiator
  • Initial state: The initiator sets the initial state to Down in an SBFD packet to be sent to the reflector.
  • Status migration: After receiving a looped packet carrying the Up state, the initiator sets the local status to Up. After receiving a looped packet carrying the AdminDown state, the initiator sets the local status to Down. If the initiator does not receive a packet looped by the reflector before the timer expires, the initiator also sets the local status to Down.
  • Status holding: When the initiator is in the Up state and receives a looped packet carrying the Up state, it keeps the local status Up. When the initiator is in the Down state and either receives a looped packet carrying the AdminDown state or receives no looped packet before the timer expires, it keeps the local status Down.
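The two-state machine above can be sketched as follows (a minimal model of the described behavior, not the device implementation):

```python
class SbfdInitiator:
    """Initiator state machine with only Up and Down states."""

    def __init__(self):
        self.state = "down"          # initial state sent to the reflector

    def on_loopback(self, reflected_state):
        # A looped packet carrying Up brings the session up;
        # one carrying AdminDown brings (or keeps) it down.
        if reflected_state == "up":
            self.state = "up"
        elif reflected_state == "admin-down":
            self.state = "down"

    def on_detect_timer_expiry(self):
        # No looped packet before the timer expires: declare the path down.
        self.state = "down"
```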
Typical SBFD Applications

SBFD for SR-MPLS BE (SR LSP) and SBFD for SR-MPLS TE are typically used in SBFD for SR-MPLS scenarios.

SBFD for SR-MPLS BE

Figure 1-2592 shows a scenario where SBFD for SR-MPLS BE is deployed. Assume that the SRGB range for all the PEs and Ps in Figure 1-2592 is [16000–16100]. The SR-MPLS BE path is PE1->P4->P3->PE2.

With SBFD enabled, if a link or a P device on the primary path fails, PE1 rapidly detects the failure and switches traffic to another path, such as the VPN FRR protection path.
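Under the stated SRGB, each node's SR-MPLS BE label can be derived as SRGB base plus the node's SID index. The indexes themselves are not given in the text, so the computation below is a sketch under that assumption:

```python
SRGB_BASE, SRGB_END = 16000, 16100   # SRGB range stated for all PEs and Ps

def node_label(sid_index):
    """Incoming label for a node SID: SRGB base + SID index."""
    label = SRGB_BASE + sid_index
    assert label <= SRGB_END, "SID index falls outside the SRGB"
    return label
```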

Figure 1-2592 SBFD for SR-MPLS BE networking

SBFD for SR-MPLS TE LSP

Figure 1-2593 shows a scenario where SBFD for SR-MPLS TE LSP is deployed. The primary LSP of the SR-MPLS TE tunnel from PE1 to PE2 is PE1->P4->P3->PE2, which corresponds to the label stack {9004, 9003, 9005}. The backup LSP is PE1->P1->P2->PE2.

Figure 1-2593 SBFD for SR-MPLS TE LSP networking

After SBFD is configured, PE1 rapidly detects a failure and switches traffic to a backup SR-MPLS TE LSP once a link or P on the primary LSP fails.

SBFD for SR-MPLS TE Tunnel

SBFD for SR-MPLS TE LSP determines whether a primary/backup LSP switchover needs to be performed, whereas SBFD for SR-MPLS TE tunnel checks the actual tunnel status.
  • If SBFD for SR-MPLS TE tunnel is not configured, the tunnel status remains Up by default, and the effective status cannot be determined.

  • If SBFD for SR-MPLS TE tunnel is configured but SBFD is administratively down, the tunnel interface status is unknown because SBFD is not working in this case.

  • If SBFD for SR-MPLS TE tunnel is configured and SBFD is not administratively down, the tunnel interface status is the same as the SBFD status.

SR-MPLS TE Policy

Segment Routing Policy (SR-MPLS TE Policy) is a tunneling technology developed based on SR. An SR-MPLS TE Policy is a set of candidate paths consisting of one or more segment lists, that is, segment ID (SID) lists. Each SID list identifies an end-to-end path from the source to the destination, instructing a device to forward traffic through the path rather than the shortest path computed using an IGP. The header of a packet steered into an SR-MPLS TE Policy is augmented with an ordered list of segments associated with that SR-MPLS TE Policy, so that other devices on the network can execute the instructions encapsulated into the list.

An SR-MPLS TE Policy consists of the following parts:
  • Headend: node where an SR-MPLS TE Policy is generated.

  • Color: extended community attribute of an SR-MPLS TE Policy. A BGP route can recurse to an SR-MPLS TE Policy if the route has the same color attribute as the policy.

  • Endpoint: destination address of an SR-MPLS TE Policy.

Color and endpoint information is added to an SR-MPLS TE Policy through configuration. In path computation, a forwarding path is computed based on the color attribute that meets SLA requirements. The headend steers traffic into an SR-MPLS TE Policy whose color and endpoint attributes match the color value and next-hop address in the associated route, respectively. The color attribute defines an application-level network SLA policy. This allows network paths to be planned based on specific SLA requirements for services, realizing service value in a refined manner and helping build new business models.

SR-MPLS TE Policy Model

Figure 1-2594 shows the SR-MPLS TE Policy model. One SR-MPLS TE Policy can contain multiple candidate paths with the preference attribute. The valid candidate path with the highest preference functions as the primary path of the SR-MPLS TE Policy, and the valid candidate path with the second highest preference functions as the hot-standby path.

A candidate path can contain multiple segment lists, each of which carries a Weight attribute. Each segment list is an explicit label stack that instructs a network device to forward packets. Multiple segment lists can work in load balancing mode.
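The model above can be sketched as follows (hypothetical data structures): the valid candidate path with the highest preference becomes the primary path, the next highest becomes the hot-standby path, and a path's segment lists share load in proportion to their weights.

```python
def select_paths(candidate_paths):
    """Pick the primary and hot-standby paths among valid candidates."""
    valid = sorted((p for p in candidate_paths if p["valid"]),
                   key=lambda p: p["preference"], reverse=True)
    primary = valid[0] if valid else None
    hot_standby = valid[1] if len(valid) > 1 else None
    return primary, hot_standby

def load_share(path):
    """Weight-proportional load-balancing ratios across segment lists."""
    total = sum(sl["weight"] for sl in path["segment_lists"])
    return [(sl["sids"], sl["weight"] / total) for sl in path["segment_lists"]]
```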

Figure 1-2594 SR-MPLS TE Policy model
BGP SR-MPLS TE Policy Extension
The BGP SR-MPLS TE Policy extension provides the following functions:
  1. Transmits the SR-MPLS TE Policy dynamically generated on a controller to a forwarder through BGP extension. This function is implemented through the newly defined BGP SR-MPLS TE Policy subsequent address family identifier (SAFI). By establishing a BGP SR-MPLS TE Policy address family-specific peer relationship between the controller and forwarder, this function enables the controller to deliver the SR-MPLS TE Policy to the forwarder, enhancing the capability of automatic SR-MPLS TE Policy deployment.
  2. Supports the new extended community attribute (color attribute) and transmits the attribute between BGP peers. This function is implemented through the BGP network layer reachability information (NLRI) extension.
The format of the NLRI used by a BGP SR-MPLS TE Policy address family is as follows:
+------------------+
|    NLRI Length   | 1 octet
+------------------+
|   Distinguisher  | 4 octets
+------------------+
|   Policy Color   | 4 octets
+------------------+
|     Endpoint     | 4 or 16 octets
+------------------+
The meaning of each field is as follows:
  • NLRI Length: length of the NLRI
  • Distinguisher: distinguisher of the NLRI
  • Policy Color: color value of the associated SR-MPLS TE Policy
  • Endpoint: endpoint information about the associated SR-MPLS TE Policy
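The field layout above can be illustrated with a short Python sketch. Note two assumptions not stated in this document: the NLRI Length is taken to count the bits of the fields that follow (common BGP NLRI practice), and the helper names are illustrative only.

```python
import struct
import ipaddress

def encode_srte_nlri(distinguisher: int, color: int, endpoint: str) -> bytes:
    """Encode an SR-MPLS TE Policy NLRI: Length, Distinguisher, Color, Endpoint."""
    ep = ipaddress.ip_address(endpoint).packed          # 4 or 16 octets
    body = struct.pack("!II", distinguisher, color) + ep
    # Assumption: the 1-octet NLRI Length counts the bits that follow.
    return bytes([len(body) * 8]) + body

def decode_srte_nlri(data: bytes):
    """Decode the NLRI back into (distinguisher, color, endpoint)."""
    length_bits = data[0]
    body = data[1:1 + length_bits // 8]
    distinguisher, color = struct.unpack("!II", body[:8])
    endpoint = str(ipaddress.ip_address(body[8:]))      # 4 or 16 octets
    return distinguisher, color, endpoint
```

For an IPv4 endpoint the body is 12 octets (96 bits); for IPv6 it is 24 octets (192 bits).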

The NLRI containing SR-MPLS TE Policy information is carried in a BGP Update message. The Update message must carry mandatory BGP attributes and can also carry any optional BGP attributes.

The content of an SR-MPLS TE Policy is encoded using the new Tunnel-Type TLV in tunnel encapsulation attributes. The encoding format is as follows:
SR-MPLS TE Policy SAFI NLRI: <Distinguisher, Policy Color, Endpoint>
Attributes:
   Tunnel Encaps Attribute (23)
      Tunnel Type: SR-MPLS TE Policy
          Binding SID
          Preference
          ...
          Policy Name      
          Policy Candidate Path Name
          Segment List
              Weight
              Segment
              Segment
              ...
          ...
The meaning of each field is as follows:
  • Binding SID: binding SID of an SR-MPLS TE Policy.
  • Preference: preference value based on which candidate paths of an SR-MPLS TE Policy are selected.
  • Policy Name: name of an SR-MPLS TE Policy.
  • Policy Candidate Path Name: name of an SR-MPLS TE Policy's candidate path.
  • Segment List: list of segments, which contains Weight attribute and segment information. One SR-MPLS TE Policy can contain multiple segment lists, and one segment list can contain multiple segments.
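The structure above (a policy containing candidate paths with preferences, each containing weighted segment lists) can be sketched in Python. The class and function names are illustrative, not Huawei's implementation; selection follows the model described earlier, where the valid candidate path with the highest preference is primary and the second highest is hot-standby.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SegmentList:
    labels: List[int]        # explicit label stack used to forward packets
    weight: int = 1          # load-balancing weight among segment lists

@dataclass
class CandidatePath:
    preference: int
    segment_lists: List[SegmentList]
    valid: bool = True

def select_paths(paths: List[CandidatePath]
                 ) -> Tuple[Optional[CandidatePath], Optional[CandidatePath]]:
    """Return (primary, hot_standby): the valid candidate paths with the
    highest and second-highest preference values."""
    ranked = sorted((p for p in paths if p.valid),
                    key=lambda p: p.preference, reverse=True)
    primary = ranked[0] if ranked else None
    standby = ranked[1] if len(ranked) > 1 else None
    return primary, standby
```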

The Tunnel-Type TLV consists of the following sub-TLVs.

Preference Sub-TLV

Figure 1-2595 shows the format of the Preference Sub-TLV.
Figure 1-2595 Preference Sub-TLV
Table 1-1062 Fields in the Preference Sub-TLV
  • Type (8 bits): TLV type.
  • Length (8 bits): Packet length.
  • Flags (8 bits): Flags field. Currently, this field is not in use.
  • Preference (32 bits): SR-MPLS TE Policy preference.

Policy Name Sub-TLV

The Policy Name Sub-TLV is used to attach a name to an SR-MPLS TE Policy. It is optional. Figure 1-2596 shows its format.

Figure 1-2596 Policy Name Sub-TLV

Table 1-1063 describes the fields in the Policy Name Sub-TLV.

Table 1-1063 Fields in the Policy Name Sub-TLV
  • Type (8 bits): Sub-TLV type. The value is 130.
  • Length (16 bits): Length.
  • Reserved (8 bits): Reserved field. It must be set to 0 before transmission and ignored upon receipt.
  • Policy Name (variable length): Name of an SR-MPLS TE Policy. The value is a string of printable ASCII characters excluding the Null terminator.

Policy Candidate Path Name Sub-TLV

The Policy Candidate Path Name Sub-TLV is used to attach a name to an SR-MPLS TE Policy's candidate path. It is optional. Figure 1-2597 shows its format.

Figure 1-2597 Policy Candidate Path Name Sub-TLV

Table 1-1064 describes the fields in the Policy Candidate Path Name Sub-TLV.

Table 1-1064 Fields in the Policy Candidate Path Name Sub-TLV
  • Type (8 bits): Sub-TLV type. The value is 129.
  • Length (16 bits): Length.
  • Reserved (8 bits): Reserved field. It must be set to 0 before transmission and ignored upon receipt.
  • Policy Candidate Path Name (variable length): Name of an SR-MPLS TE Policy's candidate path. The value is a string of printable ASCII characters excluding the Null terminator.

Binding SID Sub-TLV

Figure 1-2598 shows the format of the Binding SID Sub-TLV.
Figure 1-2598 Binding SID Sub-TLV
Table 1-1065 Fields in the Binding SID Sub-TLV
  • Type (8 bits): TLV type.
  • Length (8 bits): Packet length. It does not contain the Type and Length fields. Available values are as follows:
      • 2: No binding SID is contained.
      • 6: An IPv4 binding SID is contained.
  • Flags (8 bits): Flags field. The encoding format is as follows:
    Figure 1-2599 Flags field
      • S: S-Flag encoding the Specified-BSID-only behavior
  • Binding SID (32 bits): Binding SID of the SR-MPLS TE Policy. The following figure shows the format.
    Figure 1-2600 Binding SID
    The Label field occupies 20 bits; the remaining 12 bits are currently not in use.

Segment List Sub-TLV

Figure 1-2601 shows the format of the Segment List Sub-TLV.
Figure 1-2601 Segment List Sub-TLV
Table 1-1066 Fields in the Segment List Sub-TLV
  • Type (8 bits): TLV type.
  • Length (16 bits): Packet length.
  • sub-TLVs (variable length): Sub-TLVs, comprising an optional Weight Sub-TLV and zero or more Segment Sub-TLVs.

Figure 1-2602 shows the format of the Weight Sub-TLV.
Figure 1-2602 Weight Sub-TLV
The meaning of each field is as follows:
  • Type: TLV type of 8 bits
  • Length: TLV length of 8 bits
  • Flags: flags of 8 bits (not in use currently)
  • Weight: weight value of a segment list.
Figure 1-2603 shows the format of the Segment Sub-TLV.
Figure 1-2603 Segment Sub-TLV in the form of MPLS label
The meaning of each field is as follows:
  • Type: TLV type of 8 bits
  • Length: TLV length of 8 bits
  • Flags: flags of 8 bits (not in use currently)
  • Label: label value of 20 bits
  • TC: traffic class of 3 bits
  • S: bottom of stack of 1 bit
  • TTL: TTL value of 8 bits
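Assuming the standard MPLS label stack entry layout (20-bit Label, 3-bit TC, 1-bit S, and 8-bit TTL packed into one 32-bit word), the Segment Sub-TLV's label fields can be packed and unpacked with bit operations:

```python
def pack_mpls_entry(label: int, tc: int, s: int, ttl: int) -> int:
    """Pack the fields into a 32-bit MPLS label stack entry:
    Label(20) | TC(3) | S(1) | TTL(8)."""
    return (label << 12) | (tc << 9) | (s << 8) | ttl

def unpack_mpls_entry(word: int):
    """Split a 32-bit MPLS label stack entry back into its fields."""
    label = (word >> 12) & 0xFFFFF   # top 20 bits
    tc    = (word >> 9)  & 0x7       # 3-bit traffic class
    s     = (word >> 8)  & 0x1       # bottom-of-stack bit
    ttl   =  word        & 0xFF      # 8-bit TTL
    return label, tc, s, ttl
```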

Extended Color Community

Figure 1-2604 shows the format of the Extended Color Community.
Figure 1-2604 Extended Color Community
Table 1-1067 Fields in the Extended Color Community
  • CO (8 bits): Color-Only bits indicating that the Extended Color Community is used to steer traffic into an SR-MPLS TE Policy. The current value is 00, indicating that traffic can be steered into an SR-MPLS TE Policy only when the color value and next-hop address in the BGP route match the color and endpoint attributes of the SR-MPLS TE Policy, respectively.
  • Color Value (32 bits): Extended Color Community attribute value.

BGP itself does not generate SR-MPLS TE Policies. Instead, it receives SR-MPLS TE Policy NLRIs from a controller to generate SR-MPLS TE Policies. Upon reception of an SR-MPLS TE Policy NLRI, BGP must determine whether the NLRI is acceptable based on the following rules:
  • The SR-MPLS TE Policy NLRI must contain Distinguisher, Policy Color, and Endpoint attributes.
  • The Update message carrying the SR-MPLS TE Policy NLRI must have either the NO_ADVERTISE community or at least one route-target extended community in IPv4 address format.
  • The Update message carrying the SR-MPLS TE Policy NLRI must have one Tunnel Encapsulation Attribute.
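The three acceptance rules above can be sketched as one validation function. The dict-based representation of the NLRI and path attributes is a simplification for illustration, not a real BGP message structure:

```python
def nlri_acceptable(nlri: dict, attrs: dict) -> bool:
    """Check an SR-MPLS TE Policy NLRI against the three acceptance rules.
    `nlri` and `attrs` use a simplified dict representation (assumption)."""
    # Rule 1: Distinguisher, Policy Color, and Endpoint must be present.
    if not all(k in nlri for k in ("distinguisher", "color", "endpoint")):
        return False
    # Rule 2: NO_ADVERTISE community, or at least one route-target
    # extended community in IPv4 address format.
    has_no_adv = "NO_ADVERTISE" in attrs.get("communities", [])
    has_ipv4_rt = any(rt.get("format") == "ipv4"
                      for rt in attrs.get("route_targets", []))
    if not (has_no_adv or has_ipv4_rt):
        return False
    # Rule 3: exactly one Tunnel Encapsulation Attribute.
    return len(attrs.get("tunnel_encaps", [])) == 1
```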
MTU for SR-MPLS TE Policy

If the actual forwarding path of an SR-MPLS TE Policy involves different physical link types, the links may support different MTUs. If this is the case, the headend must implement MTU control properly to prevent packets from being fragmented or discarded during transmission. Currently, no IGP/BGP-oriented MTU transmission or negotiation mechanism is available. You can configure MTUs for SR-MPLS TE Policies as needed.

MBB and Delayed Deletion for SR-MPLS TE Policies

SR-MPLS TE Policies support make-before-break (MBB). With the MBB function, a forwarder can establish a new segment list before removing the original one during a segment list update. In the establishment process, traffic is still forwarded through the original segment list, preventing packet loss upon a segment list switchover.

SR-MPLS TE Policies also support delayed deletion. With a segment list deletion delay being configured for an SR-MPLS TE Policy, if the SR-MPLS TE Policy has a segment list with a higher preference, a segment list switchover may be performed. In this case, the original segment list that is up can still be used and is deleted only when the delay expires. This prevents traffic interruptions during the segment list switchover.

Delayed deletion takes effect only for up segment lists (including backup segment lists) in an SR-MPLS TE Policy. If seamless bidirectional forwarding detection (SBFD) detects a segment list failure or does not obtain the segment list status, the segment list is considered invalid and then immediately deleted without any delay.

SR-MPLS TE Policy Creation

An SR-MPLS TE Policy can be manually configured on a forwarder through CLI or NETCONF. Alternatively, it can be delivered to a forwarder after being dynamically generated by a protocol, such as BGP, on a controller. The dynamic mode facilitates network deployment. If SR Policies generated in both modes exist, the forwarder selects an SR-MPLS TE Policy based on the following rules in descending order:
  • Protocol-Origin: The default value of Protocol-Origin is 20 for a BGP-delivered SR-MPLS TE Policy and is 30 for a manually configured SR-MPLS TE Policy. A larger value indicates a higher preference.
  • <ASN, node-address> tuple: ASN indicates an AS number. For both ASN and node-address, a smaller value indicates a higher preference. The ASN and node-address values of a manually configured SR-MPLS TE Policy are fixed at 0 and 0.0.0.0, respectively.
  • Discriminator: A larger value indicates a higher preference. For a manually configured SR-MPLS TE Policy, the value of Discriminator is the same as the preference value.
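The three selection rules translate into a single sort key: higher Protocol-Origin wins, then the lower <ASN, node-address> tuple, then the higher Discriminator. A minimal sketch, using a simplified dict per policy (an assumption, not the device's internal structure):

```python
import ipaddress

def policy_sort_key(p: dict):
    """Sort key implementing the selection rules, best policy first:
    negate the larger-is-better fields so a smaller tuple is better."""
    return (-p["protocol_origin"],                     # larger preferred
            p["asn"],                                  # smaller preferred
            int(ipaddress.ip_address(p["node_address"])),  # smaller preferred
            -p["discriminator"])                       # larger preferred

def best_policy(policies):
    """Pick the preferred SR-MPLS TE Policy among candidates."""
    return min(policies, key=policy_sort_key)
```

For example, a manually configured policy (Protocol-Origin 30, ASN 0, node-address 0.0.0.0) is preferred over a BGP-delivered one (Protocol-Origin 20).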
Manual SR-MPLS TE Policy Configuration
You can manually configure an SR-MPLS TE Policy through CLI or NETCONF. For a manually configured SR-MPLS TE Policy, information, such as the endpoint and color attributes, the preference values of candidate paths, and segment lists, must be configured. Moreover, the preference values must be unique. The first-hop label of a segment list can be a node, adjacency, BGP EPE, parallel, or anycast SID, but cannot be any binding SID. Ensure that the first-hop label is a local incoming label, so that the forwarding plane can use this label to search the local forwarding table for the corresponding forwarding entry.
Figure 1-2605 Manually configured SR-MPLS TE Policy
SR-MPLS TE Policy Delivery by a Controller
Figure 1-2606 shows the process in which a controller dynamically generates and delivers an SR-MPLS TE Policy to a forwarder. The process is as follows:
  1. The controller collects information, such as network topology and label information, through BGP-LS.
  2. The controller and headend forwarder establish a BGP peer relationship in the IPv4 SR-MPLS TE Policy address family.
  3. The controller computes an SR-MPLS TE Policy and delivers it to the headend forwarder through the BGP peer relationship. The headend forwarder then generates SR-MPLS TE Policy entries.
Figure 1-2606 Controller-delivered SR-MPLS TE Policy
ODN Mode for SR-MPLS TE Policies

In traditional color-based traffic steering mode, the headend device recurses a BGP route to an SR-MPLS TE Policy based on the color extended community attribute and next hop of the route. This requires the SR-MPLS TE Policy to have been created on the headend device before the recursion takes place.

Figure 1-2607 provides an example for creating an SR-MPLS TE Policy in on-demand next hop (ODN) mode. This mode does not require a large number of SR-MPLS TE Policies to be configured in advance. Instead, it enables SR-MPLS TE Policy creation to be dynamically triggered on demand based on service routes, simplifying network operations. During SR-MPLS TE Policy creation, you can select a pre-configured attribute template and constraint template to ensure that the to-be-created SR-MPLS TE Policy meets service requirements.

Figure 1-2607 Creating an SR-MPLS TE Policy in ODN mode

The process of creating an SR-MPLS TE Policy in ODN mode is as follows:

  1. CE2 advertises a service route to PE3.
  2. PE3 adds the color extended community attribute to the route. This attribute is used to specify the SLA requirements of a route, such as requiring a low-latency or high-bandwidth path.
  3. PE3 advertises the route to PE1 through the BGP peer relationship, with the next hop being PE3 and the color value being 30.
  4. A group of ODN templates, each corresponding to an expected color to represent an SLA requirement, are preconfigured on PE1.
  5. After receiving the BGP route, PE1 matches the color extended community attribute of the route against the color value in the ODN template. If the matching succeeds, PE1 sends a path computation request to the PCE. In this case, the headend, color value, and endpoint of the SR-MPLS TE Policy are PE1, 30, and PE3, respectively.
  6. According to the collected network topology information and the constraints defined by the ODN template, the PCE computes a candidate path for the SR-MPLS TE Policy and delivers the path to the headend PE1 through PCEP. In this case, SR-ERO subobjects are used to carry path information about the SR-MPLS TE Policy.
  7. Functioning as a PCC, PE1 receives the candidate path information delivered by the PCE and then installs the path. The segment list of the candidate path is <P1, P2, PE4, PE3>, excluding the link with the affinity attribute of red. After completing path installation, the PCC sends a PCRpt message to the PCE to report the SR-MPLS TE Policy status.

If the PCC has delegated the control permission to the PCE, the PCE recomputes a path when network information (e.g., topology information) or ODN template information changes. The PCE sends a PCUpd message to deliver information about the recomputed path to the PCC and uses the PLSP-ID reported by the PCC as an identifier. After receiving the PCUpd message delivered by the PCE, the PCC (headend) updates the path.

Traffic Steering into an SR-MPLS TE Policy

Route Coloring

Route coloring is to add the Color Extended Community to a route through a route-policy, enabling the route to recurse to an SR-MPLS TE Policy based on the color value and next-hop address in the route.

The route coloring process is as follows:
  1. Configure a route-policy and set a specific color value for the desired route.

  2. Apply the route-policy to a BGP peer or a VPN instance as an import or export policy.

Color-based Traffic Steering
Color-based traffic steering is to steer traffic into an SR-MPLS TE Policy by recursing the route of the traffic to the SR-MPLS TE Policy based on the color extended community attribute and next hop of the route. Figure 1-2608 shows the traffic steering process.
Figure 1-2608 Color-based traffic steering
The color-based traffic steering process is as follows:
  1. The controller delivers an SR-MPLS TE Policy to headend device A. The SR-MPLS TE Policy's color value is 123 and its endpoint value is the address 10.1.1.3 of device B.
  2. A BGP or VPN export policy is configured on device B, or a BGP or VPN import policy is configured on device A, with the color value being set to 123 for route prefix 10.1.1.0/24 and the next-hop address in the route being set to address 10.1.1.3 of device B. Then, the route is delivered to device A through the BGP peer relationship.
  3. A tunnel policy is configured on device A. After receiving BGP route 10.1.1.0/24, device A recurses the route to the SR-MPLS TE Policy based on color value 123 and next-hop address 10.1.1.3. During forwarding, the label stack <C, E, G, B> is added to the packets destined for 10.1.1.0/24.
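The recursion in the steps above amounts to a lookup keyed by (color, endpoint). A minimal sketch, mirroring the example (color 123, endpoint 10.1.1.3, label stack <C, E, G, B>), with the dict-based policy table being an illustrative simplification:

```python
def resolve_route(policies: dict, route_color: int, next_hop: str):
    """Recurse a BGP route to an SR-MPLS TE Policy: the route's color
    value and next-hop address must match the policy's color and
    endpoint attributes.  `policies` maps (color, endpoint) to a label
    stack (simplified representation, an assumption)."""
    return policies.get((route_color, next_hop))

# Policy delivered to headend device A in the example above.
policies = {(123, "10.1.1.3"): ["C", "E", "G", "B"]}
```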
DSCP-based Traffic Steering

EVPN VPWS and EVPN VPLS packets do not support DSCP-based traffic steering because they do not carry DSCP values.

DSCP-based traffic steering does not support color-based route recursion. Instead, it recurses a route into an SR-MPLS TE Policy based on the next-hop address in the route. Specifically, it searches for the SR-MPLS TE Policy group matching specific endpoint information and then finds the corresponding SR-MPLS TE Policy based on the DSCP value of packets. Figure 1-2609 shows the DSCP-based traffic steering process.

Figure 1-2609 DSCP-based traffic steering
The DSCP-based traffic steering process is as follows:
  1. The controller sends two SR Policies to headend device A. SR-MPLS TE Policy 1's color value is 123 and its endpoint value is address 10.1.1.3 of device B. SR-MPLS TE Policy 2's color value is 124 and its endpoint value is also address 10.1.1.3 of device B.
  2. Device B delivers BGP route 10.1.1.0/24 to headend device A through the BGP peer relationship.
  3. Tunnel policy configuration is performed on headend device A. Specifically, an SR-MPLS TE Policy group is configured, with its endpoint being address 10.1.1.3 of device B. Color value 123 is mapped to DSCP value 10, and color value 124 is mapped to DSCP value 20. Then, a tunnel policy is configured on device A to bind the SR-MPLS TE Policy group and the next-hop address in the route.
  4. Device A implements tunnel recursion based on the destination address of packets and finds the SR-MPLS TE Policy group bound to the destination address. After the color value matching the DSCP value of the packets and the specific SR-MPLS TE Policy matching the color value are found, the packets can be steered into the SR-MPLS TE Policy.
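The two-stage lookup in steps 3 and 4 (endpoint first, then DSCP-to-color mapping) can be sketched as follows; the nested-dict representation of the policy group is an illustrative assumption:

```python
def dscp_steer(groups: dict, next_hop: str, dscp: int):
    """DSCP-based steering: find the SR-MPLS TE Policy group by the
    route's next hop (endpoint), then pick the policy whose mapped
    color matches the packet's DSCP value."""
    group = groups.get(next_hop)
    if group is None:
        return None                       # no group bound to this endpoint
    color = group["dscp_to_color"].get(dscp)
    return group["policies"].get(color)

# Mirrors the example: endpoint 10.1.1.3, DSCP 10 -> color 123,
# DSCP 20 -> color 124.
groups = {"10.1.1.3": {
    "dscp_to_color": {10: 123, 20: 124},
    "policies": {123: "policy-1", 124: "policy-2"}}}
```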

SR-MPLS TE Policy-based Data Forwarding

Intra-AS Data Forwarding
Intra-AS data forwarding depends on the establishment of intra-AS SR-MPLS TE Policy forwarding entries. Figure 1-2610 shows the process of establishing intra-AS SR-MPLS TE Policy forwarding entries.
Figure 1-2610 Establishment of intra-AS SR-MPLS TE Policy forwarding entries
The process of establishing intra-AS SR-MPLS TE Policy forwarding entries is as follows:
  1. The controller delivers an SR-MPLS TE Policy to headend device A.
  2. Endpoint device B advertises BGP route 10.1.1.0/24 to device A, and the next-hop address in the BGP route is address 10.1.1.3/32 of device B.
  3. A tunnel policy is configured on ingress A. After receiving the BGP route, device A recurses the route to the SR-MPLS TE Policy based on the color and next-hop address in the route. The label stack of the SR-MPLS TE Policy is <20003, 20005, 20007, 20002>.
Figure 1-2611 shows the intra-AS data forwarding process. Headend device A finds the SR-MPLS TE Policy and pushes label stack <20003, 20005, 20007, 20002> into the packet destined for 10.1.1.0/24. After receiving an SR-MPLS TE packet, devices C, E, and G all pop the outer label in the SR-MPLS TE packet. Finally, device G forwards the received packet to device B. After receiving the packet, device B further processes it based on the packet destination address.
Figure 1-2611 Intra-AS data forwarding
Inter-AS Data Forwarding

Inter-AS data forwarding depends on the establishment of inter-AS SR-MPLS TE Policy forwarding entries. Figure 1-2612 shows the process of establishing inter-AS SR-MPLS TE Policy forwarding entries.

Figure 1-2612 Establishment of inter-AS SR-MPLS TE Policy forwarding entries
The process of establishing inter-AS SR-MPLS TE Policy forwarding entries is as follows:
  1. The controller delivers an intra-AS SR-MPLS TE Policy to headend device ASBR3 in AS 200. The SR-MPLS TE Policy's color value is 123, endpoint is IP address 10.1.1.3 of PE1, and binding SID is 30028.
  2. The controller delivers an inter-AS E2E SR-MPLS TE Policy to headend device CSG1 in AS 100. The SR-MPLS TE Policy's color value is 123, and endpoint is address 10.1.1.3 of PE1. The segment list combines the intra-AS label of AS 100, the inter-AS BGP Peer-Adj label, and the binding SID of AS 200, forming <102, 203, 3040, 30028>.
  3. A BGP or VPN export policy is configured on PE1, with the color value being set to 123 for route prefix 10.1.1.0/24 and the next-hop address in the route being set to address 10.1.1.3 of PE1. Then, the route is advertised to CSG1 through the BGP peer relationship.
  4. A tunnel policy is configured on headend device CSG1. After receiving BGP route 10.1.1.0/24, CSG1 recurses the route to the E2E SR-MPLS TE Policy based on the color value and next-hop address in the route. The label stack of the SR-MPLS TE Policy is <102, 203, 3040, 30028>.

Figure 1-2613 shows the inter-AS data forwarding process.

Figure 1-2613 Inter-AS SR-MPLS TE Policy data forwarding

The inter-AS SR-MPLS TE Policy data forwarding process is as follows:

  1. Headend device CSG1 adds label stack <102, 203, 3040, 30028> to the data packet. Then, the device searches for the outbound interface according to label 102 of the CSG1->AGG1 adjacency and removes the label. The packet carrying label stack <203, 3040, 30028> is forwarded to downstream device AGG1 through the CSG1->AGG1 adjacency.

  2. After receiving the packet, AGG1 searches for the adjacency matching top label 203 in the label stack, finds that the corresponding outbound interface is the AGG1->ASBR1 adjacency, and removes the label. The packet carrying label stack <3040, 30028> is forwarded to downstream device ASBR1 through the AGG1->ASBR1 adjacency.

  3. After receiving the packet, ASBR1 searches for the adjacency matching top label 3040 in the label stack, finds that the corresponding outbound interface is the ASBR1->ASBR3 adjacency, and removes the label. The packet carrying label stack <30028> is forwarded to downstream device ASBR3 through the ASBR1->ASBR3 adjacency.

  4. After receiving the packet, ASBR3 searches the forwarding table based on top label 30028 in the label stack. 30028 is a binding SID that corresponds to label stack <405, 506>. Then, the device searches for the outbound interface based on label 405 of the ASBR3->P1 adjacency and removes the label. The packet carrying label stack <506> is forwarded to downstream device P1 through the ASBR3->P1 adjacency.

  5. After receiving the packet, P1 searches for the adjacency matching top label 506 in the label stack, finds that the corresponding outbound interface is the P1->PE1 adjacency, and removes the label. In this case, the packet does not carry any label and is forwarded to destination device PE1 through the P1->PE1 adjacency. After receiving the packet, PE1 further processes it based on the packet destination address.
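The five forwarding steps above, including the binding SID expansion on ASBR3, can be modeled with a short sketch. The adjacency and binding SID tables below reuse the labels from the example; the flat-dict model is an illustrative simplification of per-node forwarding tables:

```python
def forward(label_stack, adjacencies, binding_sids):
    """Walk a packet along its label stack, hop by hop.  An adjacency
    label is popped and yields the next-hop node; a binding SID is
    replaced by the label stack it represents."""
    stack = list(label_stack)
    hops = []
    while stack:
        top = stack.pop(0)                  # inspect and pop the top label
        if top in binding_sids:             # e.g. 30028 -> <405, 506>
            stack = binding_sids[top] + stack
            continue
        hops.append(adjacencies[top])       # forward over the adjacency
    return hops

adjacencies = {102: "AGG1", 203: "ASBR1", 3040: "ASBR3",
               405: "P1", 506: "PE1"}
binding_sids = {30028: [405, 506]}
```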

SBFD for SR-MPLS TE Policy

Unlike RSVP-TE, which exchanges Hello messages between forwarders to maintain tunnel status, an SR-MPLS TE Policy cannot maintain its status in the same way. An SR-MPLS TE Policy is established immediately after the headend delivers a label stack and remains up as long as the label stack is not revoked. Therefore, seamless bidirectional forwarding detection (SBFD) for SR-MPLS TE Policy is introduced for SR-MPLS TE Policy fault detection. SBFD for SR-MPLS TE Policy is an end-to-end fast detection mechanism that quickly detects faults on the links through which an SR-MPLS TE Policy passes.

Figure 1-2614 shows the SBFD for SR-MPLS TE Policy detection process.
Figure 1-2614 SBFD for SR-MPLS TE Policy
The SBFD for SR-MPLS TE Policy detection process is as follows:
  1. After SBFD for SR-MPLS TE Policy is enabled on the headend, the headend by default uses the endpoint address (IPv4 address only) as the remote discriminator of the SBFD session corresponding to each segment list in the SR-MPLS TE Policy. If multiple segment lists exist in the SR-MPLS TE Policy, the remote discriminators of the corresponding SBFD sessions are the same.
  2. The headend sends an SBFD packet encapsulated with a label stack corresponding to the SR-MPLS TE Policy.
  3. After the endpoint device receives the SBFD packet, it returns a reply through the shortest IP link.
  4. If the headend receives the reply, it considers that the corresponding segment list in the SR-MPLS TE Policy is normal. Otherwise, it considers that the segment list is faulty. If all the segment lists referenced by a candidate path are faulty, SBFD triggers a candidate path switchover.
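The default remote-discriminator derivation in step 1 can be sketched as follows. Interpreting the IPv4 endpoint address as a 32-bit integer is an assumption about the internal representation:

```python
import ipaddress

def remote_discriminator(endpoint: str) -> int:
    """Derive the default SBFD remote discriminator from the policy's
    IPv4 endpoint address (address taken as a 32-bit integer -- an
    assumption for illustration)."""
    addr = ipaddress.ip_address(endpoint)
    if addr.version != 4:
        raise ValueError("SBFD for SR-MPLS TE Policy uses IPv4 endpoints only")
    return int(addr)
```

For endpoint 10.1.1.3, every segment list's SBFD session would share the same remote discriminator, 0x0A010103.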

SBFD return packets are forwarded over IP. If the primary paths of multiple SR-MPLS TE Policies between two nodes differ due to different path constraints but SBFD return packets are transmitted over the same path, a fault in the return path may cause all involved SBFD sessions to go down. As a result, all the SR-MPLS TE Policies between the two nodes go down. The SBFD sessions of multiple segment lists in the same SR-MPLS TE Policy also have this problem.

By default, if HSB protection is not enabled for an SR-MPLS TE Policy, SBFD detects all the segment lists only in the candidate path with the highest preference in the SR-MPLS TE Policy. With HSB protection enabled, SBFD can detect all the segment lists of candidate paths with the highest and second highest priorities in the SR-MPLS TE Policy. If all the segment lists of the candidate path with the highest preference are faulty, a switchover to the HSB path is triggered.

SR-MPLS TE Policy Failover

SBFD for SR-MPLS TE Policy checks the connectivity of segment lists. If a segment list is faulty, an SR-MPLS TE Policy failover is triggered.

On the networks shown in Figure 1-2615 and Figure 1-2616, the headend and endpoint of SR-MPLS TE Policy 1 are PE1 and PE2, respectively. In addition, the headend and endpoint of SR-MPLS TE Policy 2 are PE1 and PE3, respectively. VPN FRR can be deployed for SR-MPLS TE Policy 1 and SR-MPLS TE Policy 2. Primary and HSB paths are established for SR-MPLS TE Policy 1. Segment list 1 contains node labels to P1, P2, and PE2. It can directly use all protection technologies of SR-MPLS, for example, TI-LFA.
Figure 1-2615 SR-MPLS TE Policy Failover
Figure 1-2616 SR-MPLS TE Policy label stack
In Figure 1-2615:
  • If point 1, 3, or 5 is faulty, TI-LFA local protection takes effect only on PE1, P1, or P2. If the SBFD session corresponding to segment list 1 on PE1 detects a fault before traffic is restored through local protection, SBFD automatically sets segment list 1 to down and instructs SR-MPLS TE Policy 1 to switch to segment list 2.
  • If point 2 or 4 is faulty, no local protection is available. SBFD detects node faults, sets segment list 1 to down, and instructs SR-MPLS TE Policy 1 to switch to segment list 2.
  • If PE2 is faulty and all the candidate paths of SR-MPLS TE Policy 1 are unavailable, SBFD can detect the fault, set SR-MPLS TE Policy 1 to down, and trigger a VPN FRR switchover to switch traffic to SR-MPLS TE Policy 2.

SR-MPLS TE Policy OAM

SR-MPLS TE Policy operations, administration and maintenance (OAM) is used to monitor SR-MPLS TE Policy connectivity and quickly detect SR-MPLS TE Policy faults. Currently, SR-MPLS TE Policy OAM is implemented through ping and tracert tests.

An SR-MPLS TE Policy implements policy-level control over paths, and each path is an actual forwarding path carrying a SID stack. There are two levels of SR-MPLS TE Policy detection.
  1. SR-MPLS TE Policy-level detection: You can configure multiple candidate paths for an SR-MPLS TE Policy. The valid path with the highest preference is the primary path, and the valid path with the second highest preference is the backup path. Multiple segment lists of a candidate path work in load balancing mode. The same candidate path can be configured for different SR Policies. SR-MPLS TE Policy-level detection is to detect the primary and backup paths in an SR-MPLS TE Policy.
  2. Candidate path-level detection: This detection is basic and does not involve upper-layer policy services. It only detects whether a candidate path is normal.

Policy-level detection is equivalent to candidate path-level detection if the preferred primary and backup paths are found. If the primary and backup paths have multiple segment lists, the ping and tracert tests check all the segment lists through the same process and generate test results.

SR-MPLS TE Policy Ping

On the network shown in Figure 1-2617, SR is enabled on the PE and P devices on the public network. An SR-MPLS TE Policy is established from PE1 to PE2. The SR-MPLS TE Policy has only one primary path, and its segment list is PE1->P1->P2->PE2.

Figure 1-2617 SR-MPLS TE Policy Ping/Tracert

The process of initiating a ping test on the SR-MPLS TE Policy from PE1 is as follows:

  1. PE1 constructs an MPLS Echo Request packet. In the IP header of the packet, the destination IP address is an address in the 127.0.0.0/8 range, and the source IP address is the MPLS LSR ID of PE1. MPLS labels are encapsulated as a SID label stack in segment list form.

    Note that, if an adjacency SID is configured, headend device PE1 only detects the FEC of P1. As a result, after the ping packet reaches PE2, the FEC of PE2 fails to be verified. Therefore, to ensure that the FEC verification on the endpoint device succeeds in this scenario, the MPLS Echo Request packet must be encapsulated with nil_FEC.

  2. PE1 forwards the packet to PE2 hop by hop based on the segment list label stack of the SR-MPLS TE Policy.

  3. P2 removes the outer label from the received packet and forwards the packet to PE2. In this case, all labels of the packet are removed.

  4. PE2 sends the packet to the host transceiver for processing, constructs an MPLS Echo Reply packet, and sends the packet to PE1. The destination address of the packet is the MPLS LSR ID of PE1. Because no SR-MPLS TE Policy is bound to the destination address of the reply packet, IP forwarding is implemented for the packet.

  5. After receiving the reply packet, PE1 generates ping test results.

If there are primary and backup paths with multiple segment lists, the ping test checks all the segment lists.

SR-MPLS TE Policy Tracert

On the network in Figure 1-2617, the process of initiating a tracert test on the SR-MPLS TE Policy from PE1 is as follows:

  1. PE1 constructs an MPLS Echo Request packet. In the IP header of the packet, the destination IP address is an address in the 127.0.0.0/8 range, and the source IP address is an MPLS LSR ID. MPLS labels are encapsulated as a SID label stack in segment list form.

  2. PE1 forwards the packet to P1. After receiving the packet, P1 determines whether the outer TTL minus one is zero.

    • If the outer TTL minus one is zero, an MPLS TTL timeout occurs and P1 sends the packet to the host transceiver for processing.

    • If the outer TTL minus one is greater than zero, P1 removes the outer MPLS label, copies the decremented TTL value into the new outer MPLS label, searches the forwarding table for the outbound interface, and forwards the packet to P2.

  3. Similar to P1, P2 also determines whether the outer TTL minus one is zero.

    • If the outer TTL minus one is zero, an MPLS TTL timeout occurs and P2 sends the packet to the host transceiver for processing.

    • If the outer TTL minus one is greater than zero, P2 removes the outer MPLS label, copies the decremented TTL value into the new outer MPLS label, searches the forwarding table for the outbound interface, and forwards the packet to PE2.

  4. After receiving the packet, PE2 removes the outer MPLS label and sends the packet to the host transceiver for processing. In addition, PE2 returns an MPLS Echo Reply packet to PE1, with the destination address of the packet being the MPLS LSR ID of PE1. Because no SR-MPLS TE Policy is bound to the destination address of the reply packet, IP forwarding is implemented for the packet.

  5. After receiving the reply packet, PE1 generates tracert test results.

If there are primary and backup paths with multiple segment lists, the tracert test checks all the segment lists.
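The TTL-driven hop discovery described in the tracert steps above can be modeled with a minimal sketch: probe k carries outer TTL k, and the node at which the decremented TTL reaches zero replies, revealing hop k.

```python
def tracert_hops(path):
    """Simulate MPLS TTL-expiry probing over one segment list.
    `path` is the ordered list of nodes after the headend
    (simplified model for illustration)."""
    discovered = []
    for ttl in range(1, len(path) + 1):     # one probe per TTL value
        remaining = ttl
        for node in path:
            remaining -= 1                  # each node decrements the TTL
            if remaining == 0:              # TTL timeout: this node replies
                discovered.append(node)
                break
    return discovered
```

For the segment list PE1->P1->P2->PE2, successive probes discover P1, P2, and PE2 in order.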

TI-LFA FRR

Topology-independent loop-free alternate (TI-LFA) fast reroute (FRR) provides link and node protection for Segment Routing (SR) tunnels. If a link or node fails, it enables traffic to be rapidly switched to a backup path, minimizing traffic loss.

SR-MPLS TI-LFA FRR applies to both SR-MPLS BE and loose SR-MPLS TE scenarios.

Related Concepts
Table 1-1068 TI-LFA FRR-related concepts

Concept

Definition

P space

The P space is a set of nodes reachable from the source node of a protected link using the shortest path tree (SPT) rooted at the source node without traversing the protected link.

Extended P space

The extended P space is a set of nodes reachable from all the neighbors of a protected link's source node using SPTs rooted at the neighbors without traversing the protected link. A node in the P space or extended P space is called a P node.

Q space

The Q space is a set of nodes reachable from the destination node of a protected link using the reverse SPT rooted at the destination node without traversing the protected link. A node in the Q space is called a Q node.

PQ node

A PQ node resides in both the (extended) P space and Q space and functions as the destination node of a protected tunnel.

LFA

With LFA, a device runs the SPF algorithm to compute the shortest paths to the destination, using each neighbor that can provide a backup link as a root node. The device then computes a group of loop-free backup links with the minimum cost. For more information about LFA, see IS-IS Auto FRR.

RLFA

Remote LFA (RLFA) computes a PQ node based on a protected link and establishes a tunnel between the source and PQ nodes to offer alternate next hop protection. If the protected link fails, traffic is automatically switched to the backup path, improving network reliability. For more information about RLFA, see IS-IS Auto FRR.

NOTE:

When computing an RLFA FRR backup path, Huawei devices compute the extended P space by default.

TI-LFA

In some LFA FRR and RLFA scenarios, the extended P space and Q space neither intersect nor have direct neighbors. Consequently, no backup path can be computed, failing to meet reliability requirements. TI-LFA solves this problem by computing the extended P space, Q space, and post-convergence SPT based on the protected path, computing a scenario-specific repair list, and establishing an SR tunnel from the source node to a P node and then to a Q node to offer alternate next hop protection. If the protected link fails, traffic is automatically switched to the backup path, improving network reliability.
NOTE:

When computing a TI-LFA FRR backup path, Huawei devices compute the extended P space by default.
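As an illustration of the P space and Q space definitions above, the following Python sketch derives both spaces for a small topology by comparing shortest-path distances with and without the protected link. This is a simplification that ignores ECMP tie-breaking on real devices; all names are illustrative.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src; graph: {node: {neighbor: cost}}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def remove_link(graph, u, v):
    """Copy the graph with the protected link u-v removed."""
    g = {n: dict(nbrs) for n, nbrs in graph.items()}
    g[u].pop(v, None)
    g[v].pop(u, None)
    return g

def p_and_q_space(graph, src, dst):
    """P space of src and Q space of dst for the protected link src-dst.

    Simplification: a node is in the space if its shortest distance is
    unchanged when the protected link is removed, i.e. its shortest
    path need not traverse that link.
    """
    pruned = remove_link(graph, src, dst)
    d_src, d_src_p = dijkstra(graph, src), dijkstra(pruned, src)
    d_dst, d_dst_p = dijkstra(graph, dst), dijkstra(pruned, dst)
    nodes = set(graph) - {src, dst}
    p_space = {n for n in nodes if d_src_p.get(n) == d_src.get(n)}
    q_space = {n for n in nodes if d_dst_p.get(n) == d_dst.get(n)}
    return p_space, q_space
```

On a ring like the RLFA example later in this section (DeviceB-DeviceE protected, with a high-cost DeviceC-DeviceD link), the two spaces come out disjoint with no PQ node, which is exactly the case TI-LFA is designed to handle.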

Background

Conventional loop-free alternate (LFA) requires that at least one neighbor be a loop-free next hop to a destination, and RLFA requires that at least one node be connected to the source and destination nodes without traversing the failure point. In contrast, Topology-Independent Loop-Free Alternate (TI-LFA) uses an explicit path to represent a backup path, achieving higher FRR reliability without imposing topology constraints.

Assume that packets need to be forwarded from DeviceA to DeviceF on the network shown in Figure 1-2618. If the extended P space and Q space do not intersect, RLFA requirements are not met. Consequently, no backup path (also called a remote LDP LSP) can be computed using RLFA. If the link between DeviceB and DeviceE fails, DeviceB forwards the packets to DeviceC. However, DeviceC is not a Q node and therefore cannot directly forward the packets to the destination address. In this situation, DeviceC must recompute a path. Because the cost of the link between DeviceC and DeviceD is 100, DeviceC considers that the optimal path to DeviceF passes through DeviceB. Consequently, DeviceC forwards the packets back to DeviceB, resulting in a loop and forwarding failure.
Figure 1-2618 RLFA networking
TI-LFA can be used to solve the preceding problem. On the network shown in Figure 1-2619, if the link between DeviceB and DeviceE fails, DeviceB adds new path information (node SID of DeviceC and adjacency SID for the C-D adjacency) to the packets based on TI-LFA FRR backup entries. This ensures that the packets can be forwarded along the backup path.
Figure 1-2619 TI-LFA networking
TI-LFA FRR Principles

On the network shown in Figure 1-2620, PE1 and PE3 are source and destination nodes, respectively, and link costs are configured. This example assumes that P1 fails.

TI-LFA provides both link and node protection.

  • Link protection: protects traffic traversing a specific link.

  • Node protection: protects traffic traversing a specific node.

    During protection path computation, node protection takes precedence over link protection. If a node protection path has been computed, no link protection path will be computed. However, if no node protection path has been computed, a link protection path will be computed.
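The precedence rule above can be expressed as a small helper (an illustrative sketch, not device behavior):

```python
def select_protection_path(node_protect_path, link_protect_path):
    """Node protection takes precedence over link protection: a link
    protection path is used only when no node protection path has been
    computed. Paths are given as node lists; None means not computed."""
    if node_protect_path is not None:
        return ("node", node_protect_path)
    if link_protect_path is not None:
        return ("link", link_protect_path)
    return (None, None)
```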

Figure 1-2620 Typical TI-LFA networking

The following uses node protection as an example to describe how to implement TI-LFA. Assume that traffic travels along the PE1 -> P1 -> P3 -> PE3 path on the network shown in Figure 1-2620. To prevent traffic loss caused by P1 failures, TI-LFA computes the extended P space, Q space, post-convergence SPT to be used after P1 fails, backup outbound interface, and repair list, and then generates backup forwarding entries.

The TI-LFA FRR computation process is as follows:
  1. Computes the extended P space {PE2, P2} with PE1's neighbor PE2 as the root node.

  2. Computes the Q space {P3, P4, PE3, PE4} with P3 as the root node.

    In this example, the end node on the traffic forwarding path is PE3, which has only one egress (P3). As such, to simplify calculation, P3 can be used as the root.

  3. Computes the post-convergence SPT by excluding the primary next hop.

  4. Computes a backup outbound interface and a repair list.
    • Backup outbound interface: If the extended P space and Q space neither intersect nor have direct neighbors, the post-convergence next-hop outbound interface functions as the backup outbound interface.

    • Repair list: a constrained path through which traffic can be directed to a Q node. The repair list consists of a P node label (SID) and a P-to-Q adjacency label. In Figure 1-2620, the repair list consists of P2's node label 100 and the P2-to-P4 adjacency label 9204.

During backup path computation, a SID advertised by the repair node needs to be selected according to the following rules so that a forwarding label can be generated:
  • The node SID advertised by the repair node is preferentially selected. If the repair node does not advertise a node SID, a prefix SID advertised by the repair node is preferentially selected. The smaller the prefix SID, the higher the priority.
  • A node that does not support SR or does not advertise any prefix or node SID cannot function as the repair node.
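The SID selection rules above can be modeled as follows (an illustrative Python sketch; the function name and parameters are assumptions):

```python
def select_repair_sid(node_sid, prefix_sids, supports_sr=True):
    """Pick the SID used to generate the repair-list forwarding label.

    Rules modeled from the text: the repair node's node SID is
    preferred; otherwise the smallest advertised prefix SID is chosen;
    a node that does not support SR, or advertises neither a node SID
    nor a prefix SID, cannot function as the repair node (None).
    """
    if not supports_sr:
        return None
    if node_sid is not None:
        return node_sid
    if prefix_sids:
        return min(prefix_sids)  # smaller prefix SID, higher priority
    return None
```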
TI-LFA FRR Forwarding

After a TI-LFA backup path is computed, if the primary path fails, traffic is switched to the backup path, preventing packet loss.

Assume that a packet carrying the prefix SID 16100 of PE3 needs to be forwarded from PE1 to PE3 over the PE1 -> P1 -> P3 -> PE3 path shown in Figure 1-2621. When the primary next hop P1 fails, TI-LFA FRR is triggered, switching traffic to the backup path.

Figure 1-2621 Forwarding over a TI-LFA FRR backup path

The process of forwarding a packet over a TI-LFA FRR backup path is as follows:

  1. PE1 encapsulates a label stack into the packet based on the repair list, using P2's node label 100 as the outer label and the P2-to-P4 adjacency label 9204 as the inner label.
  2. After receiving the packet, PE2 searches its label forwarding table according to the outer label 100 and then forwards the packet to P2.
  3. After receiving the packet, P2 searches its label forwarding table according to the outer label. Because P2 is the egress node, it removes the label 100 to expose the label 9204, which is an adjacency SID allocated by P2. As such, P2 finds the corresponding outbound interface according to the label 9204, removes the label, and then forwards the remaining packet content to P4.
  4. After receiving the packet, P4 searches its label forwarding table according to the outer label 16100 and then forwards the packet to P3.
  5. After receiving the packet, P3 searches its label forwarding table according to the outer label 16100 and then forwards the packet to PE3.
  6. After receiving the packet, PE3 searches its label forwarding table according to the outer label and determines that the label 16100 is a local label. Therefore, PE3 removes the label. As 16100 is the bottommost label, PE3 performs further packet processing according to IP packet header information.
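The forwarding steps above can be modeled with a simple per-node label table (an illustrative Python sketch using the label values from Figure 1-2621; the "swap_self" action models a node-SID swap in which the outgoing label value happens to stay unchanged):

```python
# Per-node label actions modeled from Figure 1-2621. Each entry maps
# an incoming top label to (action, next_hop); the table structure is
# illustrative only.
LFIB = {
    "PE2": {100:   ("swap_self", "P2")},   # transit for P2's node SID
    "P2":  {100:   ("pop", None),          # P2's own node SID
            9204:  ("pop", "P4")},         # P2's adjacency SID to P4
    "P4":  {16100: ("swap_self", "P3")},   # transit for PE3's prefix SID
    "P3":  {16100: ("swap_self", "PE3")},
    "PE3": {16100: ("pop", None)},         # local label, bottom of stack
}

def simulate(first_hop, stack):
    """Trace a packet along the TI-LFA backup path, returning the
    sequence of (node, label stack on arrival) pairs."""
    node, stack = first_hop, list(stack)
    trace = [(node, list(stack))]
    while stack:
        action, next_hop = LFIB[node][stack[0]]
        if action == "pop":
            stack.pop(0)
        if next_hop is not None:
            node = next_hop
            trace.append((node, list(stack)))
    return trace
```

Running the simulation with PE1's label stack [100, 9204, 16100] reproduces the hop-by-hop behavior described in steps 2 through 6.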
TI-LFA FRR Usage Scenarios
Table 1-1069 TI-LFA FRR usage scenarios

TI-LFA FRR Protection

Description

Deployment

TI-LFA FRR protects traffic transmitted in IP forwarding mode.

Traffic is transmitted over the primary path in IP forwarding mode, and a TI-LFA FRR backup path is computed.

  1. Establish an IS-IS neighbor relationship between each pair of directly connected nodes, deploy SR networkwide, and configure a prefix SID for the specified P node.
  2. Enable TI-LFA FRR on the source node.

TI-LFA FRR protects traffic transmitted over an SR tunnel.

Traffic is transmitted over a primary SR tunnel, and a TI-LFA FRR backup path is computed.

  1. Establish an IS-IS neighbor relationship between each pair of directly connected nodes, deploy SR networkwide, and configure prefix SIDs on P and destination nodes.
  2. Enable TI-LFA FRR on the source node.
Advantages of TI-LFA FRR
SR-based TI-LFA FRR has the following advantages:
  1. Meets basic requirements for IP FRR fast convergence.
  2. Theoretically supports all scenarios.
  3. Uses an algorithm of moderate complexity.
  4. Directly uses a post-convergence path as the backup forwarding path, involving no intermediate convergence state like other FRR technologies.

TI-LFA in an SR-MPLS Scenario Where Multiple Nodes Advertise the Same Route

This section uses the network shown in Figure 1-2622 as an example to describe how TI-LFA is implemented in a scenario where multiple nodes advertise the same route.

On the network shown in Figure 1-2622, DeviceC and DeviceF function as boundary devices to forward routes between IS-IS process 1 and IS-IS process 2. DeviceG advertises an intra-area route, and DeviceC and DeviceF import the route and re-advertise it so that it can be flooded to IS-IS process 1. If TI-LFA is enabled on DeviceA, DeviceA cannot compute the backup next hop of the route 10.10.10.10/32. This is because TI-LFA computation assumes that each route is advertised by only one node, a condition that is not met in this scenario.

To address this issue, a virtual node is constructed between DeviceC and DeviceF. In so doing, the original scenario where multiple nodes advertise the same route is converted into one where the route is advertised by only one node. The backup next hop to the virtual node is then computed through TI-LFA, and the route advertised by multiple nodes inherits the backup next hop from the virtual node.

Figure 1-2622 TI-LFA in an SR-MPLS scenario where multiple nodes advertise the same route

For example, both DeviceC and DeviceF advertise a route with the prefix of 10.10.10.10/32, and TI-LFA is enabled on DeviceA. Because 10.10.10.10/32 is advertised by two nodes (DeviceC and DeviceF), DeviceA cannot compute the backup next hop of the route 10.10.10.10/32.

Based on the two nodes advertising 10.10.10.10/32, you can create a virtual node that connects to these two nodes over virtual links. The virtual node advertises a route whose prefix is 10.10.10.10/32 and whose cost is the smaller one between the costs of the routes advertised by DeviceC and DeviceF. For example, if the costs of the routes advertised by DeviceC and DeviceF are 5 and 10, respectively, the cost of the route advertised by the virtual node is 5. In this case, the cost of the link from DeviceC to the virtual node is 0, the cost of the link from DeviceF to the virtual node is 5, and the costs of the links from the virtual node to DeviceC and DeviceF are both the maximum value (Max-cost). On DeviceA, configure DeviceC and DeviceF as invalid sources for the route 10.10.10.10/32 so that only the virtual node is considered as the route advertisement node. The backup next hop to the virtual node is then computed through the TI-LFA algorithm.
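The virtual-node construction above can be sketched as follows (an illustrative Python model; the Max-cost placeholder value is an assumption):

```python
MAX_COST = 16777215  # illustrative placeholder for Max-cost

def build_virtual_node(advertisers):
    """Derive the virtual node used when several nodes advertise the
    same prefix, modeled from the text: the virtual node inherits the
    smallest advertised route cost; each real node links to the virtual
    node with cost (own cost - min cost); the reverse links from the
    virtual node get the maximum cost so they are never used.

    advertisers: {node_name: advertised_route_cost}
    """
    min_cost = min(advertisers.values())
    return {
        "route_cost": min_cost,
        "to_virtual": {n: c - min_cost for n, c in advertisers.items()},
        "from_virtual": {n: MAX_COST for n in advertisers},
    }
```

With DeviceC advertising cost 5 and DeviceF advertising cost 10, this yields exactly the values in the text: the virtual node advertises cost 5, the DeviceC link costs 0, and the DeviceF link costs 5.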

OSPF SR-MPLS BE over GRE to Achieve TI-LFA

Definition

The Generic Routing Encapsulation (GRE) protocol encapsulates the data packets of certain network layer protocols such as IPv4 and IPv6, enabling these packets to be transmitted over an IPv4 network.

Topology-independent loop-free alternate (TI-LFA) fast reroute (FRR) provides link and node protection for SR tunnels. If a link or node fails, this function enables traffic to be rapidly switched to a backup path, minimizing traffic loss.

Purpose

On an AGG open ring, no intermediate link exists between two AGGs. In this case, a GRE tunnel (IPv4) can be created between AGGs to support TI-LFA protection.

Networking Description

In Area N on the network shown in Figure 1-2623, AGG1 and AGG2 form part of an open ring, with no intermediate link deployed between the two. If the primary path is Core 2→AGG2→ACC3→ACC2 and the link between ACC3 and AGG2 fails, the existing protection scheme fails to provide protection. As a result, traffic is interrupted for a long time.

Figure 1-2623 SR-MPLS BE failure

To resolve the preceding issue, deploy a GRE tunnel between AGG2 and AGG1 and configure it to reside in Area N, as shown in Figure 1-2624. In this way, a closed ring is formed in Area N, thereby supporting fault protection. To avoid protection failures in this scenario, enable the physical link of the GRE tunnel in Area 0.

Figure 1-2624 SR-MPLS BE over GRE to achieve TI-LFA

The forwarding process is as follows:

  1. AGG2 computes that the primary path is AGG2→ACC3→ACC2 and the backup path is AGG2→AGG1→ACC1→ACC2. The outbound interface of the backup path is a GRE tunnel interface. The TI-LFA-computed P and Q nodes are AGG1 and ACC1, respectively. In addition, the adjacency label between the P and Q nodes is 48061.
  2. When AGG2 detects a link failure between ACC3 and itself, it quickly switches traffic to the backup path for forwarding. After receiving a data packet carrying the label 19002, AGG2 queries the label forwarding table of the backup path and finds that the outbound interface is a GRE tunnel interface, the label stack of the packet contains only the label 48061, and the incoming label to be carried by the packet from the Q node to ACC2 is 16002. In this case, AGG2 removes the label 19002 from the packet and adds the labels 16002 and 48061 in sequence to the packet. Finally, it adds a GRE header to the packet and sends the packet through the GRE tunnel interface.
  3. After receiving the packet, AGG1 removes the GRE header, searches the label forwarding table according to the label 48061, removes the adjacency label 48061, and then sends the packet through the outbound interface corresponding to AGG1→ACC1.
  4. After receiving the packet, ACC1 searches the label forwarding table, swaps the label 16002 with the label 17002, and then sends the packet through the outbound interface corresponding to ACC1→ACC2.
  5. After receiving the packet, ACC2 — the destination of the packet — searches the label forwarding table and removes the label 17002.
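Steps 2 and 3 above can be modeled as follows (an illustrative Python sketch; the packet is represented as a dictionary, and all field names are assumptions):

```python
def encapsulate_backup(packet):
    """Model AGG2's switchover handling: pop the incoming label 19002,
    push the Q-node-to-ACC2 label 16002 (inner) and the P-Q adjacency
    label 48061 (outer), then add a GRE header for the tunnel to AGG1."""
    assert packet["labels"][0] == 19002
    labels = packet["labels"][1:]            # remove 19002
    labels = [48061, 16002] + labels         # 48061 outer, 16002 inner
    return {"gre": {"dst": "AGG1"}, "labels": labels,
            "payload": packet["payload"]}

def decapsulate_at_agg1(packet):
    """Model AGG1's handling: remove the GRE header, then pop the local
    adjacency label 48061 and forward toward ACC1."""
    assert packet["labels"][0] == 48061
    return {"labels": packet["labels"][1:], "payload": packet["payload"]}
```

After AGG1's processing, the packet carries only the label 16002, which ACC1 then swaps with 17002 on the way to ACC2, as in steps 4 and 5.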

IS-IS SR-MPLS BE over GRE to Achieve TI-LFA

Definition

The Generic Routing Encapsulation (GRE) protocol encapsulates the data packets of certain network layer protocols such as IPv4 and IPv6, enabling these packets to be transmitted over an IPv4 network.

Topology-independent loop-free alternate (TI-LFA) fast reroute (FRR) provides link and node protection for SR tunnels. If a link or node fails, this function enables traffic to be rapidly switched to a backup path, minimizing traffic loss.

Purpose

On an AGG open ring, no intermediate link exists between two AGGs. In this case, a GRE tunnel (IPv4) can be created between AGGs to support TI-LFA protection.

Networking Description

In Area N on the network shown in Figure 1-2625, AGG1 and AGG2 form part of an open ring, with no intermediate link deployed between the two. If the primary path is Core 2→AGG2→ACC3→ACC2 and the link between ACC3 and AGG2 fails, the existing protection scheme fails to provide protection. As a result, traffic is interrupted for a long time.

Figure 1-2625 SR-MPLS BE failure

To resolve the preceding issue, deploy a GRE tunnel between AGG2 and AGG1 and configure it to reside in Area N, as shown in Figure 1-2626. In this way, a closed ring is formed in Area N, thereby supporting fault protection. To avoid protection failures in this scenario, enable the physical link of the GRE tunnel in Area 0.

Figure 1-2626 SR-MPLS BE over GRE to achieve TI-LFA

The forwarding process is as follows:

  1. AGG2 computes that the primary path is AGG2→ACC3→ACC2 and the backup path is AGG2→AGG1→ACC1→ACC2. The outbound interface of the backup path is a GRE tunnel interface. The TI-LFA-computed P and Q nodes are AGG1 and ACC1, respectively. In addition, the adjacency label between the P and Q nodes is 48061.
  2. When AGG2 detects a link failure between ACC3 and itself, it quickly switches traffic to the backup path for forwarding. After receiving a data packet carrying the label 19002, AGG2 queries the label forwarding table of the backup path and finds that the outbound interface is a GRE tunnel interface, the label stack of the packet contains only the label 48061, and the incoming label to be carried by the packet from the Q node to ACC2 is 16002. In this case, AGG2 removes the label 19002 from the packet and adds the labels 16002 and 48061 in sequence to the packet. Finally, it adds a GRE header to the packet and sends the packet through the GRE tunnel interface.
  3. After receiving the packet, AGG1 removes the GRE header, searches the label forwarding table according to the label 48061, removes the adjacency label 48061, and then sends the packet through the outbound interface corresponding to AGG1→ACC1.
  4. After receiving the packet, ACC1 searches the label forwarding table, swaps the label 16002 with the label 17002, and then sends the packet through the outbound interface corresponding to ACC1→ACC2.
  5. After receiving the packet, ACC2 — the destination of the packet — searches the label forwarding table and removes the label 17002.

Anycast FRR

Anycast SID

The same SID advertised by all routers in a group is called an anycast SID. On the network shown in Figure 1-2627, DeviceD and DeviceE both reside on the egress of an SR area, in which all devices can reach the non-SR area through either DeviceD or DeviceE. Therefore, the two devices can back up each other. In this situation, DeviceD and DeviceE can be configured in the same group and advertise the same prefix SID, known as an anycast SID.

The next hop of the anycast SID is DeviceD, which has the smallest IGP cost in the router group. DeviceD is called the optimal node that advertises the anycast SID, and the other device in the group is a backup node. If the primary next-hop link or direct neighbor of DeviceD fails, traffic can still reach a device that advertises the anycast SID through a protection path. This device can be the anycast source that shares the same primary next hop or another anycast source. When VPN traffic is forwarded over an SR LSP, the same VPN label must be configured on all the anycast devices.

Figure 1-2627 Anycast SID
Anycast FRR

Anycast FRR allows multiple nodes to advertise the same prefix SID. Common FRR algorithms can only use the SPT to compute a backup next hop, which applies only to scenarios where a route is advertised by a single node rather than multiple nodes.

When a route is advertised by multiple nodes, these nodes must be converted to a single node before a backup next hop is computed for a prefix SID. Anycast FRR constructs a virtual node to represent the multiple nodes that advertise the same route and uses the TI-LFA algorithm to compute a backup next hop to the virtual node. The anycast prefix SID inherits the backup next hop from the virtual node. This solution does not involve any modification of the algorithm for computing the backup next hop. It retains the loop-free trait so that no loop occurs between the computed backup next hop and the primary next hop of the pre-convergence peripheral node.

Figure 1-2628 IGP FRR networking in a scenario where multiple nodes advertise the same route

As shown in Figure 1-2628(a), the cost of the link from DeviceA to DeviceB is 5, and that of the link from DeviceA to DeviceC is 10. DeviceB and DeviceC advertise the same route 10.1.1.0/24. TI-LFA FRR is enabled on DeviceA. Because DeviceA does not meet TI-LFA requirements, it cannot compute a backup next hop for the route 10.1.1.0/24. To address this issue, TI-LFA FRR with multiple nodes advertising the same route can be used.

As shown in Figure 1-2628(b), this function is implemented as follows:

  1. A virtual node is constructed between DeviceB and DeviceC and connected to both devices over links. The cost values of the links from DeviceB and DeviceC to the virtual node are both 0, whereas those of the links from the virtual node to DeviceB and DeviceC are both infinite.
  2. The virtual node advertises a prefix of 10.1.1.0/24. This means that the route is advertised by a single node.
  3. DeviceA uses the TI-LFA algorithm to compute a backup next hop to the virtual node. The route 10.1.1.0/24 inherits the computation result. On the network shown in Figure 1-2628(b), DeviceA computes two paths to the virtual node: the primary path is from DeviceA to DeviceB, and the backup path is from DeviceA to DeviceC.

SR-MPLS Microloop Avoidance

Due to the distributed nature of IGP link state databases (LSDBs), devices may converge at different times after a failure occurs. This may result in microloops, a kind of transient loop that disappears after all the nodes on the forwarding path have converged. Microloops cause a series of issues, including packet loss, jitter, and out-of-order packets, and therefore must be avoided.

SR provides a method to help avoid potential loops while minimizing impacts on the network. Specifically, if a network topology change may cause a loop, SR first allows network nodes to insert loop-free segment lists to steer traffic to the destination address. Then normal traffic forwarding is restored only after all the involved network nodes converge. This can effectively avoid loops on the network.

SR-MPLS Local Microloop Avoidance in a Traffic Switchover Scenario

In a traffic switchover scenario, a local microloop may be formed when a node adjacent to the failed node converges earlier than the other nodes on the network. On the network shown in Figure 1-2629, TI-LFA is deployed on all nodes. If node B fails, node A undergoes the following process to perform convergence for the route to node C:

  1. Node A detects the failure and enters the TI-LFA FRR process, during which node A inserts the repair list <105, 16056> into the packet to direct the packet to the TI-LFA-computed P node (node E). The packet is therefore first forwarded to the next-hop node D, carrying the SIDs <105, 16056, 103>.
  2. After performing route convergence, node A searches for the route to node C and forwards the packet to the next-hop node D through the route. In this case, the packet does not carry any repair list and is directly forwarded based on the node SID 103.
  3. If node D has not yet converged when receiving the packet, node A is considered as the next hop of the route from node D to node C. As a result, node D forwards the packet back to node A, which in turn causes a microloop between the two nodes.
Figure 1-2629 SR-MPLS local microloop in a traffic switchover scenario

According to the preceding convergence process, the microloop occurs when node A converges, quits the TI-LFA FRR process, and then implements normal forwarding before other nodes on the network converge. The issue is that node A converges earlier than the other nodes, so by postponing its convergence, the microloop can be avoided. As TI-LFA backup paths are loop-free, the packet can be forwarded along a TI-LFA backup path for a period of time. Node A can then wait for the other nodes to complete convergence before quitting the TI-LFA FRR process and performing convergence, thereby avoiding the microloop.

Figure 1-2630 SR-MPLS local microloop avoidance in a traffic switchover scenario

After microloop avoidance is deployed on the network shown in Figure 1-2630, the convergence process is as follows:

  1. After node A detects the failure, it enters the TI-LFA FRR process, encapsulating the repair list <105, 16056> into the packet and forwarding the packet along the TI-LFA backup path, with node D as the next hop.
  2. Node A starts the timer T1. During T1, node A does not respond to topology changes, the forwarding table remains unchanged, and the TI-LFA backup path continues to be used for packet forwarding. Other nodes on the network converge properly.
  3. When T1 expires, other nodes on the network have already converged. Node A can now converge and quit the TI-LFA FRR process to forward the packet along a post-convergence path.
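The delayed-convergence behavior above can be summarized as a small state function (an illustrative sketch, not device logic):

```python
def forwarding_path(failure_detected, t1_expired):
    """Modeled from the text: after detecting the failure, the PLR
    forwards along the TI-LFA backup path and keeps its forwarding
    table frozen until timer T1 expires; only then does it converge,
    quit the TI-LFA FRR process, and use the post-convergence path."""
    if not failure_detected:
        return "primary-path"
    if not t1_expired:
        return "ti-lfa-backup-path"   # forwarding table unchanged
    return "post-convergence-path"
```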

The preceding solution can protect only a PLR against microloops in a traffic switchover scenario. This is because only a PLR can enter the TI-LFA FRR process and forward packets along a TI-LFA backup path. In addition, this solution applies only to single point of failure scenarios. In a multiple points of failure scenario, the TI-LFA backup path may also be adversely affected and therefore cannot be used for packet forwarding.

If microloop avoidance is configured on the ingress in a traffic switchover scenario, a delayed route switchover can be performed when the following conditions are met:
  • A local interface fails, or BFD goes down.
  • No new network topology change occurs during the delay.
  • A backup next hop is available for the involved route.
  • The outbound interface of the primary next hop of the involved route is the failed interface.
  • During the multi-node route convergence delay, if the source node advertising the route is changed, delay measurement stops.
SR-MPLS Local Microloop Avoidance in a Traffic Switchback Scenario

Besides traffic switchover scenarios, microloops may also occur in traffic switchback scenarios. The following uses the network shown in Figure 1-2631 as an example to describe how a microloop occurs in a traffic switchback scenario. The process is as follows:

  1. Node A sends a packet to destination node F along the path A->B->C->E->F. If the B-C link fails, node A sends the packet to destination node F along the post-convergence path A->B->D->E->F.
  2. After the B-C link recovers, a node (for example, node D) first converges.
  3. When receiving the packet sent from node A, node B has not yet converged and therefore still forwards the packet to node D along the pre-recovery path, as shown in Figure 1-2631.
  4. Because node D has already converged, it forwards the packet to node B along the post-recovery path, resulting in a microloop between the two nodes.

Traffic switchback does not involve the TI-LFA FRR process. Therefore, delayed convergence cannot be used for microloop avoidance in such scenarios.

Figure 1-2631 SR-MPLS local microloop in a traffic switchback scenario

According to the preceding process, a transient loop occurs when node D converges earlier than node B during recovery. Node D is unable to predict link up events on the network and so is unable to pre-compute any loop-free path for such events. To avoid loops that may occur during traffic switchback, node D needs to be able to converge to a loop-free path.

On the network shown in Figure 1-2632, after node D detects that the B-C link goes up, it computes the D->B->C->E->F path to destination node F.

Since the B-C link up event does not affect the path from node D to node B, it can be proved that the path is loop-free.

Topology changes triggered by a link up event affect only the post-convergence forwarding path that passes through the link. As such, if the post-convergence forwarding path from node D to node B does not pass through the B-C link, it is not affected by the B-C link up event. Similarly, topology changes triggered by a link down event affect only the pre-convergence forwarding path that passes through the link.
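The rule above (a forwarding path is affected by a link event only if it traverses that link) can be sketched as follows (illustrative Python; node names follow the figure):

```python
def path_affected(path, link):
    """Return True if the forwarding path traverses the given link in
    either direction, i.e. if the path is affected by an up/down event
    on that link. path: ordered node list; link: (u, v) pair."""
    edges = list(zip(path, path[1:]))
    return tuple(link) in edges or tuple(reversed(link)) in edges
```

Applied to the example, the post-convergence path D->B->C->E->F traverses the B-C link and is affected by the link up event, whereas the sub-paths D->B and C->E->F are not, which is why only the B-to-C segment needs to be specified.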

A loop-free path from node D to node F can be constructed as follows. The path from node D to node B is not affected by the B-C link up event and therefore does not need to be specified. Similarly, because the path from node C to node F is not affected by the event, it is definitely loop-free and does not need to be specified either. Only the segment from node B to node C is affected. Given this, to compute the loop-free path from node D to node F, only the path from node B to node C needs to be specified. A loop-free path from node D to node F can therefore be formed simply by inserting the node SID of node B and the adjacency SID that instructs packet forwarding from node B to node C into the post-convergence path of node D.

Figure 1-2632 SR-MPLS local microloop avoidance in a traffic switchback scenario

After microloop avoidance is deployed, the convergence process is as follows:

  1. After the B-C link recovers, a node (for example, node D) first converges.
  2. Node D starts the timer T1. Before T1 expires, node D computes a microloop avoidance segment list <102, 16023> for the packet destined for node F.
  3. When receiving the packet sent from node A, node B has not yet converged and therefore still forwards the packet to node D along the pre-recovery path.
  4. Node D inserts the microloop avoidance segment list <102, 16023> into the packet and forwards the packet to node B.

    Although the packet is sent from node B to node D and then back to node B, no loop occurs because node D has inserted the segment list <102, 16023> into the packet.

  5. According to the node SID 102, node B identifies the label as its own and removes it. Then, according to the adjacency SID 16023, node B removes the label 16023 and forwards the packet to node C through the outbound interface specified by the adjacency SID.
  6. According to the node SID 106, node C forwards the packet to destination node F along the shortest path.

As previously described, node D inserts the microloop avoidance segment list <102, 16023> into the packet, avoiding loops.

When T1 of node D expires, other nodes on the network have already converged, allowing node A to forward the packet along the post-convergence path A->B->C->E->F.
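Node B's handling of the microloop avoidance segment list in the preceding example can be sketched as follows (an illustrative Python model; the function name is an assumption):

```python
def process_at_node_b(stack):
    """Model node B's processing from the switchback example: 102 is
    node B's own node SID (removed), and 16023 is its B-to-C adjacency
    SID (removed, with the packet forced out of the B-C interface).
    Returns the next-hop node and the remaining label stack."""
    assert stack[:2] == [102, 16023]
    remaining = stack[2:]          # both microloop avoidance labels removed
    return ("C", remaining)        # packet leaves toward node C
```

With node F's node SID 106 at the bottom of the stack, node C then forwards the packet to node F along the shortest path, as in step 6.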

SR-MPLS Remote Microloop Avoidance in a Traffic Switchover Scenario

In a traffic switchover scenario, a remote microloop may also occur between two nodes on a packet forwarding path if the node close to the failure point converges earlier than one farther from the point. The following uses the network shown in Figure 1-2633 as an example to describe how a remote microloop occurs in a traffic switchover scenario. The process is as follows:

  1. After detecting a C-E link failure, node G first converges, whereas node B has not yet converged.
  2. Nodes A and B forward the packet to node G along the path used before the failure occurs.
  3. Because node G has already converged, it forwards the packet to node B according to the next hop of the corresponding route, resulting in a microloop between the two nodes.
Figure 1-2633 SR-MPLS remote microloop in a traffic switchover scenario

To minimize computation workload, a network node typically pre-computes a loop-free path only when a directly connected link or node fails. That is, no loop-free path can be pre-computed against any other potential failure on the network. Given this, the microloop can be avoided only by installing a loop-free path after node G converges.

As previously mentioned, topology changes triggered by a link down event affect only the pre-convergence forwarding path that passes through the link. Therefore, if the path from a node to the destination node does not pass through the failed link before convergence, it is absolutely not affected by the link failure. According to the topology shown in Figure 1-2633, the path from node G to node D is not affected by the C-E link failure. Therefore, this path does not need to be specified for computing a loop-free path from node G to node F. Similarly, the path from node E to node F is not affected by the C-E link failure, and therefore does not need to be specified, either. Because only the path from node D to node E is affected by the C-E link failure, you only need to specify the node SID 104 of node D and the adjacency SID 16045 identifying the path from node D to node E to determine the loop-free path, as shown in Figure 1-2634.

Figure 1-2634 SR-MPLS remote microloop avoidance in a traffic switchover scenario

After microloop avoidance is deployed, the convergence process is as follows:

  1. After detecting a C-E link failure, node G first converges.
  2. Node G starts the timer T1. Before T1 expires, node G computes a microloop avoidance segment list <104, 16045> for the packet destined for node F.
  3. When receiving the packet sent from node A, node B has not yet converged and therefore still forwards the packet to node G along the path used before the failure occurs.
  4. Node G inserts the microloop avoidance segment list <104, 16045> into the packet and forwards the packet to node B.

    Although the packet is sent from node B to node G and then back to node B, no loop occurs because node G has inserted the segment list <104, 16045> into the packet.

  5. According to the instruction bound to the node SID 104, node B forwards the packet to node D.
  6. According to the instruction bound to the adjacency SID 16045, node D forwards the packet to node E through the outbound interface specified by the SID and then removes the outer label 16045.
  7. According to the node SID 106, node E forwards the packet to destination node F along the shortest path.

As previously described, node G inserts the microloop avoidance segment list <104, 16045> into the packet, avoiding loops.

When T1 of node G expires, other nodes on the network have already converged, allowing node A to forward the packet along the post-convergence path A->B->D->E->F.
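The segment-list-based detour described in the steps above can be sketched as a small simulation. The topology, SIDs (104, 16045, 106), and next-hop tables below are illustrative assumptions reconstructed from the example, not an actual forwarding implementation:

```python
# Minimal sketch of how node G's microloop avoidance segment list <104, 16045>
# steers a packet during the T1 window. A node SID is forwarded along the
# shortest path to its owner; an adjacency SID forces a one-hop link.

NODE_SIDS = {104: "D", 106: "F"}          # node SIDs from the example
ADJ_SIDS = {16045: ("D", "E")}            # adjacency SID for the D->E link

# Assumed shortest-path next hops toward each node SID owner. Forwarding by
# node SID 104 toward D is unaffected by the C-E failure, so it is loop-free
# even though node B has not yet converged for plain IP forwarding.
NEXT_HOP = {("G", "D"): "B", ("B", "D"): "D", ("E", "F"): "F"}

def forward(node, stack):
    """Process the top SID at `node` and return (next_node, new_stack)."""
    top = stack[0]
    if top in ADJ_SIDS:                    # adjacency SID: strict one-hop
        local, neighbor = ADJ_SIDS[top]
        assert node == local, "adjacency SID must be processed by its owner"
        return neighbor, stack[1:]         # pop the label, send over the link
    owner = NODE_SIDS[top]
    if node == owner:                      # reached the SID owner: pop
        return node, stack[1:]
    return NEXT_HOP[(node, owner)], stack  # else forward along the SPF path

# Node G inserts <104, 16045>; 106 is the packet's original SID for node F.
node, stack = "G", [104, 16045, 106]
path = [node]
while stack:
    node, stack = forward(node, stack)
    if path[-1] != node:
        path.append(node)
print(path)   # ['G', 'B', 'D', 'E', 'F'] - the packet reaches F loop-free
```

The packet does pass through node B again after returning from node G, but because the top SID now steers it strictly toward node D, it never re-enters the B-G loop.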

SR-MPLS Remote Microloop Avoidance in a Traffic Switchback Scenario

The following uses the network shown in Figure 1-2633 as an example to describe how a remote microloop occurs in a traffic switchback scenario. The process is as follows:

  1. After the C-E link recovers, node B first converges, whereas node G has not yet converged.
  2. Node B forwards the packet to node G.
  3. Because node G has not yet converged, it still forwards the packet to node B along the pre-recovery path, resulting in a microloop between the two nodes.

To avoid the microloop, start the timer T1 on node B. Then, before T1 expires, node B inserts the node SID of node G and the adjacency SID identifying the path between nodes G and C into the packet destined for node F to ensure that the packet can be forwarded to node C. In this way, node C can forward the packet to destination node F according to the node SID 106 along the shortest path.

Microloop Avoidance for Common IP Routes

The preceding microloop avoidance functions mainly apply to SR-MPLS routes carrying prefix SIDs. In addition to those routes, common IP routes that do not carry prefix SIDs may also encounter loops during unordered IS-IS convergence. To address this issue, microloop avoidance for common IP routes is introduced.

This function is implemented in the same way as microloop avoidance for SR-MPLS routes. Specifically, during IP forwarding, an SR-MPLS label stack is added to packets on convergence nodes, so that the packets are forwarded over a strict path within a period of time. After IS-IS convergence is completed, the packets are switched to the shortest path for forwarding, thereby effectively avoiding microloops.

Microloop Avoidance in a Scenario Where Multiple SR-MPLS Nodes Advertise the Same Route

The following uses Figure 1-2635 as an example to describe how microloop avoidance is implemented in a scenario where multiple SR-MPLS nodes advertise the same route.

Nodes C and F import the same SR-MPLS route carrying a prefix SID. After detecting a B-C link fault, a node (for example, node B) first completes convergence, whereas node A has not yet converged. Node A forwards a packet to node B along the path used before the fault occurs. Because node B has completed convergence, it forwards the packet to node A according to the next hop of the corresponding route, resulting in a microloop between the two nodes.

Figure 1-2635 Microloop avoidance in a scenario where multiple SR-MPLS nodes advertise the same route

To address the preceding issue, use the microloop avoidance function that applies to scenarios where multiple nodes advertise the same route. When the preferred destination node of the packet sent to 10.10.10.10/32 is changed from node C to node F but the convergence path from node B to node E does not change, the timer T1 can be started on node B. Before T1 expires, the node SID of node E and the adjacency SID between nodes E and F are added to the packet destined for node F to ensure that the packet can be forwarded to node F.

SR-MPLS OAM

SR-MPLS operation, administration, and maintenance (OAM) monitors LSP connectivity and rapidly detects failures. It is mainly implemented through ping and tracert functions.

During an SR-MPLS ping or tracert test, MPLS Echo Request packets are forwarded based on MPLS labels in the forward direction, and MPLS Echo Reply packets are returned over a multi-hop IP path. You can specify a source IP address for a ping or tracert test. If no source IP address is specified, the local MPLS LSR ID is used as the source IP address. The returned MPLS Echo Reply packets use the source IP address in the MPLS Echo Request packets as their destination IP address.

SR-MPLS BE Ping
On the network shown in Figure 1-2636, PE1, P1, P2, and PE2 all support SR-MPLS. An SR-MPLS BE tunnel is established between PE1 and PE2.
Figure 1-2636 SR-MPLS BE ping/tracert
The process of initiating a ping test on the SR-MPLS BE tunnel from PE1 is as follows:
  1. PE1 initiates a ping test and checks whether the specified tunnel is of the SR-MPLS BE IPv4 type.
    • If the tunnel type is not SR-MPLS BE IPv4, PE1 reports an error message indicating a tunnel type mismatch and stops the ping test.
    • If the tunnel type is SR-MPLS BE IPv4, PE1 continues with the following operations.
  2. PE1 constructs an MPLS Echo Request packet encapsulating the outer label of the initiator and carrying destination address 127.0.0.0/8 in the IP header of the packet.
  3. PE1 forwards the packet to P1. After receiving the packet, P1 swaps the outer MPLS label of the packet and forwards the packet to P2.
  4. Similar to P1, P2 swaps the outer MPLS label of the received packet and determines whether it is the penultimate hop. If yes, P2 removes the outer label and forwards the packet to PE2. PE2 sends the packet to the Rx/Tx module for processing.
  5. PE2 returns an MPLS Echo Reply packet to PE1 and generates the ping test result.
SR-MPLS BE Tracert
On the network shown in Figure 1-2636, the process of initiating a tracert test on the SR-MPLS BE tunnel from PE1 is as follows:
  1. PE1 initiates a tracert test and checks whether the specified tunnel is of the SR-MPLS BE IPv4 type.
    • If the tunnel type is not SR-MPLS BE IPv4, PE1 reports an error message indicating a tunnel type mismatch and stops the tracert test.
    • If the tunnel type is SR-MPLS BE IPv4, PE1 continues with the following operations.
  2. PE1 constructs an MPLS Echo Request packet encapsulating the outer label of the initiator and carrying destination address 127.0.0.0/8 in the IP header of the packet.
  3. PE1 forwards the packet to P1. After receiving the packet, P1 determines whether the TTL–1 value in the outer label of the packet is 0.
    • If the TTL–1 value is 0, an MPLS TTL timeout occurs, and P1 sends the packet to the Rx/Tx module for processing.
    • If the TTL–1 value is greater than 0, P1 swaps the outer MPLS label of the packet, searches the forwarding table for the outbound interface, and forwards the packet to P2.
  4. Similar to P1, P2 also performs the following operations:
    • If the TTL–1 value is 0, an MPLS TTL timeout occurs, and P2 sends the packet to the Rx/Tx module for processing.
    • If the TTL–1 value is greater than 0, P2 swaps the outer MPLS label of the received packet and determines whether it is the penultimate hop. If yes, P2 removes the outer label, searches the forwarding table for the outbound interface, and forwards the packet to PE2.
  5. PE2 sends the packet to the Rx/Tx module for processing, returns an MPLS Echo Reply packet to PE1, and generates the tracert test result.
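The per-hop TTL check that drives the tracert steps above can be sketched as follows; the function and action names are illustrative, not product APIs:

```python
# Illustrative sketch of the TTL check a transit node performs on an
# MPLS Echo Request during an SR-MPLS BE tracert.

def process_at_transit(ttl, is_penultimate_hop):
    """Return (action, remaining_ttl) for one transit node."""
    ttl -= 1
    if ttl == 0:
        # MPLS TTL timeout: the node hands the packet to its Rx/Tx module,
        # which answers the initiator so this hop appears in the result.
        return "send_to_rx_tx", ttl
    if is_penultimate_hop:
        return "pop_label_and_forward", ttl   # penultimate hop pops the label
    return "swap_label_and_forward", ttl      # otherwise swap and forward

# PE1 sends probes with increasing TTLs; each probe expires one hop farther.
print(process_at_transit(1, False))  # ('send_to_rx_tx', 0): expires at P1
print(process_at_transit(2, False))  # ('swap_label_and_forward', 1): P1 passes
                                     # it on, and it will expire at P2
```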
SR-MPLS TE Ping
On the network shown in Figure 1-2637, PE1, P1, and P2 all support SR-MPLS. An SR-MPLS TE tunnel is established between PE1 and PE2. The devices assign adjacency labels as follows:
  • PE1 assigns the adjacency label 9001 to the PE1-P1 adjacency.
  • P1 assigns the adjacency label 9002 to the P1-P2 adjacency.
  • P2 assigns the adjacency label 9005 to the P2-PE2 adjacency.
Figure 1-2637 SR-MPLS TE ping/tracert
The process of initiating a ping test on the SR-MPLS TE tunnel from PE1 is as follows:
  1. PE1 initiates a ping test and checks whether the specified tunnel is of the SR-MPLS TE type.
    • If the tunnel type is not SR-MPLS TE, PE1 reports an error message indicating a tunnel type mismatch and stops the ping test.
    • If the tunnel type is SR-MPLS TE, PE1 continues with the following operations.
  2. PE1 constructs an MPLS Echo Request packet encapsulating label information about the entire tunnel and carrying destination address 127.0.0.0/8 in the IP header of the packet.
  3. PE1 forwards the packet to P1. After receiving the packet, P1 removes the outer label 9002 and forwards the packet to P2.
  4. P2 removes the outer label 9005 of the received packet and forwards the packet to PE2 for processing.
  5. PE2 returns an MPLS Echo Reply packet to PE1.
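The label operations along this ping path can be sketched as a strict label-stack walk. The adjacency table mirrors the labels listed above (9002 for P1-P2, 9005 for P2-PE2); the rest is illustrative:

```python
# Sketch of the label operations in the SR-MPLS TE ping above: the ingress
# pushes the adjacency labels for the remaining hops, and each node pops the
# outer label that identifies its outgoing adjacency.

ADJ = {9002: ("P1", "P2"), 9005: ("P2", "PE2")}   # adjacency labels above

def pop_and_forward(node, stack):
    """Pop the outer adjacency label at `node` and forward over that link."""
    label = stack[0]
    local, neighbor = ADJ[label]
    assert node == local, "an adjacency label is processed by its owner"
    return neighbor, stack[1:]

# PE1 forwards the Echo Request over its PE1-P1 adjacency with stack <9002, 9005>.
node, stack = "P1", [9002, 9005]
hops = ["PE1", node]
while stack:
    node, stack = pop_and_forward(node, stack)
    hops.append(node)
print(hops)   # ['PE1', 'P1', 'P2', 'PE2']
```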
SR-MPLS TE Tracert
On the network shown in Figure 1-2637, the process of initiating a tracert test on the SR-MPLS TE tunnel from PE1 is as follows:
  1. PE1 initiates a tracert test and checks whether the specified tunnel is of the SR-MPLS TE type.
    • If the tunnel type is not SR-MPLS TE, PE1 reports an error message indicating a tunnel type mismatch and stops the tracert test.
    • If the tunnel type is SR-MPLS TE, PE1 continues with the following operations.
  2. PE1 constructs an MPLS Echo Request packet encapsulating label information about the entire tunnel and carrying destination address 127.0.0.0/8 in the IP header of the packet.
  3. PE1 forwards the packet to P1. After receiving the packet, P1 determines whether the TTL–1 value in the outer label of the packet is 0.
    • If the TTL–1 value is 0, an MPLS TTL timeout occurs, and P1 sends the packet to the Rx/Tx module for processing.
    • If the TTL–1 value is greater than 0, P1 removes the outer MPLS label of the packet, buffers the TTL–1 value, copies the value to the new outer MPLS label, searches the forwarding table for the outbound interface, and forwards the packet to P2.
  4. Similar to P1, P2 also determines whether the TTL–1 value in the outer label of the received packet is 0.
    • If the TTL–1 value is 0, an MPLS TTL timeout occurs, and P2 sends the packet to the Rx/Tx module for processing.
    • If the TTL–1 value is greater than 0, P2 removes the outer MPLS label of the packet, buffers the TTL–1 value, copies the value to the new outer MPLS label, searches the forwarding table for the outbound interface, and forwards the packet to PE2.
  5. P2 forwards the packet to PE2, and PE2 returns an MPLS Echo Reply packet to PE1.

MPLS in UDP

MPLS in UDP Fundamentals
As shown in Figure 1-2638, the controller manages both the DCNs and DCI WAN. Each gateway (GW) sends a request to the controller to establish an end-to-end optimal path, and the controller then returns path computation results to the GW. SR-MPLS is deployed on the DCNs to simplify network deployment and management. In addition, traffic needs to traverse the MANs and DCI WAN so that DCN A and DCN B can communicate with each other. The DCI WAN is newly built and supports SR. However, the MANs are legacy networks and do not support SR. For this reason, SR-MPLS traffic cannot be forwarded on the MANs.
Figure 1-2638 Typical usage scenario of MPLS in UDP

MPLS in UDP is a DCN overlay technology that encapsulates MPLS or SR-MPLS packets into UDP packets, allowing the packets to traverse some networks that do not support MPLS or SR-MPLS. MPLS in UDP solves the problem of bearer protocol conversion between the DCN and WAN and unifies the tunnel protocols run on the DCN and WAN.

In Figure 1-2638, an MPLS-in-UDP tunnel is established between GW1 and DCI-PE1 using the MPLS in UDP technology. In this way, traffic from DCN A to DCN B can be carried over SR, implementing end-to-end traffic optimization. The data forwarding process is as follows:
  1. Device 1 encapsulates the packet sent from VMa1 to VMb1 into a VXLAN packet and sends the VXLAN packet to the DCN gateway GW1.

  2. After receiving the VXLAN packet, GW1 on DCN A performs SR-MPLS encapsulation according to the path computation result of the controller. Because MAN A does not support SR, GW1 encapsulates the SR-MPLS packet into a UDP packet and steers it into the MPLS-in-UDP tunnel. The MAN forwards the packet to DCI-PE1 through UDP.

  3. After receiving the SR-MPLS packet, DCI-PE1 forwards it to DCI-PE2 through SR-MPLS TE.

  4. DCI-PE2 forwards the packet to MAN B over an IP route.

  5. MAN B forwards the packet to GW2 over an IP route.

  6. GW2 forwards the packet to Device3 over an IP route. After receiving the packet, Device3 performs VXLAN decapsulation and then sends it to VMb1.

To prevent attacks initiated using invalid UDP packets, MPLS in UDP supports source IP address (the address of an MPLS-in-UDP tunnel ingress) verification. The packets are accepted only when the source IP address verification succeeds. Otherwise, the packets are discarded. In Figure 1-2638, source IP address verification can be enabled on DCI-PE1, which is the egress of an MPLS-in-UDP tunnel. After DCI-PE1 receives an MPLS-in-UDP packet, it verifies the source IP address of the packet according to the configured rule, improving transmission security.

If a device serves as the egress node of multiple MPLS-in-UDP tunnels, a valid source IP address list must be configured based on specified local IP addresses.
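A minimal sketch of the egress-side source IP address verification described above; the address values are invented for illustration:

```python
# Hedged sketch of the check a tunnel egress (such as DCI-PE1) applies:
# accept an MPLS-in-UDP packet only if its source address matches the
# configured list of valid tunnel-ingress addresses.
import ipaddress

VALID_SOURCES = {"2.2.2.2", "3.3.3.3"}   # configured ingress addresses (assumed)

def accept_mpls_in_udp(src_ip):
    ipaddress.ip_address(src_ip)          # reject malformed addresses early
    return src_ip in VALID_SOURCES        # True: accept; False: discard

print(accept_mpls_in_udp("2.2.2.2"))   # True  -> packet accepted
print(accept_mpls_in_udp("9.9.9.9"))   # False -> packet discarded
```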

MPLS in UDP6 Fundamentals

As shown in Figure 1-2639, SR-MPLS is deployed on the cloud backbone network, which is connected to a DCN through an IPv6 fabric network.

MPLS in UDP6 is a DCN overlay technology that encapsulates MPLS or SR-MPLS packets into UDP6 packets, allowing the packets to traverse some networks that do not support MPLS or SR-MPLS.
Figure 1-2639 Typical usage scenario of MPLS in UDP6
In Figure 1-2639, an MPLS-in-UDP6 tunnel is established between GW1 and PE1 using the MPLS in UDP6 technology. In this way, traffic from the DCN to the Internet can be carried over SR, implementing end-to-end traffic optimization. The data forwarding process is as follows:
  1. The DCN gateway GW1 sends an original IPv6 packet to Device3.

  2. Device3 performs VPN encapsulation and sends the packet to PE1 over the MPLS-in-UDP6 tunnel.

  3. PE1 identifies the outer IPv6 destination address of the packet, searches the IPv6 routing table, encapsulates the packet with a 6PE label, and then forwards the packet to the ASBR through SR-MPLS TE.

  4. The ASBR performs 6PE decapsulation and then MPLS-in-UDP6 decapsulation, searches the IPv6 routing table of the corresponding VPN instance, and sends the IPv6 packet to PE2 accordingly.

MPLS in UDP/MPLS in UDP6 Packet Encapsulation Format
Figure 1-2640 shows the packet encapsulation format used in MPLS in UDP/MPLS in UDP6.
Figure 1-2640 MPLS in UDP/MPLS in UDP6 packet encapsulation format
Table 1-1070 Fields in an MPLS in UDP/MPLS in UDP6 packet

Field

Length

Description

Source Port = Entropy

16 bits

Source port number generated by the device that encapsulates packets. The value is an integer ranging from 49152 to 65535.

Dest Port = MPLS

16 bits

Destination port number, which is fixed at 6635 to indicate MPLS in UDP/MPLS in UDP6

UDP Length

16 bits

UDP packet length

UDP Checksum

16 bits

UDP checksum

MPLS Label Stack

Variable

MPLS label stack in the packet

Message Body

Variable

MPLS packet content
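The UDP header layout in the table above can be sketched with Python's struct module. Only the destination port (fixed at 6635) and the entropy source-port range (49152-65535) come from the table; all other values are illustrative:

```python
# Sketch of the MPLS-in-UDP encapsulation described in Table 1-1070.
import random
import struct

def build_mpls_in_udp(mpls_label_stack: bytes, payload: bytes) -> bytes:
    src_port = random.randint(49152, 65535)   # entropy value, per the table
    dst_port = 6635                           # fixed: indicates MPLS in UDP
    body = mpls_label_stack + payload
    udp_len = 8 + len(body)                   # UDP header is 8 bytes
    checksum = 0                              # a real stack computes this over
                                              # the pseudo-header; 0 here
    header = struct.pack("!HHHH", src_port, dst_port, udp_len, checksum)
    return header + body

# One 4-byte MPLS label stack entry: label=16045, EXP=0, S=1 (bottom), TTL=64.
label = struct.pack("!I", (16045 << 12) | (1 << 8) | 64)
pkt = build_mpls_in_udp(label, b"payload")
print(struct.unpack("!H", pkt[2:4])[0])   # 6635 (destination port)
```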

SR-MPLS TTL

The TTL field in MPLS packets transmitted over public SR-MPLS TE and SR-MPLS BE tunnels is processed in either of the following modes:

  • Uniform mode

    When an IP packet passes through an SR-MPLS network, the IP TTL decreases by 1 and is mapped to the MPLS TTL field on the ingress PE1. Then, the MPLS TTL decreases by 1 each time the packet passes through a node on the SR-MPLS network. After the packet arrives at the egress PE2, the MPLS TTL decreases by 1 and is compared with the IP TTL value. Then, the smaller value is mapped to the IP TTL field. Figure 1-2641 shows how the TTL is processed in uniform mode.

    Figure 1-2641 TTL processing in uniform mode
  • Pipe mode

    When an IP packet passes through an SR-MPLS network, the IP TTL decreases by 1 and the MPLS TTL is set to a fixed value on the ingress PE1. Then, the MPLS TTL decreases by 1 each time the packet passes through a node on the SR-MPLS network. After the packet arrives at the egress PE2, the IP TTL decreases by 1. To summarize, in pipe mode, the IP TTL in a packet decreases by 1 only on the ingress PE1 and egress PE2, no matter how many hops exist between the ingress and egress on the SR-MPLS network. Figure 1-2642 shows how the TTL is processed in pipe mode.

    Figure 1-2642 TTL processing in pipe mode
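The two modes can be contrasted in a short sketch; the hop counts, initial TTLs, and fixed MPLS TTL below are illustrative:

```python
# Side-by-side sketch of uniform and pipe TTL processing for a packet
# crossing an SR-MPLS core with `hops` transit nodes between PE1 and PE2.

def uniform_mode(ip_ttl, hops):
    ip_ttl -= 1                       # ingress PE1 decrements the IP TTL
    mpls_ttl = ip_ttl                 # ...and maps it into the MPLS TTL
    mpls_ttl -= hops                  # each transit node decrements MPLS TTL
    mpls_ttl -= 1                     # egress PE2 decrements once more
    return min(mpls_ttl, ip_ttl)      # smaller value is mapped back to IP TTL

def pipe_mode(ip_ttl, hops, fixed_mpls_ttl=255):
    ip_ttl -= 1                       # ingress PE1 decrements the IP TTL
    mpls_ttl = fixed_mpls_ttl - hops  # MPLS TTL starts from a fixed value and
                                      # is never mapped back to the IP TTL
    ip_ttl -= 1                       # egress PE2 decrements the IP TTL again
    return ip_ttl                     # core hop count never leaks into IP TTL

print(uniform_mode(64, hops=3))   # 59: every SR-MPLS hop is visible
print(pipe_mode(64, hops=3))      # 62: only the two PEs are visible
```

This is why tools such as IP traceroute see the SR-MPLS core as individual hops in uniform mode but as a single hop in pipe mode.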

Application Scenarios for Segment Routing MPLS

Single-AS SR-MPLS TE

Manual Configuration + PCEP Delegation
On the network shown in Figure 1-2643, labels are allocated on the device side and advertised through an IGP SR extension. Network topology and label information is collected through BGP-LS. An explicit path is manually configured on forwarders to establish an SR-MPLS TE tunnel. If the explicit path is configured in strict label stack mode, controller-based path computation is not required. If it is configured in any other mode, the associated tunnel needs to be delegated to the controller, which then computes a path through PCEP and delivers information about the path label stack to the forwarders to guide data forwarding.
Figure 1-2643 Manual configuration + PCEP delegation
Controller-based Path Computation + PCEP Delegation
On the network shown in Figure 1-2644, labels are allocated on the device side and advertised through an IGP SR extension. Network topology and label information is collected through BGP-LS and then sent to the controller for SR tunnel path computation. After computing such a path, the controller delivers associated information through NETCONF. In this mode, the tunnel can be delegated to the controller, which then delivers information about the path label stack through PCEP to forwarders to guide data forwarding.
Figure 1-2644 Controller-based path computation

DCI services on the network are carried over an L3VPN in per-tenant per-VPN mode. Within each DC, tenants are isolated, and the DC gateway accesses the L3VPN through a VLAN sub-interface. Tenants' VPN services are carried over the primary and backup SR-MPLS TE paths.

Inter-AS E2E SR-MPLS TE

Service Overview

Future networks will be 5G oriented. Transport networks need to be adjusted to meet 5G requirements for network simplification, low latency, and SDN and network functions virtualization (NFV) implementation. E2E SR-MPLS TE can carry VPN services through a complete inter-AS tunnel, which greatly reduces networking and O&M costs and meets carriers' requirement for unified O&M.

Networking Description

Figure 1-2645 illustrates the typical application of inter-AS E2E SR-MPLS TE on a transport network. The overall service model of the network is EVPN over SR-MPLS TE, with an E2E SR-MPLS TE tunnel as the transmission channel and EVPN as the data service model. Being oriented to 5G, this network features simplified solutions, protocols, and tunnels, as well as unified reliability solutions. With the help of a network controller, this network can quickly respond to upper-layer service requirements.

Figure 1-2645 Application of inter-AS E2E SR-MPLS TE on a transport network
Service Deployment
When an inter-AS E2E SR-MPLS TE network is used to carry EVPN services, the specific service deployment is as follows:
  • Configure IGP SR within ASs, establish intra-AS SR-MPLS TE tunnels, and configure BGP EPE between ASs to allocate inter-AS SIDs. In addition, use technologies, such as the binding SID technology, to combine multiple intra-AS SR-MPLS TE tunnels into an inter-AS E2E SR-MPLS TE tunnel.

  • Deploy EVPN to carry various services, including EVPN VPWS, EVPN VPLS, and EVPN L3VPN. Besides EVPN services, traditional BGP L3VPN services can also be smoothly switched to E2E SR-MPLS TE tunnels for transmission.

  • In terms of reliability, for intra-AS SR-MPLS TE tunnels, deploy BFD/SBFD to detect faults and TE hot-standby (HSB) to switch traffic between primary and backup TE LSPs; for inter-AS E2E SR-MPLS TE tunnels, deploy one-arm BFD to detect faults, TE HSB to switch traffic between primary and backup TE LSPs, and VPN FRR to switch traffic between primary and backup E2E SR-MPLS TE tunnels.

SR-MPLS TE Policy Application

On the network shown in Figure 1-2646, DCI services are carried over an L3VPN in per-tenant per-VPN mode. Within each DC, tenants are isolated, and the DC gateway accesses the L3VPN through a VLAN sub-interface. The VPN services of the tenants are carried using different SR-MPLS TE Policies.
Figure 1-2646 SR-MPLS TE Policy application
The key items to be configured are as follows:
  • IGP SR: IGPs are extended to support SR-related functions, such as advertising SR capabilities, allocating labels, and propagating label information.
  • BGP-LS: BGP-LS is mainly used to collect network topology and label information and report the information to a controller for SR-MPLS TE Policy computation.
  • BGP SR-MPLS TE Policy address family-specific BGP peer relationship: Such a peer relationship needs to be established between the controller and PE, so that the controller can deliver an SR-MPLS TE Policy route to the PE through the peer relationship to direct traffic forwarding. A NETCONF neighbor relationship can also be configured between the controller and PE, so that you can configure an SR-MPLS TE Policy through NETCONF.

Terminology for Segment Routing MPLS

Terms

Term

Definition

SR-MPLS BE

A technology that allows an IGP to run the shortest path first (SPF) algorithm to compute an optimal SR LSP on an SR-MPLS network.

SR-MPLS TE

A technology that allows SR tunnels to be created based on TE constraints on an SR-MPLS network.

Acronyms and Abbreviations

Acronym and Abbreviation

Full Name

BGP-LS

BGP Link-State

E2E

end to end

FRR

fast reroute

NETCONF

Network Configuration Protocol

PCE

path computation element

PCEP

Path Computation Element Communication Protocol

SID

segment ID

SR

Segment Routing

SRGB

Segment Routing global block

TE

traffic engineering

TI-LFA FRR

topology-independent loop-free alternate FRR

Segment Routing MPLS Configuration

This chapter describes the basic principles, configuration procedures, and configuration examples of Segment Routing (SR) MPLS.

Overview of Segment Routing MPLS

Segment Routing (SR) is a protocol designed to forward data packets using the source routing model. Segment Routing MPLS (SR-MPLS) is implemented based on the MPLS forwarding plane and is referred to as SR hereinafter. SR divides a network path into several segments and allocates a segment ID (SID) to each segment and forwarding node. The segments and nodes are then sequentially arranged into a segment list to form a forwarding path.

SR encapsulates segment list information that identifies a forwarding path into the packet header for transmission. After receiving a packet, the receive end parses the segment list. If the SID at the top of the segment list identifies the local node, the node removes the SID and executes the follow-up procedure. If the SID at the top does not identify the local node, the node forwards the packet to the next hop in equal cost multiple path (ECMP) mode.
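The receive-side rule described above can be sketched as follows; the SID values are illustrative, and the real next-hop choice involves ECMP hashing that is not modeled here:

```python
# Minimal sketch of segment list processing at a receiving node: if the top
# SID identifies the local node, pop it and continue; otherwise forward the
# packet toward the next hop (ECMP-capable in a real implementation).

def process_segment_list(local_sid, segment_list):
    """Return (action, remaining_segment_list) for one received packet."""
    top = segment_list[0]
    if top == local_sid:
        return "pop_and_continue", segment_list[1:]
    return "forward_ecmp", segment_list

print(process_segment_list(100, [100, 200, 300]))  # ('pop_and_continue', [200, 300])
print(process_segment_list(100, [200, 300]))       # ('forward_ecmp', [200, 300])
```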

SR offers the following benefits:
  • Simplified control plane of the MPLS network

    SR uses a controller or an IGP to uniformly compute paths and allocate labels, without requiring tunneling protocols such as RSVP-TE and LDP. In addition, it can be directly used in the MPLS architecture, without requiring any changes to the forwarding plane.

  • Efficient Topology-Independent Loop-free Alternate (TI-LFA) FRR protection for fast recovery of path failures

    On the basis of remote loop-free alternate (RLFA) FRR, SR provides TI-LFA FRR, which offers node and link protection for all topologies and addresses the weaknesses in conventional tunnel protection technologies.

  • Higher network capacity expansion capabilities

    MPLS TE is a connection-oriented technology that requires devices to exchange and process numerous packets in order to maintain connection states, burdening the control plane. In contrast, SR can control any service path by performing label operations for packets on only the ingress. Because SR does not require transit nodes to maintain path information, it frees up the control plane.

    Moreover, the SR label quantity on a network is the sum of the node quantity and local adjacency quantity, meaning that it is related only to the network scale, rather than the tunnel quantity or service volume.

  • Smoother evolution to SDN networks

    Because SR is designed based on the source routing concept, the ingress controls packet forwarding paths. Moreover, SR can work with the centralized path computation module to flexibly and easily control and adjust paths.

    Given that SR supports both legacy and SDN networks and is compatible with existing devices, it enables existing networks to evolve smoothly to SDN networks in a non-disruptive manner.

Configuration Precautions for Segment Routing MPLS

Feature Requirements

Table 1-1071 Feature requirements

Feature Requirements

Series

Models

BFD is used to detect the reliability of segment lists in an SR-MPLS TE Policy. If all segment lists of the primary candidate path fail, traffic is switched to the backup candidate path. If all segment lists of the backup candidate path fail, BFD goes Down, triggering the SR-MPLS TE Policy to go Down. As a result, VPN FRR switching occurs. In this case, no new candidate path will be selected for traffic forwarding even if there are other candidate paths in the SR-MPLS TE Policy.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

The egress of an MPLS-in-UDP6 tunnel does not support fragment reassembly.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

The egress of an MPLS-in-UDP6 tunnel can implement behavior aggregate or multi-field classification on IP packets, but cannot do so on MPLS packets.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

The egress of an MPLS-in-UDP6 tunnel does not support MAC address-based hashing.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

The value of SR Policy Name is a string of 1 to 255 case-sensitive characters without spaces.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

The value of SR Policy Candidate Path Name is a string of 1 to 255 case-sensitive characters without spaces.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

In SR-MPLS TE ingress over GRE scenarios, GRE header compensation is not performed during TE tunnel statistics collection. The number of bytes and rate of the TE tunnel are different from the actual traffic. You can view GRE tunnel statistics or traffic statistics on the actual outbound interface.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

If the configured static BFD for SR-MPLS TE LSP is incomplete, the SR-MPLS TE LSP goes Down. If a dynamic BFD for SR-MPLS TE LSP is configured before an incomplete static BFD for SR-MPLS TE LSP, the SR-MPLS TE LSP may remain Up. However, after an active/standby switchover is performed or the device is restarted, the SR-MPLS TE LSP goes Down.

If the configured static BFD for SR-MPLS TE tunnel is incomplete, the SR-MPLS TE tunnel goes Down. If dynamic BFD for SR-MPLS TE tunnel is configured first, the SR-MPLS TE tunnel may remain Up. However, after an active/standby switchover is performed or the device is restarted, the SR-MPLS TE tunnel goes Down.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

If the candidate path of an SR-MPLS TE Policy dynamically created by ODN conflicts with that of a statically configured SR-MPLS TE Policy, the static candidate path takes effect. When the static candidate path is deleted, ODN candidate path creation is restored, and the ODN candidate path is delegated for path computation. If the static candidate path is deleted before the new path is computed, service traffic may be interrupted.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

A maximum of five levels of load balancing may be performed in SR-MPLS TE Policy multi-level load balancing scenarios:

Level 1: VPN ECMP

Level 2: Load balancing among multiple SR-MPLS TE Policies in an SR-MPLS TE Policy group

Level 3: Load balancing among multiple segment lists in an SR-MPLS TE Policy

Level 4: Load balancing based on node/adjacency SIDs in a segment list

Level 5: Load balancing among the member interfaces of an Eth-Trunk, which functions as the outbound interface corresponding to a specified adjacency SID

In scenarios with more than three levels of load balancing, if the number of channels involved in level-1 load balancing and that involved in level-4 load balancing are integer multiples of each other and the hash factors used for the two levels of load balancing are the same, level-4 load balancing fails.

This limitation also applies to level-2 and level-5 load balancing.

In this case, you can run the load-balance hash-arithmetic command to change the hash algorithm to the XOR algorithm in order to add one level of load balancing. For details, see the description of the load-balance hash-arithmetic command in the product documentation.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

When SR-MPLS TE Policy service packets are encapsulated with more than 13 SIDs (for example, in an SR-MPLS TE Policy TI-LFA scenario), the downstream device cannot parse the IP 5-tuple of the packets, causing uneven load balancing.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

When binding SIDs are used for inter-AS E2E SR-MPLS TE tunnels, the following restrictions exist:

1. When the number of layers in a label stack exceeds 3, the tunnel does not go Up.

2. If a one-layer label stack is configured and the label type is Binding SID, the packet priority mapping, statistics collection, MTU, and TTL functions of the tunnel are not supported.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

If an SR-MPLS TE tunnel uses a BGP-EPE label as the first label and the BGP-EPE label corresponds to multiple outbound interfaces, TE bandwidth limit is not supported.

NetEngine 8000 X

NetEngine 8000 X4/NetEngine 8000 X8/NetEngine 8000 X16/NetEngine 8000E X8/NetEngine 8100 X

Segment Routing MPLS ingress and egress nodes cannot be configured as anycast protection nodes. Anycast protection means that multiple devices are configured with the same prefix SID.


Segment Routing tunnels support only upstream NetStream sampling on the ingress and NetStream sampling on the egress, but do not support downstream NetStream sampling on the ingress or NetStream sampling on a transit node.


SR-MPLS TE Policies do not support recursion between tunnels.

1. SR-MPLS TE Policies cannot recurse to other tunnels (including other SR-MPLS TE Policies).

2. Other types of tunnels cannot recurse to SR-MPLS TE Policies.

Incorrect configuration will cause traffic interruption.


Segment Routing tunnels established using node SIDs do not support rate limiting.

When the controller delivers SR tunnels established based on node SIDs, bandwidth must be guaranteed.


SR-MPLS TE tunnels established using adjacency SIDs do not support SR-MPLS TE anycast protection.

Anycast protection can be configured for SR-MPLS TE tunnels consisting of node SIDs.


The function of detecting BGP EPE outbound interface faults and triggering service switching through SBFD for SR-MPLS TE tunnel can be used only when the following conditions are met:

1. The destination of the SR-MPLS TE tunnel is the local ASBR node of BGP EPE.

2. The innermost label in an SR-MPLS TE label stack is a BGP EPE label.

3. SBFD for SR-MPLS TE tunnel has been deployed.


The function of detecting BGP EPE outbound interface faults and triggering service switching through SBFD for SR-MPLS TE tunnel poses the following risks if seven or more BGP EPE links fail at the same time in BGP EPE load balancing scenarios:

1. Although traffic can continue to be forwarded, SR-MPLS TE SBFD goes down, triggering traffic convergence or switching on the ingress of an SR-MPLS TE tunnel.

2. Data packets are discarded, but SBFD is still up. As a result, traffic does not recover until BGP EPE convergence is complete.

Do not deploy seven or more BGP EPE links for load balancing.


The function of detecting BGP EPE outbound interface faults and triggering service switching through SBFD for SR-MPLS TE tunnel applies only to scenarios where inter-AS public IP unicast packets are transmitted over SR-MPLS TE tunnels.

Suggestions:

1. Disable the MPLS TE IGP shortcut function on SR-MPLS TE tunnel interfaces to prevent unexpected public IP traffic from recursing to inter-AS SR-MPLS TE tunnels.

2. Configure redirection to steer inter-AS traffic to inter-AS SR-MPLS TE tunnels.

3. Run the mpls te reserved-for-binding command on SR-MPLS TE tunnel interfaces to prevent VPN traffic from recursing to inter-AS SR-MPLS TE tunnels.


A maximum of 64 BGP EPE links can be configured for load balancing. If more than 64 BGP EPE links exist, the first 64 links are selected based on the OutIfIndex values of outbound interfaces in descending order. If the sixty-fourth and sixty-fifth links have the same OutIfIndex value, the one with the largest NextHop value is selected.


SR-MPLS TE Policy load balancing based on segment list weights has the following restrictions:

1. The load balancing precision is 1/128.

2. A segment list whose weight proportion is less than 1/128 does not forward traffic.


In a scenario where SR-MPLS TE Policy load balancing is implemented based on segment list weights, segment list weight adjustment has the following restrictions:

1. If SBFD for Segment List is not configured and the segment list weight is changed from a non-zero value to 0, packet loss may occur.

2. If SBFD for Segment List is not configured and the segment list weight is changed from 0 to a non-zero value, packet loss may occur.

3. If SBFD for Segment List is configured and the segment list weight is changed from a non-zero value to 0, the protocol side has a deletion delay (the default deletion delay is 20s and can be configured). A proper deletion delay can prevent packet loss.

4. If SBFD for Segment List is configured and the segment list weight is changed from 0 to a non-zero value, packet loss may occur.

You are advised to configure SBFD for Segment List and run the sr-te-policy delete-delay command to adjust the delay after which the segment list is deleted in a scenario where SBFD goes Down. This reduces or prevents packet loss when the segment list weight is changed from a non-zero value to 0.
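Following the advice above, a minimal sketch (the view in which sr-te-policy delete-delay is run and the 60-second value are assumptions; the default delay is 20s):

```
system-view
segment-routing
 sr-te-policy delete-delay 60
commit
```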


Path verification cannot be enabled on the ingress of an inter-AS SR-MPLS TE tunnel. If path verification is enabled, LSPs cannot be created due to path verification failures. Ensure that the mpls te path verification enable command is not run. By default, the function is disabled.


If SBFD is enabled after SR-MPLS TE Policy configuration, the candidate paths under the SR-MPLS TE Policy and the segment lists of the candidate paths are initially up. However, if SBFD is not configured on the reflector end, the segment list state is set to down after the BFD negotiation timeout period (5 minutes) expires.

You are advised to enable SBFD before SR-MPLS TE Policy configuration. In addition, configure SBFD on the reflector end.


If the first hop of an explicit path is assigned a BSID (indicating that the tunnel referencing this explicit path is forwarded through the tunnel to which the BSID is assigned), the explicit path used by the BSID's tunnel cannot itself have a BSID as its first hop. That is, BSIDs cannot be nested.


If BFD is configured on the ingress to forward return packets over the tunnel corresponding to the specified BSID but the egress does not have the tunnel corresponding to the BSID, the BFD session goes Down.

You are advised to configure the tunnel corresponding to the BSID on the egress to ensure that the BFD return tunnel is Up.


If the outer label of the explicit path label stack of an SR-MPLS TE tunnel is a BSID or the label stack contains a BSID, the depth of the label stack cannot exceed 3 layers. Otherwise, the tunnel cannot go Up.


CSPF local path computation for SR-MPLS TE tunnels supports node labels. The prefix SID of a node must be unique in an IGP domain.


When a BSID is configured for an explicit path, the system does not check whether the tunnel corresponding to the BSID exists or is Up. If the tunnel does not exist or is not Up, the tunnel established over the explicit path can go Up but fails to forward traffic. Therefore, ensure that the configuration is correct.

You are advised to configure BFD.


After a BSID is configured for a tunnel and is referenced by an explicit path, the BSID of the tunnel cannot be modified or deleted.


After the device is restarted, no action is taken if the BFD session is Up. If BFD is not Up, the Flex-Algo LSP is set to Down.


When CSPF is used to compute SR-MPLS TE LSPs, if the involved node supports SR and has IGP multi-process configured, ensure that node labels are configured or not configured for all processes on the node. Otherwise, CSPF-based path computation may fail.


If a BSID is configured for a tunnel and is referenced by an explicit path, the tunnel cannot be deleted.


After device restart, tunnels are not processed if the BFD state is Up. If the BFD state is not Up, the tunnels are set to Down.


In a scenario where a device is restarted, if a BFD session or peer is in the Admin Down state, the SR-MPLS TE Policy status is not affected. However, if the BFD session is renegotiated and goes Down, the SR-MPLS TE Policy also goes Down.


You are advised to configure the same SRGB range on the entire network. Ensure that the SR-MPLS BE prefix/node SID of a device does not exceed the SRGB range of other nodes on the network.


The SID forwarding entry corresponding to an IP route prefix is generated only when the IP route of the involved process is active. When multiple processes or protocols are deployed, observe the following restrictions. Otherwise, SR-MPLS BE and SR-MPLS TE traffic may fail to be forwarded.

1. Do not advertise the IP route prefix used for SID advertisement in multiple processes or protocols at the same time.

2. If the route must be advertised through multiple processes or protocols at the same time, ensure that the IP route advertised by the protocol or process used for SID advertisement has the highest priority on the device within the SID advertisement range.


When the outer label in the label stack of an SR-MPLS TE explicit path is a binding SID, the label stack depth cannot exceed three layers.


When the automatic bandwidth sampling interval of the PCE-initiated path delivered by the controller is less than 60s, 60s takes effect.


When the automatic bandwidth adjustment interval configured for a controller-delivered PCE-initiated SR-MPLS TE Policy is less than 300s, the interval of 300s takes effect.


Configuring an IS-IS SR-MPLS BE Tunnel

This section describes the detailed steps for configuring an IS-IS SR-MPLS BE tunnel.

Usage Scenario

Creating an IS-IS SR-MPLS BE tunnel involves the following operations:

  • Devices report topology information to a controller (if the controller is used to create a tunnel) and are assigned labels.

  • The devices compute paths.

Pre-configuration Tasks

Before configuring an IS-IS SR-MPLS BE tunnel, complete the following tasks:

  • Configure IS-IS to implement network layer connectivity for NEs.

Enabling MPLS

Enabling MPLS on each node in an SR-MPLS domain is a prerequisite for configuring an SR-MPLS BE tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mpls lsr-id lsr-id

    An LSR ID is configured for the local node.

    Note the following during LSR ID configuration:
    • Configuring LSR IDs is the prerequisite for all MPLS configurations.
    • LSRs do not have default IDs. LSR IDs must be manually configured.
    • Using the address of a loopback interface as the LSR ID is recommended for an LSR.

  3. Run mpls

    The MPLS view is displayed.

  4. Run commit

    The configuration is committed.
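Steps 1 to 4 above can be combined as follows (the LSR ID 1.1.1.1 is an assumed example of a loopback interface address):

```
system-view
mpls lsr-id 1.1.1.1
mpls
commit
```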

Configuring Basic SR-MPLS BE Functions

This section describes how to configure basic SR-MPLS BE functions.

Context

Basic SR-MPLS BE function configurations mainly involve enabling SR globally, specifying an SR-MPLS-specific SRGB range, and configuring an SR-MPLS prefix SID.

Procedure

  1. Enable SR globally.
    1. Run system-view

      The system view is displayed.

    2. Run segment-routing

      SR is enabled globally, and the Segment Routing view is displayed.

    3. Run commit

      The configuration is committed.

    4. Run quit

      Return to the system view.

  2. Configure an SR-MPLS-specific SRGB range.
    1. Run system-view

      The system view is displayed.

    2. Run isis [ process-id ]

      The IS-IS view is displayed.

    3. Run network-entity net-addr

      A network entity title (NET) is configured.

    4. Run cost-style { wide | compatible | wide-compatible }

      The IS-IS wide metric function is enabled.

    5. Run segment-routing mpls

      IS-IS SR-MPLS is enabled.

    6. Run segment-routing global-block begin-value end-value [ ignore-conflict ]

      An SR-MPLS-specific SRGB range is configured for the current IS-IS instance.

      If a message is displayed indicating that a label in the specified SRGB range is in use, you can use the ignore-conflict parameter to enable configuration delivery. However, the configuration will not take effect until the device is restarted and the label is released. In general, using the ignore-conflict parameter is not recommended.

    7. (Optional) Run segment-routing mpls over gre

      The device is enabled to recurse SR-MPLS routes to GRE tunnels.

    8. Run commit

      The configuration is committed.

    9. Run quit

      Return to the system view.

  3. Configure an SR-MPLS prefix SID.
    1. Run system-view

      The system view is displayed.

    2. Run interface loopback loopback-number

      A loopback interface is created, and the loopback interface view is displayed.

    3. Run isis enable [ process-id ]

      IS-IS is enabled on the interface.

    4. Run ip address ip-address { mask | mask-length }

      The IP address is configured for the loopback interface.

    5. Run isis prefix-sid { absolute sid-value | index index-value } [ node-disable ]

      An SR-MPLS prefix SID is configured for the IP address of the loopback interface.

    6. Run commit

      The configuration is committed.
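The three configuration steps above can be sketched end to end as follows (the process ID, NET, SRGB range, loopback address, and SID index are assumed example values, not mandatory settings):

```
system-view
segment-routing
quit
isis 1
 network-entity 10.0000.0000.0001.00
 cost-style wide
 segment-routing mpls
 segment-routing global-block 16000 23999
quit
interface loopback 0
 isis enable 1
 ip address 1.1.1.1 32
 isis prefix-sid index 10
commit
```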

(Optional) Configuring Strict SR-MPLS Capability Check

Strict SR-MPLS capability check enables a node, when computing an SR-MPLS BE LSP (SR LSP for short), to check whether all nodes along the SR LSP support SR-MPLS. This prevents forwarding failures caused by the existence of SR-MPLS-incapable nodes on the SR LSP.

Context

If strict SR-MPLS capability check is configured and SR-MPLS-incapable nodes exist on the optimal path from the current node to the destination node, the SR LSP to the destination node fails to be established.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run isis [ process-id ]

    The IS-IS view is displayed.

  3. Run sr-lsp strict-check

    Strict SR-MPLS capability check is configured.

  4. Run commit

    The configuration is committed.
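A minimal sketch of the procedure above, assuming IS-IS process 1:

```
system-view
isis 1
 sr-lsp strict-check
commit
```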

(Optional) Configuring a Policy for Triggering SR-MPLS BE LSP Establishment

A policy can be configured to allow the ingress node to establish SR-MPLS BE LSPs based on eligible routes.

Context

After SR is enabled, a large number of SR-MPLS BE LSPs will be established if policy control is not configured, wasting resources. To prevent this, configure a policy for establishing SR-MPLS BE LSPs. The policy allows the ingress to use only permitted routes to establish SR-MPLS BE LSPs.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run isis [ process-id ]

    The IS-IS view is displayed.

  3. Run segment-routing lsp-trigger { none | host | ip-prefix ip-prefix-name }

    A policy is configured for the ingress to establish SR-MPLS BE LSPs.

    • host: Host IP routes with 32-bit masks are used as the policy for the ingress to establish SR-MPLS BE LSPs.

    • ip-prefix: FECs that match an IP prefix list are used as the policy for the ingress to establish SR-MPLS BE LSPs.

    • none: The ingress is not allowed to establish SR-MPLS BE LSPs.

  4. Run commit

    The configuration is committed.
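For example, to restrict LSP establishment to prefixes matched by an IP prefix list (the process ID, list name, and prefix are assumed example values):

```
system-view
ip ip-prefix sr-be-list index 10 permit 1.1.1.1 32
isis 1
 segment-routing lsp-trigger ip-prefix sr-be-list
commit
```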

(Optional) Configuring a Policy to Prioritize SR-MPLS BE Tunnels

You can configure a policy to prioritize SR-MPLS BE tunnels so that they can be preferentially selected.

Context

In a tunnel recursion scenario, an LDP tunnel is preferentially selected to forward traffic by default. To enable a device to preferentially select an SR-MPLS BE tunnel, increase the SR-MPLS BE tunnel priority so that it takes precedence over the LDP tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run segment-routing

    The SR view is displayed.

  3. Run tunnel-prefer segment-routing

    SR-MPLS BE tunnels are configured to take precedence over LDP tunnels.

  4. Run commit

    The configuration is committed.

Verifying the IS-IS SR-MPLS BE Tunnel Configuration

After configuring an SR-MPLS BE tunnel, verify the configuration of the SR-MPLS BE tunnel.

Prerequisites

The SR-MPLS BE functions have been configured.

Procedure

After completing the configurations, you can run the following commands to check the configurations.

  • Run the display isis lsdb [ { level-1 | level-2 } | verbose | { local | lsp-id | is-name symbolic-name } ] * [ process-id | vpn-instance vpn-instance-name ] command to check IS-IS LSDB information.

  • Run the display segment-routing prefix mpls forwarding command to check the label forwarding information base for Segment Routing.

Configuring an OSPF SR-MPLS BE Tunnel

This section describes how to configure an OSPF SR-MPLS BE tunnel.

Usage Scenario

Creating an OSPF SR-MPLS BE tunnel involves the following operations:

  • Devices report topology information to a controller (if the controller is used to create a tunnel) and are assigned labels.

  • The devices compute paths.

Pre-configuration Tasks

Before configuring an OSPF SR-MPLS BE tunnel, complete the following tasks:

  • Configure OSPF to implement network layer connectivity for NEs.

Enabling MPLS

Enabling MPLS on each node in an SR-MPLS domain is a prerequisite for configuring an SR-MPLS BE tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mpls lsr-id lsr-id

    An LSR ID is configured for the local node.

    Note the following during LSR ID configuration:
    • Configuring LSR IDs is the prerequisite for all MPLS configurations.
    • LSRs do not have default IDs. LSR IDs must be manually configured.
    • Using the address of a loopback interface as the LSR ID is recommended for an LSR.

  3. Run mpls

    The MPLS view is displayed.

  4. Run commit

    The configuration is committed.

Configuring Basic SR-MPLS BE Functions

This section describes how to configure basic SR-MPLS BE functions.

Context

Basic SR-MPLS BE function configurations mainly involve enabling SR globally, specifying an SR-MPLS-specific SRGB range, and configuring an SR-MPLS prefix SID.

Procedure

  1. Enable SR globally.
    1. Run system-view

      The system view is displayed.

    2. Run segment-routing

      SR is enabled globally, and the Segment Routing view is displayed.

    3. Run commit

      The configuration is committed.

    4. Run quit

      Return to the system view.

  2. Configure an SR-MPLS-specific SRGB range.
    1. Run ospf [ process-id ]

      The OSPF view is displayed.

    2. Run opaque-capability enable

      The Opaque LSA capability is enabled.

    3. Run segment-routing mpls

      OSPF SR-MPLS is enabled.

    4. Run segment-routing global-block begin-value end-value [ ignore-conflict ]

      An SR-MPLS-specific OSPF SRGB range is configured.

      If a message is displayed indicating that a label in the specified SRGB range is in use, you can use the ignore-conflict parameter to enable configuration delivery. However, the configuration will not take effect until the device is restarted and the label is released. In general, using the ignore-conflict parameter is not recommended.

    5. (Optional) Run segment-routing mpls over gre

      The device is enabled to recurse SR-MPLS routes to GRE tunnels.

    6. Run commit

      The configuration is committed.

    7. Run quit

      Return to the system view.

  3. Configure an SR-MPLS prefix SID.
    1. Run interface loopback loopback-number

      A loopback interface is created, and the loopback interface view is displayed.

    2. Run ospf enable [ process-id ] area area-id

      OSPF is enabled on the interface.

    3. Run ip address ip-address { mask | mask-length }

      The IP address is configured for the loopback interface.

    4. Run ospf prefix-sid { absolute sid-value | index index-value } [ node-disable ]

      An SR-MPLS prefix SID is configured for the IP address of the loopback interface.

    5. Run commit

      The configuration is committed.
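The three configuration steps above can be sketched end to end as follows (the process ID, area, SRGB range, loopback address, and SID index are assumed example values, not mandatory settings):

```
system-view
segment-routing
quit
ospf 1
 opaque-capability enable
 segment-routing mpls
 segment-routing global-block 16000 23999
quit
interface loopback 0
 ospf enable 1 area 0
 ip address 1.1.1.1 32
 ospf prefix-sid index 10
commit
```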

(Optional) Configuring Strict SR-MPLS Capability Check

Strict SR-MPLS capability check enables a node, when computing an SR-MPLS BE LSP (SR LSP for short), to check whether all nodes along the SR LSP support SR-MPLS. This prevents forwarding failures caused by the existence of SR-MPLS-incapable nodes on the SR LSP.

Context

If strict SR-MPLS capability check is configured and SR-MPLS-incapable nodes exist on the optimal path from the current node to the destination node, the SR LSP to the destination node fails to be established.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run ospf [ process-id ]

    The OSPF view is displayed.

  3. Run sr-lsp strict-check

    Strict SR-MPLS capability check is configured.

  4. Run commit

    The configuration is committed.

(Optional) Configuring a Policy for Triggering SR-MPLS BE LSP Establishment

A policy can be configured to allow the ingress node to establish SR-MPLS BE LSPs based on eligible routes.

Context

After SR is enabled, a large number of SR-MPLS BE LSPs will be established if policy control is not configured, wasting resources. To prevent this, configure a policy for establishing SR-MPLS BE LSPs. The policy allows the ingress to use only permitted routes to establish SR-MPLS BE LSPs.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run ospf [ process-id ]

    The OSPF view is displayed.

  3. Run segment-routing lsp-trigger { none | host | ip-prefix ip-prefix-name }

    A policy is configured for the ingress to establish SR-MPLS BE LSPs.

    • host: Host IP routes with 32-bit masks are used as the policy for the ingress to establish SR-MPLS BE LSPs.

    • ip-prefix: FECs that match an IP prefix list are used as the policy for the ingress to establish SR-MPLS BE LSPs.

    • none: The ingress is not allowed to establish SR-MPLS BE LSPs.

  4. Run commit

    The configuration is committed.

(Optional) Configuring a Policy to Prioritize SR-MPLS BE Tunnels

You can configure a policy to prioritize SR-MPLS BE tunnels so that they can be preferentially selected.

Context

In a tunnel recursion scenario, an LDP tunnel is preferentially selected to forward traffic by default. To enable a device to preferentially select an SR-MPLS BE tunnel, increase the SR-MPLS BE tunnel priority so that it takes precedence over the LDP tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run segment-routing

    The SR view is displayed.

  3. Run tunnel-prefer segment-routing

    SR-MPLS BE tunnels are configured to take precedence over LDP tunnels.

  4. Run commit

    The configuration is committed.

Verifying the OSPF SR-MPLS BE Tunnel Configuration

After successfully configuring SR-MPLS BE, verify SR-MPLS BE configurations.

Prerequisites

The SR-MPLS BE functions have been configured.

Procedure

After completing the configurations, you can run the following commands to check the configurations.

  • Run the display segment-routing prefix mpls forwarding command to check the label forwarding information base for Segment Routing.

Configuring Traffic Steering into SR-MPLS BE Tunnels

Traffic steering into SR-MPLS BE tunnels involves recursing routes to the tunnels and forwarding data through them.

Usage Scenario

After an SR-MPLS BE tunnel is configured, traffic needs to be steered into the tunnel for forwarding. This process is called traffic steering. Currently, SR-MPLS BE tunnels can carry various routes and services, such as BGP and static routes as well as BGP4+ 6PE, BGP L3VPN, and EVPN services. The main traffic steering modes supported by SR-MPLS BE tunnels are as follows:
  • Static route mode: When configuring a static route, set the next hop of the route to the destination address of an SR-MPLS BE tunnel. This ensures that traffic transmitted over the route can be steered into the SR-MPLS BE tunnel. For details about how to configure a static route, see Creating IPv4 Static Routes.
  • Tunnel policy mode: This mode is implemented through tunnel selector configuration. It allows VPN service routes and non-labeled public network routes to recurse to SR-MPLS BE tunnels. The configuration varies according to the service type.

This section describes how to configure routes and services to recurse to SR-MPLS BE tunnels through tunnel policies.

Pre-configuration Tasks

Before configuring traffic steering into SR-MPLS BE tunnels, complete the following tasks:

  • Configure BGP routes, static routes, BGP4+ 6PE services, BGP L3VPN services, BGP L3VPNv6 services, and EVPN services correctly.

  • Configure a filter, such as an IP prefix list, if you want to restrict the route recursive to a specified SR-MPLS BE tunnel.

Procedure

  1. Configure a tunnel policy.

    Select either of the following modes based on the traffic steering mode you select.

    In general, the tunnel selection sequence mode applies to all scenarios, whereas the tunnel selector mode applies to inter-AS VPN Option B and Option C scenarios.

    • Tunnel selection sequence

      1. Run system-view

        The system view is displayed.

      2. Run tunnel-policy policy-name

        A tunnel policy is created and the tunnel policy view is displayed.

      3. (Optional) Run description description-information

        A description is configured for the tunnel policy.

      4. Run tunnel select-seq sr-lsp load-balance-number load-balance-number [ unmix ]

        The tunnel selection sequence and number of tunnels for load balancing are configured.

      5. Run commit

        The configuration is committed.

      6. Run quit

        Return to the system view.

    • Tunnel selector

      1. Run system-view

        The system view is displayed.

      2. Run tunnel-selector tunnel-selector-name { permit | deny } node node

        A tunnel selector is created and the view of the tunnel selector is displayed.

      3. (Optional) Configure if-match clauses.

      4. Run apply tunnel-policy tunnel-policy-name

        A specified tunnel policy is applied to the tunnel selector.

      5. Run commit

        The configuration is committed.

      6. Run quit

        Return to the system view.
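Step 1 above can be sketched as follows (the policy name, selector name, and node number are assumed example values; in most scenarios the tunnel selection sequence alone is sufficient, and the selector is needed only for inter-AS VPN Option B/C):

```
system-view
tunnel-policy p1
 tunnel select-seq sr-lsp load-balance-number 1
quit
tunnel-selector s1 permit node 10
 apply tunnel-policy p1
quit
commit
```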

  2. Configure routes and services to recurse to SR-MPLS BE tunnels.
    • Configure non-labeled public BGP routes and static routes to recurse to SR-MPLS BE tunnels.

      1. Run route recursive-lookup tunnel [ ip-prefix ip-prefix-name ] [ tunnel-policy policy-name ]

        The function to recurse non-labeled public BGP routes and static routes to SR-MPLS BE tunnels is enabled.

      2. Run commit

        The configuration is committed.

    • Configure non-labeled public BGP routes to recurse to SR-MPLS BE tunnels.

      For details about how to configure a non-labeled public BGP route, see Configuring Basic BGP Functions.

      1. Run bgp { as-number-plain | as-number-dot }

        The BGP view is displayed.

      2. Run unicast-route recursive-lookup tunnel [ tunnel-selector tunnel-selector-name ]

        The function to recurse non-labeled public BGP routes to SR-MPLS BE tunnels is enabled.

        The unicast-route recursive-lookup tunnel command and route recursive-lookup tunnel command are mutually exclusive. You can select either of them for configuration.

      3. Run commit

        The configuration is committed.

    • Configure static routes to recurse to SR-MPLS BE tunnels.

      For details about how to configure a static route, see Creating IPv4 Static Routes.

      1. Run ip route-static recursive-lookup tunnel [ ip-prefix ip-prefix-name ] [ tunnel-policy policy-name ]

        The function to recurse static routes to SR-MPLS BE tunnels for MPLS forwarding is enabled.

        The ip route-static recursive-lookup tunnel command and route recursive-lookup tunnel command are mutually exclusive. You can select either of them for configuration.

      2. Run commit

        The configuration is committed.
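The static route recursion above can be sketched as follows (the prefix list and tunnel policy names are assumed example values and must already exist):

```
system-view
ip route-static recursive-lookup tunnel ip-prefix static-list tunnel-policy p1
commit
```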

    • Configure BGP L3VPN services to recurse to SR-MPLS BE tunnels.

      For details about how to configure a BGP L3VPN service, see BGP/MPLS IP VPN Configuration.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family

        The VPN instance-specific IPv4 address family view is displayed.

      3. Run tnl-policy policy-name

        A specified tunnel policy is applied to the VPN instance-specific IPv4 address family.

      4. Run commit

        The configuration is committed.
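The BGP L3VPN steps above can be sketched as follows (the VPN instance and tunnel policy names are assumed example values; the tunnel policy must already exist):

```
system-view
ip vpn-instance vpna
 ipv4-family
  tnl-policy p1
commit
```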

    • Configure BGP L3VPNv6 services to recurse to SR-MPLS BE tunnels.

      For details about how to configure a BGP L3VPNv6 service, see BGP/MPLS IPv6 VPN Configuration.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv6-family

        The VPN instance-specific IPv6 address family view is displayed.

      3. Run tnl-policy policy-name

        A specified tunnel policy is applied to the VPN instance-specific IPv6 address family.

      4. Run commit

        The configuration is committed.

    • Configure BGP4+ 6PE services to recurse to SR-MPLS BE tunnels.

      For details about how to configure a BGP4+ 6PE service, see Configuring BGP4+ 6PE.

      Method 1: Apply a tunnel policy to a specified BGP4+ peer.

      1. Run bgp { as-number-plain | as-number-dot }

        The BGP view is displayed.

      2. Run ipv6-family unicast

        The BGP IPv6 unicast address family view is displayed.

      3. Run peer ipv4-address enable

        A specified 6PE peer is enabled.

      4. Run peer ipv4-address tnl-policy tnl-policy-name

        A specified tunnel policy is applied to the 6PE peer.

      5. Run commit

        The configuration is committed.

      Method 2: Apply a tunnel selector to all the routes of a specified BGP IPv6 unicast address family.

      1. Run bgp { as-number-plain | as-number-dot }

        The BGP view is displayed.

      2. Run ipv6-family unicast

        The BGP IPv6 unicast address family view is displayed.

      3. Run unicast-route recursive-lookup tunnel-v4 [ tunnel-selector tunnel-selector-name ]

        The function to recurse non-labeled public BGP routes to SR-MPLS BE tunnels is enabled.

The unicast-route recursive-lookup tunnel command and the route recursive-lookup tunnel command are mutually exclusive; configure only one of them.

      4. Run commit

        The configuration is committed.

    • Configure EVPN services to recurse to SR-MPLS BE tunnels.

      For details on how to configure EVPN, see EVPN Configuration. The configuration varies according to the service type.

      To apply a tunnel policy to an EVPN L3VPN instance, perform the following steps:
      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family or ipv6-family

        The VPN instance IPv4/IPv6 address family view is displayed.

      3. Run tnl-policy policy-name evpn

        A specified tunnel policy is applied to the EVPN L3VPN instance.

      4. Run commit

        The configuration is committed.

      To apply a tunnel policy to a BD EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name bd-mode

        The BD EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A specified tunnel policy is applied to the BD EVPN instance.

      3. Run commit

        The configuration is committed.

      To apply a tunnel policy to an EVPN instance that works in EVPN VPWS mode, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name vpws

        The view of a specified EVPN instance that works in EVPN VPWS mode is displayed.

      2. Run tnl-policy policy-name

        A specified tunnel policy is applied to the EVPN instance that works in EVPN VPWS mode.

      3. Run commit

        The configuration is committed.

      To apply a tunnel policy to a basic EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name

        The EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A specified tunnel policy is applied to the basic EVPN instance.

      3. Run commit

        The configuration is committed.
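As an illustration of the BD EVPN case, the following sketch applies a tunnel policy to a BD EVPN instance. The policy policy1 is assumed to exist already, and the instance name evrf1 is illustrative:

```
<HUAWEI> system-view
[~HUAWEI] evpn vpn-instance evrf1 bd-mode
[*HUAWEI-evpn-instance-evrf1] tnl-policy policy1
[*HUAWEI-evpn-instance-evrf1] commit
```

The EVPN VPWS and basic EVPN variants differ only in the command used to enter the instance view.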

Configuring IS-IS SR to Communicate with LDP

This section describes how to configure IS-IS SR to communicate with LDP.

Usage Scenario

The SR and LDP interworking technique allows both segment routing and LDP to work within the same network. This technique connects an SR network to an LDP network to implement MPLS forwarding.

On the network shown in Figure 1-2647, an SR domain is created between PE1 that functions as a mapping client and the P device that functions as a mapping server. Mappings between prefixes and SIDs need to be configured on the P device and advertised to PE1, which receives the mappings. An LDP domain lies between the P device and PE2, which supports only LDP. To enable PE1 and PE2 to communicate with each other, you need to establish an SR LSP and an LDP LSP, and establish the mapping between the SR LSP and LDP LSP on the P device.

Figure 1-2647 Communication between SR and LDP

Pre-configuration Tasks

Before you configure IS-IS SR to communicate with LDP, complete the following tasks:

Procedure

  • Configure the mapping server.
    1. Run system-view

      The system view is displayed.

    2. Run segment-routing

      Segment routing is globally enabled, and the Segment Routing view is displayed.

    3. Run mapping-server prefix-sid-mapping ip-address mask-length begin-value [ range range-value ] [ attached ]

      Mapping between the prefix and SID is configured.

    4. Run quit

Exit the Segment Routing view.

    5. Run isis [ process-id ]

      The IS-IS view is displayed.

    6. Run segment-routing mapping-server send

      The local node is enabled to advertise the local SID label mapping.

    7. (Optional) Run segment-routing mapping-server receive

      The local node is enabled to receive SID label mapping messages.

    8. Run commit

      The configuration is committed.
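A minimal sketch of the mapping server configuration on the P device, assuming IS-IS process 1 and a mapping that assigns SID index 100 to 10 consecutive /32 prefixes starting at 10.1.1.1 (addresses, index, and range are illustrative):

```
<P> system-view
[~P] segment-routing
[*P-segment-routing] mapping-server prefix-sid-mapping 10.1.1.1 32 100 range 10
[*P-segment-routing] quit
[*P] isis 1
[*P-isis-1] segment-routing mapping-server send
[*P-isis-1] commit
```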

  • (Optional) Configure the mapping client.

    A device that is not configured as a mapping server functions as a mapping client by default.

    1. Run system-view

      The system view is displayed.

    2. Run isis [ process-id ]

      The IS-IS view is displayed.

    3. Run segment-routing mapping-server receive

      The local node is enabled to receive SID label mapping messages.

    4. Run commit

      The configuration is committed.

  • Configure the device connecting the LDP domain and the SR domain.
    1. Run system-view

      The system view is displayed.

    2. Run mpls

      The MPLS view is displayed.

    3. Run lsp-trigger segment-routing-interworking best-effort host

      A policy for triggering backup LDP LSP establishment is configured.

    4. Run commit

      The configuration is committed.
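On the node that stitches the two domains (the P device in this scenario), the trigger policy reduces to a short sketch:

```
<P> system-view
[~P] mpls
[*P-mpls] lsp-trigger segment-routing-interworking best-effort host
[*P-mpls] commit
```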

Checking the Configurations

After completing the configurations, run the display segment-routing prefix mpls forwarding command to check the label forwarding information base of Segment Routing.

Configuring OSPF SR to Communicate with LDP

This section describes how to configure OSPF SR to communicate with LDP.

Usage Scenario

The SR and LDP interworking technique allows both segment routing and LDP to work within the same network. This technique connects an SR network to an LDP network to implement MPLS forwarding.

On the network shown in Figure 1-2648, an SR domain is created between PE1 that functions as a mapping client and the P device that functions as a mapping server. Mappings between prefixes and SIDs need to be configured on the P device and advertised to PE1, which receives the mappings. An LDP domain lies between the P device and PE2, which supports only LDP. To enable PE1 and PE2 to communicate with each other, you need to establish an SR LSP and an LDP LSP, and establish the mapping between the SR LSP and LDP LSP on the P device.

Figure 1-2648 Communication between SR and LDP

Pre-configuration Tasks

Before you configure OSPF SR to communicate with LDP, complete the following tasks:

Procedure

  • Configure the mapping server.
    1. Run system-view

      The system view is displayed.

    2. Run segment-routing

      Segment routing is globally enabled, and the Segment Routing view is displayed.

    3. Run mapping-server prefix-sid-mapping ip-address mask-length begin-value [ range range-value ] [ attached ]

      Mapping between the prefix and SID is configured.

    4. Run quit

Exit the Segment Routing view.

    5. Run ospf [ process-id ]

      The OSPF view is displayed.

    6. Run segment-routing mapping-server send

      The local node is enabled to advertise the local SID label mapping.

    7. (Optional) Run segment-routing mapping-server receive

      The local node is enabled to receive SID label mapping messages.

    8. Run commit

      The configuration is committed.

  • (Optional) Configure the mapping client.

    A device that is not configured as a mapping server functions as a mapping client by default.

    1. Run system-view

      The system view is displayed.

    2. Run ospf [ process-id ]

      The OSPF view is displayed.

    3. Run segment-routing mapping-server receive

      The local node is enabled to receive SID label mapping messages.

    4. Run commit

      The configuration is committed.

  • Configure the device connecting the LDP domain and the SR domain.
    1. Run system-view

      The system view is displayed.

    2. Run mpls

      The MPLS view is displayed.

    3. Run lsp-trigger segment-routing-interworking best-effort host

      A policy for triggering backup LDP LSP establishment is configured.

    4. Run commit

      The configuration is committed.

Checking the Configurations

After completing the configurations, run the display segment-routing prefix mpls forwarding command to check the label forwarding information base of Segment Routing.

Upgrading LDP to SR-MPLS BE

This section describes how to upgrade LDP to SR-MPLS BE.

Prerequisites

All devices involved in the upgrade run LDP, and traffic has recursed to the LDP LSP between the ingress and egress.

By default, the priority of an LDP LSP is higher than that of an SR-MPLS BE LSP (SR LSP for short).

Context

MPLS LDP is widely deployed on existing networks to carry services such as BGP/MPLS IP VPN services. The SR-MPLS technology, which is designed based on the source routing concept, simplifies network protocols and supports efficient TI-LFA protection technologies, facilitating smooth evolution to SDN networks. Figure 1-2649 shows the recommended process of upgrading LDP to SR-MPLS BE.

Figure 1-2649 Process of upgrading LDP to SR-MPLS BE

Procedure

  1. Upgrade all devices to support SR, specify an SRGB, and configure prefix SIDs for desired loopback interface addresses. For configuration details, see "Configuring Basic SR-MPLS BE Functions" in Configuring an IS-IS SR-MPLS BE Tunnel or Configuring an OSPF SR-MPLS BE Tunnel.

    In this case, although traffic still recurses to the LDP LSP, an SR LSP has already been generated.

  2. Check whether SR LSP entries are correctly generated on the devices.

    1. Run the display segment-routing prefix mpls forwarding command to check information about the SR label forwarding table.
    2. Run the ping lsp command to check end-to-end SR LSP connectivity.

  3. Run the tunnel-prefer segment-routing command to configure the SR LSP to take precedence over the LDP LSP.

    1. Run this command on PE1. Then the traffic whose ingress is PE1 recurses to the SR LSP, whereas the traffic whose ingress is PE2 still recurses to the LDP LSP.
    2. Run this command on PE2. Then the traffic whose ingress is PE2 also recurses to the SR LSP.

    During the configuration, check whether traffic is normal. If the traffic is abnormal, you can run the undo tunnel-prefer segment-routing command to perform a rollback so that the traffic continues to be carried over the LDP LSP.

  4. Run the undo mpls ldp command on each device to delete the LDP configuration.

    In this case, only the SR LSP (not the LDP LSP) exists on the network, meaning that the upgrade process is complete.
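For example, step 3 on PE1 can be sketched as follows, assuming the command is run in the view of IS-IS process 1; the undo form rolls traffic back to the LDP LSP if anything goes wrong:

```
<PE1> system-view
[~PE1] isis 1
[*PE1-isis-1] tunnel-prefer segment-routing
[*PE1-isis-1] commit
```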

Configuring SR-MPLS BE over RSVP-TE

SR-MPLS BE over RSVP-TE applies to scenarios where the core area of a network supports RSVP-TE but the edge areas of the network use SR-MPLS BE. With this function configured, an RSVP-TE tunnel can be considered as a hop of an SR LSP.

Usage Scenario

SR-MPLS BE over RSVP-TE enables an SR-MPLS LSP to span an RSVP-TE area so that VPN services can use the LSP. On a network with VPN services, carriers find it difficult to deploy TE on the entire network in order to implement MPLS traffic engineering. To address this issue, carriers can plan a core TE area and then deploy SR-MPLS BE on edge PEs in this area.

Pre-configuration Tasks

Before configuring SR-MPLS BE over RSVP-TE, complete the following tasks:

  • Configure IGP to ensure connectivity between LSRs at the network layer.

  • Configure basic MPLS functions on nodes and interfaces.

  • Enable SR-MPLS on the edge devices of the TE area and the interfaces outside the TE area.

  • Establish an RSVP-TE tunnel between TE nodes.

  • Configure tunnel IP addresses.

  • Configure virtual TE interfaces.

Configuring IGP Shortcut

After IGP shortcut is configured on the ingress of an RSVP-TE tunnel, the associated LSP is not advertised to or used by neighbors.

Context

During path calculation in a scenario where IGP shortcut is configured, the device calculates an SPF tree based on the paths in the IGP physical topology, and then finds the SPF nodes on which shortcut tunnels are configured. If the metric of an RSVP-TE tunnel is smaller than that of an SPF node, the device replaces the outbound interfaces of the routes to this SPF node and those of the other routes passing through the SPF node with the RSVP-TE tunnel interface.

IGP shortcut and forwarding adjacency cannot be both configured.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface tunnel tunnel-number

    The tunnel interface view of the specified MPLS TE tunnel is displayed.

  3. Run mpls te igp shortcut isis [ hold-time interval ]

    IGP shortcut is configured.

    hold-time interval specifies the period after which IGP responds to the Down status of a TE tunnel. If a TE tunnel goes Down but this parameter is not specified, routes are recalculated. If this parameter is specified, IGP responds to the Down status of the TE tunnel only when the delay period expires. Given that no other conditions trigger route recalculation during this delay, one of the following situations occurs:
    • If the TE tunnel goes Up after the delay, routes are not recalculated.
    • If the TE tunnel remains Down after the delay, routes are recalculated.

  4. Run mpls te igp metric { absolute | relative } value

    An IGP metric is configured for the TE tunnel.

    When specifying a metric for the TE tunnel in path calculation through IGP shortcut, pay attention to the following points:

    • If the absolute parameter is configured, the metric of the TE tunnel is equal to the configured metric.

    • If the relative parameter is configured, the metric of the TE tunnel is the sum of the metric of the corresponding IGP path and the relative metric.

  5. For IS-IS, run isis enable [ process-id ]

    The IS-IS process is enabled for the tunnel interface.

  6. Run commit

    The configuration is committed.
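The IGP shortcut steps can be sketched as follows; the tunnel number, metric value, and IS-IS process ID are illustrative:

```
<HUAWEI> system-view
[~HUAWEI] interface Tunnel1
[*HUAWEI-Tunnel1] mpls te igp shortcut isis
[*HUAWEI-Tunnel1] mpls te igp metric absolute 10
[*HUAWEI-Tunnel1] isis enable 1
[*HUAWEI-Tunnel1] commit
```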

Follow-up Procedure

If a network fault occurs, IGP re-convergence is performed. In this case, a transient forwarding status inconsistency may occur among nodes because of their different convergence rates, posing the risk of microloops. To avoid microloops, perform the following steps:

Before configuring microloop avoidance, configure CR-LSP backup parameters.

  • For IS-IS:
    1. Run the system-view command to enter the system view.
    2. Run the isis [ process-id ] command to create an IS-IS process and enter its view.
    3. Run the avoid-microloop te-tunnel command to enable microloop avoidance for the IS-IS TE tunnel.
    4. (Optional) Run the avoid-microloop te-tunnel rib-update-delay rib-update-delay command to configure a delay in delivering IS-IS routes whose outbound interface is a TE tunnel interface.
    5. Run the commit command to commit the configuration.

Configuring the Forwarding Adjacency Function

Configure the forwarding adjacency function on the ingress of an RSVP-TE tunnel so that the ingress can advertise an LSP to neighbors for use.

Context

Routing protocols need to perform bidirectional checks on links. After the forwarding adjacency function is enabled for an RSVP-TE tunnel, you also need to configure a reverse RSVP-TE tunnel and enable this function for it.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface tunnel tunnel-number

    The tunnel interface view of the specified MPLS TE tunnel is displayed.

  3. Run mpls te igp advertise [ hold-time interval | include-ipv6-isis ] *

    The forwarding adjacency function is configured.

    If IPv6 IS-IS is used, you need to configure the include-ipv6-isis parameter.

  4. Configure an IGP metric for the TE tunnel.
    • For IS-IS, run the mpls te igp metric absolute value command.

    Configure suitable IGP metrics for RSVP-TE tunnels to ensure that LSPs are correctly advertised and used. The metric of an RSVP-TE tunnel should be smaller than that of an IGP route that is not expected to be used.

  5. Enable the forwarding adjacency function.

    • For IS-IS, run the isis enable [ process-id ] command to enable the IS-IS process for the tunnel interface.

  6. Run commit

    The configuration is committed.

Enabling SR-MPLS BE over RSVP-TE

Enable SR-MPLS route recursion to RSVP-TE tunnels, thereby enabling SR-MPLS BE over RSVP-TE.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run isis [ process-id ]

    The IS-IS view is displayed.

  3. Run segment-routing mpls over rsvp-te

    SR-MPLS route recursion to RSVP-TE tunnels is enabled.

  4. Run commit

    The configuration is committed.
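A minimal sketch, assuming IS-IS process 1 on the ingress:

```
<PE1> system-view
[~PE1] isis 1
[*PE1-isis-1] segment-routing mpls over rsvp-te
[*PE1-isis-1] commit
```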

Verifying the Configuration

After configuring SR-MPLS BE over RSVP-TE, check tunnel information on the ingress of the involved SR LSP.

Prerequisites

SR-MPLS BE over RSVP-TE has been configured.

Procedure

Run the display segment-routing prefix mpls forwarding command to check information about the SR label forwarding table and determine whether the outbound interface of the SR LSP is an RSVP-TE tunnel interface.

Configuring BFD for SR-MPLS BE

BFD for SR LSP can be configured to detect faults of SR LSPs.

Usage Scenario

Segment Routing-MPLS Best Effort (SR-MPLS BE) refers to the use of an IGP to compute an optimal SR LSP based on the shortest path first algorithm. An SR LSP is a label forwarding path that is established using SR and guides data packet forwarding through a prefix or node SID.

BFD for SR-MPLS BE is used to detect SR LSP connectivity. If BFD for SR-MPLS BE detects a fault on the primary SR LSP, an application such as VPN FRR rapidly switches traffic, which minimizes the impact on services.

Pre-configuration Tasks

Before configuring BFD for SR-MPLS BE, complete the following tasks:

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run bfd

    The BFD view is displayed.

  3. Run mpls-passive

    The egress is enabled to create a BFD session passively.

    The egress has to receive an LSP ping request carrying a BFD TLV before creating a BFD session.

  4. Run quit

    Return to the system view.

  5. Run segment-routing

    The Segment Routing view is displayed.

  6. Run bfd enable mode tunnel [ filter-policy ip-prefix ip-prefix-name | effect-sr-lsp ] *

    BFD is enabled for SR LSPs.

    If effect-sr-lsp is specified and BFD goes down, the SR LSPs are withdrawn.

  7. (Optional) Run bfd tunnel { min-rx-interval receive-interval | min-tx-interval transmit-interval | detect-multiplier multiplier-value } *

    BFD parameters are set for SR LSPs.

  8. Run commit

    The configuration is committed.
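The procedure spans both endpoints of the SR LSP: mpls-passive is required on the egress, while BFD enablement and timers are configured on the ingress. A sketch with illustrative device names and timer values:

```
<PE2> system-view
[~PE2] bfd
[*PE2-bfd] mpls-passive
[*PE2-bfd] commit
```

On the ingress:

```
<PE1> system-view
[~PE1] segment-routing
[*PE1-segment-routing] bfd enable mode tunnel
[*PE1-segment-routing] bfd tunnel min-rx-interval 100 min-tx-interval 100 detect-multiplier 3
[*PE1-segment-routing] commit
```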

Verifying the Configuration

After successfully configuring BFD for SR-MPLS BE, run the display segment-routing bfd tunnel session [ prefix ip-address { mask | mask-length } ] command to check information about the BFD session that monitors SR LSPs.

Configuring BFD for SR-MPLS BE (SR and LDP Interworking Scenario)

This section describes how to use BFD to detect an LSP fault in an SR+LDP interworking scenario.

Usage Scenario

SR and LDP interworking enables SR and LDP to work together on the same network. SR-MPLS BE and LDP networks can be connected through this technology to implement MPLS forwarding.

If BFD for SR-MPLS BE detects a fault on the primary tunnel when SR communicates with LDP, an application, such as VPN FRR, rapidly switches traffic to minimize the impact on traffic.

Pre-configuration Tasks

Before configuring BFD for SR-MPLS BE (in the SR and LDP interworking scenario), complete the following tasks:

Procedure

  • Create a BFD session on the SR side.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      The BFD view is displayed.

    3. Run mpls-passive

      The egress is enabled to create a BFD session passively.

      The egress has to receive an LSP ping request carrying a BFD TLV before creating a BFD session.

    4. Run quit

      Return to the system view.

    5. Run segment-routing

      The Segment Routing view is displayed.

    6. Run bfd enable mode tunnel [ filter-policy ip-prefix ip-prefix-name | effect-sr-lsp | nil-fec ] *

      BFD is enabled for SR LSPs.

      If effect-sr-lsp is specified and BFD goes down, the SR LSPs are withdrawn.

      In an SR and LDP interworking scenario, the ingress node cannot detect whether LDP LSPs are stitched to SR LSPs in the LDP to SR direction. In the LSP ping packet triggered by BFD, the encapsulated FEC type is LDP. When the packet arrives at the egress node (SR node), the FEC type fails to be verified, preventing BFD from going Up. To resolve this issue, configure the nil-fec parameter.

    7. (Optional) Run bfd tunnel { min-rx-interval receive-interval | min-tx-interval transmit-interval | detect-multiplier multiplier-value } *

      BFD parameters are set for SR tunnels.

    8. Run commit

      The configuration is committed.

  • Create a BFD session on the LDP side.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      The BFD view is displayed.

    3. Run mpls-passive

      The egress is enabled to create a BFD session passively.

      The egress has to receive an LSP ping request carrying a BFD TLV before creating a BFD session.

    4. Run quit

      Return to the system view.

    5. Run mpls

      The MPLS view is displayed.

    6. Run mpls bfd enable

The device is enabled to dynamically establish BFD sessions for MPLS.

    7. Run mpls bfd-trigger host

      Host IP routes are used as the policy for triggering LDP BFD sessions.

    8. Run commit

      The configuration is committed.

  • Configure a device that connects the LDP area to the SR area.
    1. Run system-view

      The system view is displayed.

    2. Run mpls

      The MPLS view is displayed.

    3. Run lsp-trigger segment-routing-interworking best-effort host

      A device is enabled to stitch SR LSPs to the proxy egress LSPs and transit LSPs that are established over non-local host routes with a 32-bit mask.

    4. Run commit

      The configuration is committed.

Verifying the Configuration

After successfully configuring BFD for SR-MPLS BE, run the display segment-routing bfd tunnel session [ prefix ip-address { mask | mask-length } ] command to check information about the BFD session that monitors SR LSPs.
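The only ingress-side difference from plain BFD for SR-MPLS BE is the nil-fec parameter, which skips FEC type verification on the egress. A sketch on the SR-side ingress (device name illustrative):

```
<PE1> system-view
[~PE1] segment-routing
[*PE1-segment-routing] bfd enable mode tunnel nil-fec
[*PE1-segment-routing] commit
```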

Configuring IS-IS SR-MPLS Flex-Algo

Flex-Algo allows an IGP to automatically calculate eligible paths based on the link cost, delay, or TE constraint to flexibly meet TE requirements.

Usage Scenario

Traditionally, IGPs can use only the SPF algorithm to calculate the shortest paths to destination addresses based on link costs. As the SPF algorithm is fixed and cannot be adjusted by users, the optimal paths cannot be calculated according to users' diverse requirements, such as the requirement for traffic forwarding along the lowest-delay path or without passing through certain links.

On a network, the constraints used for path calculation may differ. For example, because autonomous driving requires an ultra-low delay, an IGP needs to use delay as the constraint when calculating paths on such a network. Cost is another common constraint; for instance, links with high costs may need to be excluded from path calculation. These constraints can also be combined.

To make path calculation more flexible, users may want to customize IGP route calculation algorithms to meet their varying requirements. They can define an algorithm value to identify a fixed algorithm. When all devices on a network use the same algorithm, their calculation results are also the same, preventing loops. Since users, not standards organizations, are the ones to define these algorithms, they are called Flex-Algos.

When SR-MPLS uses an IGP to calculate paths, prefix SIDs can be associated with Flex-Algos to calculate SR-MPLS BE tunnels that meet different requirements.

Pre-configuration Tasks

Before configuring IS-IS SR-MPLS Flex-Algo, complete the following tasks:

Configuring Flex-Algo Link Attributes

Flex-Algo link attributes are used by the IGP to calculate Flex-Algo-based SR-MPLS BE tunnels (Flex-Algo LSPs). The IGP selects Flex-Algo links based on the corresponding FAD.

Context

Either of the following methods can be used to configure Flex-Algo link attributes:

  1. Inherit interface TE attributes.
  2. Configure Flex-Algo link attributes separately.

Procedure

  1. Configure the mappings between affinity bit names and values for links.
    1. Run system-view

      The system view is displayed.

    2. Run te attribute enable

      TE is enabled.

    3. Run path-constraint affinity-mapping

      An affinity name template is configured, and the affinity mapping view is displayed.

      This template must be configured on each node that is involved in path computation, and the same mappings between affinity bit values and names must be configured on each node.

    4. Run attribute bit-name bit-sequence bit-number

      Mappings between affinity bit values and names are configured.

      An affinity attribute has a total of 128 bits. This step configures only one bit. Repeat this step to configure more bits. You can configure some or all of the bits in the affinity attribute.

    5. Run quit

      Return to the system view.

  2. Configure Flex-Algo link attributes.

    Use either of the following methods:

    • Inherit interface TE attributes.
      1. Run interface interface-type interface-number

        The interface view is displayed.

      2. Run te link administrative group name bit-name &<1-32>

        The link administrative group attribute is configured. The bit-name value must be in the range specified in the affinity name template.

      3. Run te metric metric-value

        The TE metric of the link is configured.

      4. Run te link-attribute-application flex-algo

        The Flex-Algo link attribute application view is created and displayed.

      5. Run te inherit

        The interface TE attributes are inherited.

        After the te inherit command is run, the Flex-Algo link inherits the te metric and te link administrative group name command configurations on the interface.

In the Flex-Algo link attribute application view, the te inherit command is mutually exclusive with the metric and link administrative group name commands.

    • Configure Flex-Algo link attributes separately.
      1. Run interface interface-type interface-number

        The interface view is displayed.

      2. Run te link-attribute-application flex-algo

        The Flex-Algo link attribute application view is created and displayed.

      3. Run link administrative group name name-string &<1-128>

        The link administrative group attribute of the Flex-Algo link is configured. The name-string value must be in the range specified in the affinity name template.

      4. (Optional) Run delay delay-value

        The Flex-Algo link delay is configured.

      5. (Optional) Run metric metric-value

        The Flex-Algo link metric is configured.

  3. Run commit

    The configuration is committed.
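A sketch of the second method (configuring Flex-Algo link attributes separately). The affinity name green, the interface, the delay, and the metric are illustrative, and the view prompts are abbreviated:

```
<HUAWEI> system-view
[~HUAWEI] te attribute enable
[*HUAWEI] path-constraint affinity-mapping
[*HUAWEI-pc-af-map] attribute green bit-sequence 1
[*HUAWEI-pc-af-map] quit
[*HUAWEI] interface GigabitEthernet0/1/0
[*HUAWEI-GigabitEthernet0/1/0] te link-attribute-application flex-algo
[*HUAWEI-...-flex-algo] link administrative group name green
[*HUAWEI-...-flex-algo] delay 1000
[*HUAWEI-...-flex-algo] metric 10
[*HUAWEI-...-flex-algo] commit
```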

Configuring FADs

After a Flex-Algo is defined by a user based on service requirements, the IGP can use this Flex-Algo to calculate paths that meet the requirements.

Context

You can select one or two devices in the same IGP domain to configure flexible algorithm definitions (FADs). To improve reliability, you are advised to select two devices. A FAD is generally represented by a 3-tuple: Metric-Type, Calc-Type, and Constraints.

Because each user can individually define their Flex-Algos, the same Flex-Algo running on devices in the same IGP domain may have different FADs. To ensure that the path calculated using a specific Flex-Algo is loop-free, one FAD must be preferentially selected and advertised in the domain.

The detailed selection rules are as follows:

  1. In the advertisement domain of FADs, the FAD with the highest priority is preferred.

  2. If the FADs advertised in an IS-IS domain have the same priority, the FAD generated by the device with the largest system ID is preferred.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run flex-algo identifier flexAlgoId

    A Flex-Algo is created, and the Flex-Algo view is displayed.

  3. Run priority priority-value

    The priority of the Flex-Algo is configured.

    A larger value indicates a higher priority.

  4. Run metric-type { igp | delay | te }

    The Metric-Type of the Flex-Algo is configured.

    After this command is configured, links must have the corresponding metric type. Otherwise, these links will be pruned and cannot participate in IGP path calculation.

  5. Run affinity { include-all | include-any | exclude } { name-string } &<1-32>

    The affinity attribute of the Flex-Algo is configured.

    A FAD can constrain a path to include or exclude a link with a specific affinity attribute. The following three types of affinity attributes are defined for FADs:

    • Include-All Admin Group (include-all): A link is included in path calculation only if each link administrative group bit has the same name as the corresponding affinity bit.

    • Include-Any Admin Group (include-any): A link is included in path calculation if at least one link administrative group bit has the same name as an affinity bit.

    • Exclude Admin Group (exclude): A link is excluded from path calculation if any link administrative group bit has the same name as an affinity bit.

  6. Run commit

    The configuration is committed.
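For example, the following sketch defines a Flex-Algo that minimizes delay while staying on links tagged green. The identifier 128 (Flex-Algo IDs range from 128 to 255), priority, and affinity name are illustrative:

```
<HUAWEI> system-view
[~HUAWEI] flex-algo identifier 128
[*HUAWEI-flex-algo-128] priority 100
[*HUAWEI-flex-algo-128] metric-type delay
[*HUAWEI-flex-algo-128] affinity include-any green
[*HUAWEI-flex-algo-128] commit
```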

Configuring IS-IS Flex-Algo Functions

The Flex-Algos associated with the local prefix SIDs of a device as well as the FADs on the device all need to be advertised through IS-IS.

Context

After Flex-Algos are defined, each device can use an IGP to advertise their supported Flex-Algos and the specific calculation rules of a Flex-Algo through the FAD Sub-TLV.

These Flex-Algos can be associated with prefix SIDs during prefix SID configuration. The IGP can then advertise the Flex-Algos and prefix SIDs through the Prefix-SID Sub-TLV.

Procedure

  1. Enable SR-MPLS globally.
    1. Run system-view

      The system view is displayed.

    2. Run segment-routing

      SR is enabled, and the SR view is displayed.

    3. Run commit

      The configuration is committed.

    4. Run quit

      Return to the system view.

  2. Enable IS-IS to advertise Flex-Algos.
    1. Run system-view

      The system view is displayed.

    2. Run isis [ process-id ]

      The IS-IS view is displayed.

    3. Run network-entity net-addr

      A network entity title (NET) is configured.

    4. Run cost-style { wide | compatible | wide-compatible }

      The IS-IS wide metric attribute is configured.

    5. Run traffic-eng [ level-1 | level-2 | level-1-2 ]

      IS-IS TE is enabled.

    6. Run advertise link attributes

      The IS-IS process is enabled to advertise link attribute-related TLVs through LSPs. Link attributes include the IP address and interface index.

    7. (Optional) Run metric-delay advertisement enable [ level-1 | level-2 | level-1-2 ]

      Delay information advertisement is enabled.

      In a scenario where IS-IS Flex-Algos calculate paths based on delay information, you need to run this command to enable link delay advertisement through IS-IS.

    8. Run segment-routing mpls

      IS-IS SR-MPLS is enabled.

    9. Run segment-routing global-block begin-value end-value [ ignore-conflict ]

      An SR-MPLS-specific SRGB range is configured for the current IS-IS process.

      If a message is displayed indicating that a label in the specified SRGB range is in use, you can use the ignore-conflict parameter to enable configuration delivery. However, the configuration will not take effect until the device is restarted and the label is released. In general, using the ignore-conflict parameter is not recommended.

    10. Run flex-algo flexAlgoIdentifier [ level-1 | level-2 | level-1-2 ]

      IS-IS is enabled to advertise Flex-Algos.

    11. (Optional) Run flex-algo prefix-sid incr-prefix-cost

      The device is enabled to include the IS-IS cost of the loopback interface when calculating the cost of an IS-IS Flex-Algo SR-MPLS prefix SID route.

      By default, the device does not include the IS-IS cost of the loopback interface when calculating the cost of an IS-IS Flex-Algo SR-MPLS prefix SID route in the same IS-IS level. After you run the flex-algo prefix-sid incr-prefix-cost command, the cost of the prefix SID route includes the IS-IS cost of the loopback interface, regardless of the metric type used in Flex-Algo calculation.

      When a Huawei device interworks with a non-Huawei device, you can run this command to prevent loops caused by the difference in default route calculation behaviors.

    12. Run commit

      The configuration is committed.

    13. Run quit

      Return to the system view.

  3. Associate a Flex-Algo with a prefix SID.
    1. Run system-view

      The system view is displayed.

    2. Run interface loopback loopback-number

      A loopback interface is created, and its view is displayed.

    3. Run isis enable [ process-id ]

      IS-IS is enabled on the loopback interface.

    4. Run ip address ip-address { mask | mask-length }

      An IP address is configured for the loopback interface.

    5. Run isis prefix-sid { absolute sid-value | index index-value } [ node-disable ] flex-algo flex-algo-id

      A prefix SID is configured for the IP address of the loopback interface.

      The flex-algo flex-algo-id parameter allows you to associate the prefix SID with a specified Flex-Algo, so that IS-IS advertises the associated Flex-Algo when advertising the prefix SID.

    6. Run commit

      The configuration is committed.
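
Taken together, steps 1 to 3 correspond to a configuration sketch like the following. All values (process ID 1, the NET, SRGB range 16000-23999, Flex-Algo 128, loopback address 1.1.1.1/32, and SID index 10) are examples, not mandatory settings:

```
segment-routing
quit
isis 1
 network-entity 10.0000.0000.0001.00
 cost-style wide
 traffic-eng level-2
 advertise link attributes
 segment-routing mpls
 segment-routing global-block 16000 23999
 flex-algo 128 level-2
 quit
interface loopback 1
 isis enable 1
 ip address 1.1.1.1 255.255.255.255
 isis prefix-sid index 10 flex-algo 128
 commit
```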

Configuring the Color Extended Community Attribute

The color extended community attribute is added to routes based on route-policies. This attribute can be associated with Flex-Algos.

Context

The route coloring process is as follows:

  1. Configure a route-policy and set a specific color value for the desired route.

  2. Apply the route-policy to a BGP peer or a VPN instance as an import or export policy.

Procedure

  1. Configure a route-policy.
    1. Run system-view

      The system view is displayed.

    2. Run route-policy route-policy-name { deny | permit } node node

      A route-policy with a specified node is created, and the route-policy view is displayed.

    3. (Optional) Configure an if-match clause as a route-policy filter criterion. You can add or modify the Color Extended Community only for routes that match the filter criterion.

      For details about the configuration, see (Optional) Configuring an if-match Clause.

    4. Run apply extcommunity color color

      The Color extended community is configured.

    5. Run commit

      The configuration is committed.

  2. Apply the route-policy.
    • Apply the route-policy to a BGP IPv4 unicast peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv4-family unicast

        The IPv4 unicast address family view is displayed.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP4+ 6PE peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP4+ 6PE peer is created.

      4. Run ipv6-family unicast

        The IPv6 unicast address family view is displayed.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP4+ 6PE import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP VPNv4 peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv4-family vpnv4

        The BGP VPNv4 address family view is displayed.

      5. Run peer { ipv4-address | group-name } enable

        The BGP VPNv4 peer relationship is enabled.

      6. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      7. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP VPNv6 peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv6-family vpnv6

        The BGP VPNv6 address family view is displayed.

      5. Run peer { ipv4-address | group-name } enable

        The BGP VPNv6 peer relationship is enabled.

      6. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      7. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP EVPN peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run l2vpn-family evpn

        The BGP EVPN address family view is displayed.

      4. Run peer { ipv4-address | group-name } enable

        The BGP EVPN peer relationship is enabled.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP EVPN import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a VPN instance IPv4 address family.
      1. Run system-view

        The system view is displayed.

      2. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      3. Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      4. Run import route-policy policy-name

        An import route-policy is configured for the VPN instance IPv4 address family.

      5. Run export route-policy policy-name

        An export route-policy is configured for the VPN instance IPv4 address family.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a VPN instance IPv6 address family.
      1. Run system-view

        The system view is displayed.

      2. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      3. Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

      4. Run import route-policy policy-name

        An import route-policy is configured for the VPN instance IPv6 address family.

      5. Run export route-policy policy-name

        An export route-policy is configured for the VPN instance IPv6 address family.

      6. Run commit

        The configuration is committed.
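
As an illustration, the following sketch creates a route-policy that colors matching routes and applies it as an import policy to a BGP VPNv4 peer. The policy name, prefix list name, peer address, AS number, and the 0:100 color format are assumed example values:

```
route-policy color100 permit node 10
 if-match ip-prefix prefix1
 apply extcommunity color 0:100
quit
bgp 100
 peer 2.2.2.2 as-number 100
 ipv4-family vpnv4
  peer 2.2.2.2 enable
  peer 2.2.2.2 route-policy color100 import
  commit
```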

Configuring the Mapping Between the Color Extended Community Attribute and Flex-Algo

Service routes can be associated with Flex-Algo-based SR-MPLS BE tunnels only after mappings between their color attributes and Flex-Algos are configured.

Context

After the mapping between the color attribute and a Flex-Algo is created for a VPN route, the VPN route is recursed based on the corresponding tunnel policy. If the tunnel policy specifies a preferential use of Flex-Algo-based SR-MPLS BE tunnels, the VPN route is recursed to the target Flex-Algo-based SR-MPLS BE tunnel based on the next-hop address and color attribute of the route. If the tunnel policy specifies a preferential use of common SR-MPLS tunnels, the VPN route is recursed to the target common SR-MPLS BE tunnel based on the next-hop address of the route.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run flex-algo color-mapping

    The Flex-Algo-color mapping view is displayed.

  3. Run color index flex-algo flexAlgoId

    The mapping between the color attribute and Flex-Algo is configured.

  4. Run commit

    The configuration is committed.
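
For example, the following sketch maps color 100 to Flex-Algo 128 so that routes carrying color 100 can recurse to the corresponding Flex-Algo LSP (both values are examples):

```
flex-algo color-mapping
 color 100 flex-algo 128
 commit
```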

Configuring Traffic Steering into a Flex-Algo LSP

The traffic steering configuration allows you to recurse routes to Flex-Algo-based SR-MPLS BE tunnels (also called Flex-Algo LSPs) for traffic forwarding.

Context

After a Flex-Algo LSP is configured, traffic needs to be steered into the LSP for forwarding. Currently, Flex-Algo LSPs can be used for various routes, such as BGP and static routes, as well as various services, such as BGP L3VPN, BGP4+ 6PE, and EVPN services. This section describes how to use tunnel policies to recurse a service to a Flex-Algo LSP.

Pre-configuration Tasks

Before configuring traffic steering, complete the following tasks:

  • Configure BGP routes, static routes, BGP4+ 6PE services, BGP L3VPN services, BGP L3VPNv6 services, or EVPN services correctly.

  • Configure an IP prefix list and a tunnel policy to limit the number of routes that can recurse to a Flex-Algo LSP.

Procedure

  1. Configure a tunnel policy.
    1. Run system-view

      The system view is displayed.

    2. Run tunnel-policy policy-name

      A tunnel policy is created, and the tunnel policy view is displayed.

    3. (Optional) Run description description-information

      A description is configured for the tunnel policy.

    4. Run tunnel select-seq flex-algo-lsp load-balance-number load-balance-number [ unmix ]

      The tunnel selection sequence and number of tunnels for load balancing are configured.

    5. Run commit

      The configuration is committed.

    6. Run quit

      Exit the tunnel policy view.

  2. Configure services to recurse to Flex-Algo LSPs.
    • Configure non-labeled public BGP routes to recurse to Flex-Algo LSPs.

      For details about how to configure BGP, see Configuring Basic BGP Functions.

      1. Run route recursive-lookup tunnel [ ip-prefix ip-prefix-name ] tunnel-policy policy-name

        Non-labeled public BGP routes are configured to recurse to Flex-Algo LSPs.

      2. Run commit

        The configuration is committed.

    • Configure static routes to recurse to Flex-Algo LSPs.

      For details about how to configure a static route, see Configuring IPv4 Static Routes.

      1. Run ip route-static recursive-lookup tunnel [ ip-prefix ip-prefix-name ] tunnel-policy policy-name

        Static routes are configured to recurse to Flex-Algo LSPs for MPLS forwarding.

      2. Run commit

        The configuration is committed.

    • Configure a BGP L3VPN service to recurse to a Flex-Algo LSP.

      For details about how to configure a BGP L3VPN service, see Configuring a Basic BGP/MPLS IP VPN.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      3. Run tnl-policy policy-name

        A tunnel policy is applied to the VPN instance IPv4 address family.

      4. (Optional) Run default-color color-value

        The default color value is specified for the L3VPN service to be recursed to a Flex-Algo LSP. If a remote VPN route without the color extended community attribute is leaked to a local VPN instance, the default color value is used for the recursion to a Flex-Algo LSP.

      5. Run commit

        The configuration is committed.

    • Configure a BGP L3VPNv6 service to recurse to a Flex-Algo LSP.

      For details about how to configure a BGP L3VPNv6 service, see Configuring a Basic BGP/MPLS IPv6 VPN.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

      3. Run tnl-policy policy-name

        A tunnel policy is applied to the VPN instance IPv6 address family.

      4. (Optional) Run default-color color-value

        The default color value is specified for the L3VPNv6 service to be recursed to a Flex-Algo LSP. If a remote VPN route without the color extended community attribute is leaked to a local VPN instance, the default color value is used for the recursion to a Flex-Algo LSP.

      5. Run commit

        The configuration is committed.

    • Configure a BGP4+ 6PE service to recurse to a Flex-Algo LSP.

      For details about how to configure a BGP4+ 6PE service, see Configuring BGP4+ 6PE.

      1. Run bgp { as-number-plain | as-number-dot }

        The BGP view is displayed.

      2. Run ipv6-family unicast

        The BGP-IPv6 unicast address family view is displayed.

      3. Run peer ipv4-address enable

        The local device is enabled to exchange routes with a specified 6PE peer in the address family view.

      4. Run peer ipv4-address tnl-policy tnl-policy-name

        A tunnel policy is applied to the 6PE peer.

      5. Run commit

        The configuration is committed.

    • Configure an EVPN service to recurse to a Flex-Algo LSP.

      For details about how to configure an EVPN service, see Configuring EVPN VPLS over MPLS (BD EVPN Instance).

      To apply a tunnel policy to an EVPN L3VPN instance, perform the following steps:
      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family or ipv6-family

        The VPN instance IPv4/IPv6 address family view is displayed.

      3. Run tnl-policy policy-name evpn

        A tunnel policy is applied to the EVPN L3VPN instance.

      4. (Optional) Run default-color color-value evpn

        The default color value is specified for the EVPN L3VPN service to be recursed to a Flex-Algo LSP.

        If a remote EVPN route without the color extended community attribute is leaked to a local VPN instance, the default color value is used for the recursion to a Flex-Algo LSP.

      5. Run commit

        The configuration is committed.

      To apply a tunnel policy to a BD EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name bd-mode

        The BD EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the BD EVPN instance.

      3. (Optional) Run default-color color-value

        The default color value is specified for the EVPN service to be recursed to a Flex-Algo LSP. If a remote EVPN route without the color extended community attribute is leaked to a local EVPN instance, the default color value is used for the recursion to a Flex-Algo LSP.

      4. Run commit

        The configuration is committed.

      To apply a tunnel policy to an EVPN instance in EVPN VPWS mode, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name vpws

        The view of the EVPN instance in EVPN VPWS mode is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the EVPN instance in EVPN VPWS mode.

      3. (Optional) Run default-color color-value

        The default color value is specified for the EVPN service to be recursed to a Flex-Algo LSP. If a remote EVPN route without the color extended community attribute is leaked to a local EVPN instance, the default color value is used for the recursion to a Flex-Algo LSP.

      4. Run commit

        The configuration is committed.

      To apply a tunnel policy to a basic EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name

        The EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the basic EVPN instance.

      3. Run commit

        The configuration is committed.
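
As a minimal sketch, the following configuration creates a tunnel policy that prefers Flex-Algo LSPs and applies it, together with a default color, to the IPv4 address family of an assumed VPN instance named vpna (the policy name, instance name, and color value are examples):

```
tunnel-policy p1
 tunnel select-seq flex-algo-lsp load-balance-number 1
quit
ip vpn-instance vpna
 ipv4-family
  tnl-policy p1
  default-color 100
  commit
```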

Verifying the Configuration

After IS-IS SR-MPLS Flex-Algo is configured, you can view associated information.

Prerequisites

All IS-IS SR-MPLS Flex-Algo configurations are complete.

Procedure

  1. Run the display isis process-id flex-algo [ flex-algo-id ] [ level-1 | level-2 ] command to check the preferred FAD in the LSDB.
  2. Run the display isis lsdb [ { level-1 | level-2 } | verbose | { local | lsp-id | is-name symbolic-name } ] * [ process-id | vpn-instance vpn-instance-name ] command to check IS-IS LSDB information.
  3. Run the display isis process-id route ipv4 flex-algo [ flex-algo-id ] [ [ level-1 | level-2 ] | ip-address [ mask | mask-length ] ] * or display isis route [ process-id ] ipv4 flex-algo [ flex-algo-id ] [ [ level-1 | level-2 ] | ip-address [ mask | mask-length ] ] * command to check Flex-Algo-related IS-IS route information.
  4. Run the display isis [ process-id ] spf-tree [ systemid systemid ] flex-algo [ flex-algo-id ] [ [ level-1 | level-2 ] | verbose ] * command to check the SPF tree topology information of a specified Flex-Algo.
  5. Run the display segment-routing prefix mpls forwarding flex-algo [ flexAlgoId ] [ ip-prefix ip-prefix mask-length | label label ] [ verbose ] command to check the Flex-Algo-based SR label forwarding table.
  6. Run the display segment-routing state ip-prefix ip-prefix mask-length flex-algo flexAlgoId command to check the SR status of a specified Flex-Algo based on a specified IP prefix.

Configuring OSPF SR-MPLS Flex-Algo

Flex-Algo allows an IGP to automatically calculate eligible paths based on the link cost, delay, or TE constraint to flexibly meet TE requirements.

Usage Scenario

Traditionally, IGPs can use only the SPF algorithm to calculate the shortest paths to destination addresses based on link costs. As the SPF algorithm is fixed and cannot be adjusted by users, the optimal paths cannot be calculated according to users' diverse requirements, such as the requirement for traffic forwarding along the lowest-delay path or without passing through certain links.

On a network, the constraints used for path calculation may differ. For example, because autonomous driving requires an ultra-low delay, an IGP needs to use delay as the constraint when calculating paths on such a network. Cost is another possible constraint; for example, links with high costs may need to be excluded from path calculation. These constraints can also be combined.

To make path calculation more flexible, users may want to customize IGP route calculation algorithms to meet their varying requirements. They can define an algorithm value to identify a fixed algorithm. When all devices on a network use the same algorithm, their calculation results are also the same, preventing loops. Because these algorithms are defined by users rather than standards organizations, they are called flexible algorithms (Flex-Algos).

When SR-MPLS uses an IGP to calculate paths, prefix SIDs can be associated with Flex-Algos to calculate SR-MPLS BE tunnels that meet different requirements.

Pre-configuration Tasks

Before configuring OSPF SR-MPLS Flex-Algo, complete the following task:

  • Configure basic OSPF functions to ensure that devices are reachable at the network layer.

Configuring Flex-Algo Link Attributes

Flex-Algo link attributes are used by the IGP to calculate Flex-Algo-based SR-MPLS BE tunnels (Flex-Algo LSPs). The IGP selects Flex-Algo links based on the corresponding FAD.

Context

Either of the following methods can be used to configure Flex-Algo link attributes:

  1. Inherit interface TE attributes.
  2. Configure Flex-Algo link attributes separately.

Procedure

  1. Configure the mappings between affinity bit names and values for links.
    1. Run system-view

      The system view is displayed.

    2. Run te attribute enable

      TE is enabled.

    3. Run path-constraint affinity-mapping

      An affinity name template is configured, and the affinity mapping view is displayed.

      Configure this template on each node involved in path computation, and ensure that the mappings between affinity bit values and names are the same on all nodes.

    4. Run attribute bit-name bit-sequence bit-number

      Mappings between affinity bit values and names are configured.

      An affinity attribute has a total of 128 bits. This step configures only one bit. Repeat this step to configure more bits. You can configure some or all of the bits in the affinity attribute.

    5. Run quit

      Return to the system view.

  2. Configure Flex-Algo link attributes.

    Use either of the following methods:

    • Inherit interface TE attributes.
      1. Run interface interface-type interface-number

        The interface view is displayed.

      2. Run te link administrative group name bit-name &<1-32>

        The link administrative group attribute is configured. The bit-name value must be one of the names defined in the affinity name template.

      3. Run te metric metric-value

        The TE metric of the link is configured.

      4. Run te link-attribute-application flex-algo

        The Flex-Algo link attribute application view is created and displayed.

      5. Run te inherit

        The interface TE attributes are inherited.

        After the te inherit command is run, the Flex-Algo link inherits the te metric and te link administrative group name command configurations on the interface.

        In the Flex-Algo link attribute application view, the te inherit command is mutually exclusive with the metric and link administrative group name commands.

    • Configure Flex-Algo link attributes separately.
      1. Run interface interface-type interface-number

        The interface view is displayed.

      2. Run te link-attribute-application flex-algo

        The Flex-Algo link attribute application view is created and displayed.

      3. Run link administrative group name name-string &<1-128>

        The link administrative group attribute of the Flex-Algo link is configured. The name-string value must be one of the names defined in the affinity name template.

      4. (Optional) Run delay delay-value

        The Flex-Algo link delay is configured.

      5. (Optional) Run metric metric-value

        The Flex-Algo link metric is configured.

  3. Run commit

    The configuration is committed.
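
The second method (configuring Flex-Algo link attributes separately) can be sketched as follows. The interface name, affinity bit name red, and the delay and metric values are examples:

```
te attribute enable
path-constraint affinity-mapping
 attribute red bit-sequence 1
 quit
interface GigabitEthernet0/1/0
 te link-attribute-application flex-algo
  link administrative group name red
  delay 100
  metric 20
  commit
```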

Configuring FADs

After a Flex-Algo is defined by a user based on service requirements, an IGP can use this Flex-Algo to calculate paths that meet the requirements.

Context

You can select one or two devices in the same IGP domain to configure flexible algorithm definitions (FADs). To improve reliability, you are advised to select two devices. A FAD is generally represented by a 3-tuple: Metric-Type, Calc-Type, and Constraints.

Because each user can individually define their Flex-Algos, the same Flex-Algo running on devices in the same IGP domain may have different FADs. To ensure that the path calculated using a specific Flex-Algo is loop-free, one FAD must be preferentially selected and advertised in the domain.

The detailed selection rules are as follows:

  1. In the advertisement domain of FADs, the FAD with the highest priority is preferred.

  2. If the FADs advertised in an OSPF domain have the same priority, the FAD generated by the device with the largest router ID is preferred.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run flex-algo identifier flexAlgoId

    A Flex-Algo is created, and the Flex-Algo view is displayed.

  3. Run priority priority-value

    The priority of the Flex-Algo is configured.

    A larger value indicates a higher priority.

  4. Run metric-type { igp | delay | te }

    The Metric-Type of the Flex-Algo is configured.

    After this command is configured, links must have the corresponding metric type. Otherwise, these links will be pruned and cannot participate in IGP path calculation.

  5. Run affinity { include-all | include-any | exclude } { name-string } &<1-32>

    The affinity attribute of the Flex-Algo is configured.

    A FAD can constrain a path to include or exclude a link with a specific affinity attribute. The following three types of affinity attributes are defined for FADs:

    • Include-All Admin Group (include-all): A link is included in path calculation only if its link administrative group contains all of the specified affinity names.

    • Include-Any Admin Group (include-any): A link is included in path calculation if its link administrative group contains at least one of the specified affinity names.
    • Exclude Admin Group (exclude): A link is excluded from path calculation if its link administrative group contains any of the specified affinity names.

  6. Run commit

    The configuration is committed.
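
For example, the following sketch defines Flex-Algo 128 with priority 100, delay as the metric type, and an affinity constraint that admits links carrying the red affinity name (all values and the name red are examples):

```
flex-algo identifier 128
 priority 100
 metric-type delay
 affinity include-any red
 commit
```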

Configuring OSPF Flex-Algo Functions

The Flex-Algos associated with the local prefix SIDs of a device as well as the FADs on the device all need to be advertised through OSPF.

Context

Each device can use an IGP to advertise its supported Flex-Algos and the specific calculation rules of each Flex-Algo through the FAD Sub-TLV.

These Flex-Algos can be associated with prefix SIDs during prefix SID configuration. The IGP can then advertise the Flex-Algos and prefix SIDs through the Prefix-SID Sub-TLV.

Procedure

  1. Enable SR-MPLS globally.
    1. Run system-view

      The system view is displayed.

    2. Run segment-routing

      SR is enabled, and the SR view is displayed.

    3. Run commit

      The configuration is committed.

    4. Run quit

      Return to the system view.

  2. Enable OSPF to advertise Flex-Algos.
    1. Run system-view

      The system view is displayed.

    2. Run ospf [ process-id ]

      The OSPF view is displayed.

    3. Run opaque-capability enable

      The Opaque LSA capability is enabled.

    4. Run segment-routing mpls

      OSPF SR-MPLS is enabled.

    5. Run segment-routing global-block begin-value end-value [ ignore-conflict ]

      An SR-MPLS-specific SRGB range is configured for the current OSPF process.

      If a message is displayed indicating that a label in the specified SRGB range is in use, you can use the ignore-conflict parameter to enable configuration delivery. However, the configuration will not take effect until the device is restarted and the label is released. In general, using the ignore-conflict parameter is not recommended.

    6. Run flex-algo flexAlgoIdentifier

      The Flex-Algo advertisement capability is enabled for OSPF.

      The configuration in the OSPF view takes effect for all areas.

    7. Run advertise link-attributes application flex-algo

      The capability to advertise application-specific link attributes is configured.

      When Flex-Algos are used to compute paths based on constraints (such as link delay, TE attributes, and affinity attributes) to meet different requirements, run the advertise link-attributes application flex-algo command so that the link attributes used by Flex-Algos can be advertised through the Application-Specific Link Attributes (ASLA) sub-TLV in OSPF LSAs.

    8. Run area area-id

      The OSPF area view is displayed.

    9. Run flex-algo flexAlgoIdentifier [ disable ]

      The Flex-Algo advertisement capability is enabled for the OSPF area.

      The Flex-Algo advertisement capability can be configured in both the OSPF view and the OSPF area view, with the configuration in the OSPF area view taking precedence.

      • If this configuration exists in both the OSPF view and the OSPF area view, the one in the OSPF area view takes effect.
      • If this configuration exists only in the OSPF view, the OSPF area inherits the configuration from the OSPF view. If you do not want to advertise Flex-Algos in a specific area, specify the disable parameter.

    10. Run commit

      The configuration is committed.

    11. Run quit

      Return to the system view.

  3. Associate a Flex-Algo with a prefix SID.
    1. Run interface loopback loopback-number

      A loopback interface is created, and its view is displayed.

    2. Run ospf enable [ process-id ] area area-id

      OSPF is enabled on the interface.

    3. Run ip address ip-address { mask | mask-length }

      An IP address is configured for the loopback interface.

    4. Run ospf prefix-sid { absolute sid-value | index index-value } flex-algo flex-algo-id [ node-disable ]

      An SR-MPLS prefix SID is configured for the IP address of the loopback interface.

      The flex-algo flex-algo-id parameter allows you to associate the prefix SID with a specified Flex-Algo, so that OSPF advertises the associated Flex-Algo when advertising the prefix SID.

    5. Run commit

      The configuration is committed.
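
Taken together, steps 1 to 3 correspond to a configuration sketch like the following. The process ID, area, SRGB range, Flex-Algo ID, loopback address, and SID index are example values:

```
segment-routing
quit
ospf 1
 opaque-capability enable
 segment-routing mpls
 segment-routing global-block 16000 23999
 flex-algo 128
 advertise link-attributes application flex-algo
 quit
interface loopback 1
 ospf enable 1 area 0
 ip address 1.1.1.1 255.255.255.255
 ospf prefix-sid index 10 flex-algo 128
 commit
```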

Configuring the Color Extended Community Attribute

The color extended community attribute is added to routes based on route-policies. This attribute can be associated with Flex-Algos.

Context

The route coloring process is as follows:

  1. Configure a route-policy and set a specific color value for the desired route.

  2. Apply the route-policy to a BGP peer or a VPN instance as an import or export policy.

Procedure

  1. Configure a route-policy.
    1. Run system-view

      The system view is displayed.

    2. Run route-policy route-policy-name { deny | permit } node node

      A route-policy with a specified node is created, and the route-policy view is displayed.

    3. (Optional) Configure an if-match clause as a route-policy filter criterion. You can add or modify the Color Extended Community only for routes that match the filter criterion.

      For details about the configuration, see (Optional) Configuring an if-match Clause.

    4. Run apply extcommunity color color

      The Color extended community is configured.

    5. Run commit

      The configuration is committed.

  2. Apply the route-policy.
    • Apply the route-policy to a BGP IPv4 unicast peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv4-family unicast

        The IPv4 unicast address family view is displayed.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP4+ 6PE peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP4+ 6PE peer is created.

      4. Run ipv6-family unicast

        The IPv6 unicast address family view is displayed.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP4+ 6PE import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP VPNv4 peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv4-family vpnv4

        The BGP VPNv4 address family view is displayed.

      5. Run peer { ipv4-address | group-name } enable

        The BGP VPNv4 peer relationship is enabled.

      6. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      7. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP VPNv6 peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv6-family vpnv6

        The BGP VPNv6 address family view is displayed.

      5. Run peer { ipv4-address | group-name } enable

        The BGP VPNv6 peer relationship is enabled.

      6. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      7. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP EVPN peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run l2vpn-family evpn

        The BGP EVPN address family view is displayed.

      4. Run peer { ipv4-address | group-name } enable

        The BGP EVPN peer relationship is enabled.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP EVPN import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a VPN instance IPv4 address family.
      1. Run system-view

        The system view is displayed.

      2. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      3. Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      4. Run import route-policy policy-name

        An import route-policy is configured for the VPN instance IPv4 address family.

      5. Run export route-policy policy-name

        An export route-policy is configured for the VPN instance IPv4 address family.

      6. Run commit

        The configuration is committed.
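
      For example (the VPN instance name, RD, and policy names are illustrative; the route-policies are assumed to already exist):

        <HUAWEI> system-view
        [~HUAWEI] ip vpn-instance vpn1
        [*HUAWEI-vpn-instance-vpn1] ipv4-family
        [*HUAWEI-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
        [*HUAWEI-vpn-instance-vpn1-af-ipv4] import route-policy policy1
        [*HUAWEI-vpn-instance-vpn1-af-ipv4] export route-policy policy2
        [*HUAWEI-vpn-instance-vpn1-af-ipv4] commit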

    • Apply the route-policy to a VPN instance IPv6 address family.
      1. Run system-view

        The system view is displayed.

      2. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      3. Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

      4. Run import route-policy policy-name

        An import route-policy is configured for the VPN instance IPv6 address family.

      5. Run export route-policy policy-name

        An export route-policy is configured for the VPN instance IPv6 address family.

      6. Run commit

        The configuration is committed.

Configuring the Mapping Between the Color Extended Community Attribute and Flex-Algo

Service routes can be associated with Flex-Algo-based SR-MPLS BE tunnels only after mappings between their color attributes and Flex-Algos are configured.

Context

After the mapping between the color attribute and Flex-Algo is created for a VPN route, the VPN route is recursed based on the corresponding tunnel policy. If the tunnel policy specifies a preferential use of Flex-Algo-based SR-MPLS BE tunnels, the VPN route is recursed to the target Flex-Algo-based SR-MPLS BE tunnel based on the next hop address and color attribute of the route; if the tunnel policy specifies a preferential use of common SR-MPLS tunnels, the VPN route is recursed to the target common SR-MPLS BE tunnel based on the next hop address of the route.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run flex-algo color-mapping

    The Flex-Algo-color mapping view is displayed.

  3. Run color index flex-algo flexAlgoId

    The mapping between the color attribute and Flex-Algo is configured.

  4. Run commit

    The configuration is committed.
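
For example, to map color 100 to Flex-Algo 128 (both values are illustrative):

  <HUAWEI> system-view
  [~HUAWEI] flex-algo color-mapping
  [*HUAWEI-flex-algo-color-mapping] color 100 flex-algo 128
  [*HUAWEI-flex-algo-color-mapping] commit

With this mapping, a VPN route carrying color 100 can be recursed to the SR-MPLS BE tunnel established based on Flex-Algo 128.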

Configuring Traffic Steering into a Flex-Algo LSP

The traffic steering configuration allows you to recurse routes to Flex-Algo-based SR-MPLS BE tunnels (also called Flex-Algo LSPs) for traffic forwarding.

Context

After a Flex-Algo LSP is configured, traffic needs to be steered into the LSP for forwarding. Currently, Flex-Algo LSPs can be used for various routes, such as BGP and static routes, as well as various services, such as BGP L3VPN, BGP4+ 6PE, and EVPN services. This section describes how to use tunnel policies to recurse a service to a Flex-Algo LSP.

Pre-configuration Tasks

Before configuring traffic steering, complete the following tasks:

  • Configure BGP routes, static routes, BGP4+ 6PE services, BGP L3VPN services, BGP L3VPNv6 services, or EVPN services correctly.

  • Configure an IP prefix list and a tunnel policy to control the range of routes that can recurse to a Flex-Algo LSP.

Procedure

  1. Configure a tunnel policy.
    1. Run system-view

      The system view is displayed.

    2. Run tunnel-policy policy-name

      A tunnel policy is created, and the tunnel policy view is displayed.

    3. (Optional) Run description description-information

      A description is configured for the tunnel policy.

    4. Run tunnel select-seq flex-algo-lsp load-balance-number load-balance-number [ unmix ]

      The tunnel selection sequence and number of tunnels for load balancing are configured.

    5. Run commit

      The configuration is committed.

    6. Run quit

      Exit the tunnel policy view.
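
    For example, the following tunnel policy preferentially selects Flex-Algo LSPs and load-balances traffic over up to two of them (the policy name and load-balancing number are illustrative):

      <HUAWEI> system-view
      [~HUAWEI] tunnel-policy p1
      [*HUAWEI-tunnel-policy-p1] tunnel select-seq flex-algo-lsp load-balance-number 2
      [*HUAWEI-tunnel-policy-p1] commit
      [~HUAWEI-tunnel-policy-p1] quit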

  2. Configure services to recurse to Flex-Algo LSPs.
    • Configure non-labeled public BGP routes to recurse to Flex-Algo LSPs.

      For details about how to configure BGP, see Configuring Basic BGP Functions.

      1. Run route recursive-lookup tunnel [ ip-prefix ip-prefix-name ] tunnel-policy policy-name

        Non-labeled public BGP routes are configured to recurse to Flex-Algo LSPs.

      2. Run commit

        The configuration is committed.

    • Configure static routes to recurse to Flex-Algo LSPs.

      For details about how to configure a static route, see Configuring IPv4 Static Routes.

      1. Run ip route-static recursive-lookup tunnel [ ip-prefix ip-prefix-name ] tunnel-policy policy-name

        Static routes are configured to recurse to Flex-Algo LSPs for MPLS forwarding.

      2. Run commit

        The configuration is committed.

    • Configure a BGP L3VPN service to recurse to a Flex-Algo LSP.

      For details about how to configure a BGP L3VPN service, see Configuring a Basic BGP/MPLS IP VPN.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      3. Run tnl-policy policy-name

        A tunnel policy is applied to the VPN instance IPv4 address family.

      4. (Optional) Run default-color color-value

        The default color value is specified for the L3VPN service to be recursed to a Flex-Algo LSP. If a remote VPN route without the color extended community attribute is leaked to a local VPN instance, the default color value is used for the recursion to a Flex-Algo LSP.

      5. Run commit

        The configuration is committed.
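
      For example (the VPN instance name, tunnel policy name, and color value are illustrative; the tunnel policy is assumed to already exist):

        <HUAWEI> system-view
        [~HUAWEI] ip vpn-instance vpna
        [*HUAWEI-vpn-instance-vpna] ipv4-family
        [*HUAWEI-vpn-instance-vpna-af-ipv4] tnl-policy p1
        [*HUAWEI-vpn-instance-vpna-af-ipv4] default-color 100
        [*HUAWEI-vpn-instance-vpna-af-ipv4] commit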

    • Configure a BGP L3VPNv6 service to recurse to a Flex-Algo LSP.

      For details about how to configure a BGP L3VPNv6 service, see Configuring a Basic BGP/MPLS IPv6 VPN.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

      3. Run tnl-policy policy-name

        A tunnel policy is applied to the VPN instance IPv6 address family.

      4. (Optional) Run default-color color-value

        The default color value is specified for the L3VPNv6 service to be recursed to a Flex-Algo LSP. If a remote VPN route without the color extended community attribute is leaked to a local VPN instance, the default color value is used for the recursion to a Flex-Algo LSP.

      5. Run commit

        The configuration is committed.

    • Configure a BGP4+ 6PE service to recurse to a Flex-Algo LSP.

      For details about how to configure a BGP4+ 6PE service, see Configuring BGP4+ 6PE.

      1. Run bgp { as-number-plain | as-number-dot }

        The BGP view is displayed.

      2. Run ipv6-family unicast

        The BGP-IPv6 unicast address family view is displayed.

      3. Run peer ipv4-address enable

        The local device is enabled to exchange routes with a specified 6PE peer in the address family view.

      4. Run peer ipv4-address tnl-policy tnl-policy-name

        A tunnel policy is applied to the 6PE peer.

      5. Run commit

        The configuration is committed.

    • Configure an EVPN service to recurse to a Flex-Algo LSP.

      For details about how to configure an EVPN service, see Configuring EVPN VPLS over MPLS (BD EVPN Instance).

      To apply a tunnel policy to an EVPN L3VPN instance, perform the following steps:
      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family or ipv6-family

        The VPN instance IPv4/IPv6 address family view is displayed.

      3. Run tnl-policy policy-name evpn

        A tunnel policy is applied to the EVPN L3VPN instance.

      4. (Optional) Run default-color color-value evpn

        The default color value is specified for the EVPN L3VPN service to be recursed to a Flex-Algo LSP.

        If a remote EVPN route without the color extended community attribute is leaked to a local VPN instance, the default color value is used for the recursion to a Flex-Algo LSP.

      5. Run commit

        The configuration is committed.

      To apply a tunnel policy to a BD EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name bd-mode

        The BD EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the BD EVPN instance.

      3. (Optional) Run default-color color-value

        The default color value is specified for the EVPN service to be recursed to a Flex-Algo LSP. If a remote EVPN route without the color extended community attribute is leaked to a local EVPN instance, the default color value is used for the recursion to a Flex-Algo LSP.

      4. Run commit

        The configuration is committed.

      To apply a tunnel policy to an EVPN instance in EVPN VPWS mode, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name vpws

        The view of the EVPN instance in EVPN VPWS mode is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the EVPN instance in EVPN VPWS mode.

      3. (Optional) Run default-color color-value

        The default color value is specified for the EVPN service to be recursed to a Flex-Algo LSP. If a remote EVPN route without the color extended community attribute is leaked to a local EVPN instance, the default color value is used for the recursion to a Flex-Algo LSP.

      4. Run commit

        The configuration is committed.

      To apply a tunnel policy to a basic EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name

        The EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the basic EVPN instance.

      3. Run commit

        The configuration is committed.
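
      For example, to apply a tunnel policy to an EVPN L3VPN instance (the instance name, policy name, and color value are illustrative; the tunnel policy is assumed to already exist):

        <HUAWEI> system-view
        [~HUAWEI] ip vpn-instance vpna
        [*HUAWEI-vpn-instance-vpna] ipv4-family
        [*HUAWEI-vpn-instance-vpna-af-ipv4] tnl-policy p1 evpn
        [*HUAWEI-vpn-instance-vpna-af-ipv4] default-color 100 evpn
        [*HUAWEI-vpn-instance-vpna-af-ipv4] commit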

Verifying the Configuration

After configuring OSPF SR-MPLS Flex-Algo, you can check relevant information.

Prerequisites

All OSPF SR-MPLS Flex-Algo configurations are complete.

Procedure

  1. Run the display ospf process-id flex-algo [ flexAlgoIdentifier ] [ area { area-id-integer | area-id-ipv4 } ] command to check the preferred FAD in the LSDB.
  2. Run the display ospf [ process-id ] topology [ area { area-id | area-ipv4 } ] flex-algo [ flex-algo-id ] command to check information about the topology used for OSPF Flex-Algo to calculate routes.
  3. Run the display segment-routing prefix mpls forwarding flex-algo [ flexAlgoId ] [ ip-prefix ip-prefix mask-length | label label ] [ verbose ] command to check the Flex-Algo-based SR label forwarding table.
  4. Run the display segment-routing state ip-prefix ip-prefix mask-length flex-algo flexAlgoId command to check the SR status of a specified Flex-Algo based on a specified IP prefix.

Configuring SBFD for SR-MPLS BE Tunnel

This section describes how to configure SBFD for SR-MPLS BE to detect SR-MPLS BE tunnel faults.

Usage Scenario

With SBFD for SR-MPLS BE, applications such as VPN FRR can be triggered to perform fast traffic switching when the primary tunnel fails, minimizing the impact on services.

This configuration task applies to both common SR-MPLS BE tunnels and Flex-Algo-based SR-MPLS BE tunnels.

Pre-configuration Tasks

Before configuring SBFD for SR-MPLS BE tunnel, complete the following tasks:

  • Configure an SR-MPLS BE tunnel.

  • Run the mpls lsr-id lsr-id command to configure an LSR ID and ensure that the route from the peer to the local address specified using lsr-id is reachable.

Procedure

  • Configuring an SBFD Initiator
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is globally enabled.

      You can set BFD parameters only after running the bfd command to enable global BFD.

    3. Run quit

      Return to the system view.

    4. Run sbfd

      SBFD is enabled globally, and the SBFD view is displayed.

    5. (Optional) Run destination ipv4 ip-address remote-discriminator discriminator-value

      The mapping between the SBFD reflector IP address and discriminator is configured.

      On the device functioning as an SBFD initiator, if the mapping between the SBFD reflector IP address and discriminator is configured using the destination ipv4 remote-discriminator command, the initiator uses the configured discriminator to negotiate with the reflector in order to establish an SBFD session. If such a mapping is not configured, the SBFD initiator uses the reflector IP address as a discriminator by default to complete the negotiation.

      This step is optional. If it is performed, the value of discriminator-value must be the same as that of unsigned-integer-value in the reflector discriminator command configured on the reflector.

    6. Run quit

      Return to the system view.

    7. Run segment-routing

      The Segment Routing view is displayed.

    8. Run seamless-bfd enable mode tunnel [ filter-policy ip-prefix ip-prefix-name | effect-sr-lsp ] *

      SBFD is enabled for the SR-MPLS tunnel.

    9. (Optional) Run seamless-bfd tunnel { min-rx-interval receive-interval | min-tx-interval transmit-interval | detect-multiplier multiplier-value } *

      SBFD parameters are set for the SR-MPLS tunnel.

    10. (Optional) Run seamless-bfd flex-algo exclude { flex-algo-begin [ to flex-algo-end ] } &<1-10>

      Flex-Algos that do not require SBFD session establishment are excluded.

      After the seamless-bfd enable command is run, SBFD sessions are established for all common SR-MPLS BE tunnels and Flex-Algo-based SR-MPLS BE tunnels. If some Flex-Algo-based SR-MPLS BE tunnels do not require SBFD session establishment, run the preceding command to exclude the corresponding Flex-Algos.

    11. (Optional) Configure an IS-IS SBFD source address.

      In IS-IS multi-process scenarios, you can configure source addresses for SBFD sessions in different IS-IS processes.

      By default, the MPLS LSR ID is used as the source address of SBFD sessions. However, an LSR ID belongs to only one IS-IS process. In multi-process scenarios, LSR ID-based host routes must therefore be imported between processes; otherwise, SBFD cannot take effect. If IS-IS process isolation prevents such route import, the device must be able to establish SBFD sessions using a different source address in each IS-IS process.

      1. Run quit

        Return to the system view.

      2. Run isis [ process-id ]

        The IS-IS view is displayed.

      3. Run cost-style { wide | compatible | wide-compatible }

        The IS-IS wide metric is configured.

      4. Run segment-routing mpls

        IS-IS SR-MPLS is enabled.

      5. Run segment-routing sbfd source-address ip-address

        An SBFD source address is configured.

    12. Run commit

      The configuration is committed.

  • Configuring an SBFD Reflector
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is globally enabled.

      You can set BFD parameters only after running the bfd command to enable global BFD.

    3. Run quit

      Return to the system view.

    4. Run sbfd

      SBFD is enabled globally, and the SBFD view is displayed.

    5. Run reflector discriminator { unsigned-integer-value | ip-address-value }

      A discriminator is configured for the SBFD reflector.

    6. Run commit

      The configuration is committed.
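
  For example, with PE1 as the initiator and PE2 (LSR ID 3.3.3.3) as the reflector (the device names, address, and discriminator are illustrative; the discriminator must match on both ends):

    <PE1> system-view
    [~PE1] bfd
    [*PE1-bfd] quit
    [*PE1] sbfd
    [*PE1-sbfd] destination ipv4 3.3.3.3 remote-discriminator 123
    [*PE1-sbfd] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] seamless-bfd enable mode tunnel
    [*PE1-segment-routing] commit

    <PE2> system-view
    [~PE2] bfd
    [*PE2-bfd] quit
    [*PE2] sbfd
    [*PE2-sbfd] reflector discriminator 123
    [*PE2-sbfd] commit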

Verifying the Configuration

After successfully configuring SBFD for SR-MPLS BE tunnel, run the display segment-routing seamless-bfd tunnel session [ prefix ip-address [ mask | mask-length ] ] command to check information about the SBFD session that monitors the SR-MPLS BE tunnel.

Configuring an IS-IS SR-MPLS TE Tunnel (Path Computation on the Controller)

An SR-MPLS TE tunnel is configured on a forwarder. The forwarder delegates the tunnel to a controller. The controller generates labels and calculates a path.

Usage Scenario

Currently, each LSP of a TE tunnel is allocated a label, and both tunnel creation and LSP generation are completed by forwarders. This increases the burden on forwarders and consumes a large amount of their resources. To address this issue, SR-MPLS TE can be used to manually create tunnels. This method offers the following benefits:

  • A controller generates labels, reducing the burden on forwarders.
  • The controller computes paths and allocates a label to each route, reducing both the burden and resource usage on the forwarders. This allows the forwarders to focus on core forwarding tasks, improving forwarding performance.

Pre-configuration Tasks

Before configuring an SR-MPLS TE tunnel, complete the following tasks:

  • Configure IS-IS to implement network layer connectivity for LSRs.

A controller must be configured to generate labels and calculate an LSP path for an SR-MPLS TE tunnel to be established.

Enabling MPLS TE

Before configuring an SR-MPLS TE tunnel, you need to enable MPLS TE on each involved node in the SR-MPLS domain.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mpls lsr-id lsr-id

    An LSR ID is configured for the device.

    Note the following during LSR ID configuration:
    • Configuring LSR IDs is the prerequisite for all MPLS configurations.

    • LSRs do not have default IDs. LSR IDs must be manually configured.

    • Using a loopback interface address as the LSR ID is recommended for an LSR.

  3. Run mpls

    The MPLS view is displayed.

  4. (Optional) Run mpls te sid selection unprotected-only

    The device is enabled to consider only unprotected labels when computing paths for SR-MPLS TE tunnels.

    To enable the device to consider only unprotected labels during SR-MPLS TE tunnel path computation, perform this step.

  5. Run mpls te

    MPLS TE is enabled globally.

  6. (Optional) Enable interface-specific MPLS TE.

    In scenarios where the controller or ingress performs path computation, interface-specific MPLS TE must be enabled. In static explicit path scenarios, this step can be skipped.

    1. Run quit

      The system view is displayed.

    2. Run interface interface-type interface-number

      The view of the interface on which the MPLS TE tunnel is established is displayed.

    3. Run mpls

      MPLS is enabled on an interface.

    4. Run mpls te

      MPLS TE is enabled on the interface.

  7. Run commit

    The configuration is committed.
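
For example (the LSR ID and interface number are illustrative):

  <HUAWEI> system-view
  [~HUAWEI] mpls lsr-id 1.1.1.1
  [*HUAWEI] mpls
  [*HUAWEI-mpls] mpls te
  [*HUAWEI-mpls] quit
  [*HUAWEI] interface GigabitEthernet0/1/0
  [*HUAWEI-GigabitEthernet0/1/0] mpls
  [*HUAWEI-GigabitEthernet0/1/0] mpls te
  [*HUAWEI-GigabitEthernet0/1/0] commit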

Configuring TE Attributes

Configure TE attributes for links so that SR-MPLS TE paths can be adjusted based on the TE attributes during path computation.

Context

TE link attributes are as follows:

  • Link bandwidth

    This attribute must be configured if you want to limit the bandwidth of an SR-MPLS TE tunnel link.

  • Dynamic link bandwidth

    Dynamic bandwidth can be configured if you want TE to be aware of physical bandwidth changes on interfaces.

  • TE metric of a link

    Either the IGP metric or TE metric of a link can be used during SR-MPLS TE path computation. If the TE metric is used, SR-MPLS TE path computation is more independent of IGP, implementing flexible tunnel path control.

  • Administrative group and affinity attribute

    An SR-MPLS TE tunnel's affinity attribute determines its link attribute. The affinity attribute and link administrative group are used together to determine the links that can be used by the SR-MPLS TE tunnel.

  • SRLG

    A shared risk link group (SRLG) is a group of links that share a public physical resource, such as an optical fiber. Links in an SRLG are at the same risk of faults. If one of the links fails, other links in the SRLG also fail.

    An SRLG enhances SR-MPLS TE reliability on a network with CR-LSP hot standby or TE FRR enabled. Links that share the same physical resource have the same risk. For example, links on an interface and its sub-interfaces are in an SRLG. The interface and its sub-interfaces have the same risk. If the interface goes down, its sub-interfaces will also go down. Similarly, if the link of the primary path of an SR-MPLS TE tunnel and the links of the backup paths of the SR-MPLS TE tunnel are in an SRLG, the backup paths will most likely go down when the primary path goes down.

Procedure

  • (Optional) Configure link bandwidth.

    Link bandwidth needs to be configured only on outbound interfaces of SR-MPLS TE tunnel links that have bandwidth requirements.

    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te bandwidth max-reservable-bandwidth max-bw-value

      The maximum reservable link bandwidth is configured.

    4. Run mpls te bandwidth bc0 bc0-bw-value

      The BC0 bandwidth is configured.

      • The maximum reservable link bandwidth cannot be higher than the physical link bandwidth. You are advised to set the maximum reservable link bandwidth to be less than or equal to 80% of the physical link bandwidth.

      • The BC0 bandwidth cannot be higher than the maximum reservable link bandwidth.

    5. Run commit

      The configuration is committed.
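
    For example, on a GE interface (1,000,000 kbit/s of physical bandwidth), the maximum reservable bandwidth and BC0 bandwidth can be set to 80% of the physical bandwidth (the interface number and values are illustrative):

      <HUAWEI> system-view
      [~HUAWEI] interface GigabitEthernet0/1/0
      [*HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 800000
      [*HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth bc0 800000
      [*HUAWEI-GigabitEthernet0/1/0] commit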

  • (Optional) Configure dynamic link bandwidth.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te bandwidth max-reservable-bandwidth dynamic max-dynamic-bw-value

      The maximum reservable dynamic link bandwidth is configured.

      If this command is run in the same interface view as the mpls te bandwidth max-reservable-bandwidth command, the later configuration overrides the previous one.

    4. (Optional) Run mpls te bandwidth max-reservable-bandwidth dynamic baseline remain-bandwidth

      The device is configured to use the remaining bandwidth of the interface when calculating the maximum dynamic reservable bandwidth for TE.

      In scenarios such as channelized sub-interfaces and bandwidth leasing, the remaining bandwidth of an interface can change even though its physical bandwidth does not. The interface's actual forwarding capability decreases, but the maximum dynamic reservable bandwidth of the TE tunnel is still calculated based on the physical bandwidth. As a result, the calculated TE bandwidth exceeds the actual bandwidth, and the interface cannot meet the bandwidth requirement of the tunnel.

    5. Run mpls te bandwidth dynamic bc0 bc0-bw-percentage

      The BC0 dynamic bandwidth is configured for the link.

      If this command is run in the same interface view as the mpls te bandwidth bc0 command, the later configuration overrides the previous one.

    6. Run commit

      The configuration is committed.

  • (Optional) Configure a TE metric for a link.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te metric metric-value

      A TE metric is configured for the link.

    4. Run commit

      The configuration is committed.

  • (Optional) Configure the administrative group and affinity attribute in hexadecimal format.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te link administrative group group-value

      A TE link administrative group is configured.

    4. Run commit

      The configuration is committed.

  • (Optional) Configure the administrative group and affinity attribute based on the affinity and administrative group names.
    1. Run system-view

      The system view is displayed.

    2. Run path-constraint affinity-mapping

      An affinity name template is configured, and the template view is displayed.

      This template must be configured on each node involved in SR-MPLS TE path computation, and the global mappings between the names and values of affinity bits must be the same on all the involved nodes.

    3. Run attribute bit-name bit-sequence bit-number

      Mappings between affinity bit values and names are configured.

      This step configures only one bit of an affinity attribute, which has a total of 32 bits. Repeat this step as needed to configure some or all of the bits.

    4. Run quit

      Return to the system view.

    5. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    6. Run mpls te link administrative group name { name-string } &<1-32>

      A link administrative group is configured.

      The name-string value must be in the range specified for the affinity attribute in the template.

    7. Run commit

      The configuration is committed.
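
    For example, the following maps bit 1 to the name "green" and bit 2 to "red", and then references a name on an interface (the bit names and interface number are illustrative; the same template must be configured on every node involved in path computation):

      <HUAWEI> system-view
      [~HUAWEI] path-constraint affinity-mapping
      [*HUAWEI-pc-af-map] attribute green bit-sequence 1
      [*HUAWEI-pc-af-map] attribute red bit-sequence 2
      [*HUAWEI-pc-af-map] quit
      [*HUAWEI] interface GigabitEthernet0/1/0
      [*HUAWEI-GigabitEthernet0/1/0] mpls te link administrative group name green
      [*HUAWEI-GigabitEthernet0/1/0] commit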

  • (Optional) Configure an SRLG.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te srlg srlg-number

      The interface is added to an SRLG.

      In a hot-standby or TE FRR scenario, you need to configure SRLG attributes for the SR-MPLS TE outbound interface of the ingress and other member links in the SRLG to which the interface belongs. A link joins an SRLG only after SRLG attributes are configured for any outbound interface of the link.

      To delete the SRLG attribute configurations of all interfaces on the local node, run the undo mpls te srlg all-config command.

    4. Run commit

      The configuration is committed.

Enabling SR Globally

Enabling SR globally on forwarders is a prerequisite for SR-MPLS TE tunnel configuration.

Context

SR must be enabled globally before the SR function becomes available.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run segment-routing

    Segment Routing is enabled globally, and the Segment Routing view is displayed.

  3. (Optional) Run protected-adj-sid delete-delay delay-time

    A delay in deleting protected adjacency SIDs is configured.

    In SR-MPLS TE scenarios, if the forwarding entry of a protected adjacency SID is deleted immediately after the SID fails, the backup path becomes invalid. As such, a deletion delay needs to be configured so that the entry is deleted only after the delay elapses.

  4. (Optional) Run protected-adj-sid update-delay delay-time

    A delay in delivering protected adjacency SIDs to the forwarding table is configured.

    The protected-adj-sid update-delay command mainly applies to scenarios where the outbound interface associated with a protected adjacency SID changes from Down to Up. For example, if a neighbor fails, the local interface connected to the neighbor goes Down, and the protected adjacency SID associated with the interface becomes invalid, causing traffic to be switched to the backup path for forwarding.

    If the neighbor recovers but route convergence has not completed yet, the neighbor may not have the forwarding capability. In this situation, you can run the protected-adj-sid update-delay command to configure a delay in delivering the protected adjacency SID to the forwarding table, preventing traffic from being forwarded to the neighbor. This helps avoid packet loss.

  5. Run commit

    The configuration is committed.

Configuring Basic SR-MPLS TE Functions

This section describes how to configure basic SR-MPLS TE functions.

Context

SR-MPLS TE supports both strict and loose explicit paths. Strict explicit paths mainly use adjacency SIDs, whereas loose explicit paths use both adjacency and node SIDs. Adjacency and node SIDs must be generated before you configure an SR-MPLS TE tunnel.

Procedure

  1. Configure an SR-MPLS-specific SRGB range.
    1. Run system-view

      The system view is displayed.

    2. Run isis [ process-id ]

      The IS-IS view is displayed.

    3. Run network-entity net-addr

      A network entity title (NET) is configured.

    4. Run cost-style { wide | compatible | wide-compatible }

      The IS-IS wide metric function is enabled.

    5. Run traffic-eng [ level-1 | level-2 | level-1-2 ]

      IS-IS TE is enabled.

    6. Run segment-routing mpls

      IS-IS SR-MPLS is enabled.

    7. Run segment-routing global-block begin-value end-value [ ignore-conflict ]

      An SR-MPLS-specific SRGB range is configured for the current IS-IS instance.

      If a message is displayed indicating that a label in the specified SRGB range is in use, you can use the ignore-conflict parameter to enable configuration delivery. However, the configuration will not take effect until the device is restarted and the label is released. In general, using the ignore-conflict parameter is not recommended.

    8. (Optional) Run segment-routing auto-adj-sid protected

      The function of automatically generating dynamic protected adjacency SIDs is enabled.

    9. (Optional) Run segment-routing mpls static adj-sid advertise

      Advertisement of static adjacency SIDs is enabled.

      Typically, in SR-MPLS TE scenarios, adjacency SIDs are dynamically generated and advertised by IGP. Static adjacency SIDs are manually configured. To enable the device to advertise such SIDs, run the segment-routing mpls static adj-sid advertise command in the IS-IS view.

    10. Run commit

      The configuration is committed.

    11. Run quit

      Return to the system view.

  2. Configure an SR-MPLS prefix SID.
    1. Run interface loopback loopback-number

      A loopback interface is created, and the interface view is displayed.

    2. Run isis enable [ process-id ]

      IS-IS is enabled on the interface.

    3. Run ip address ip-address { mask | mask-length }

      An IP address is configured for the loopback interface.

    4. Run isis prefix-sid { absolute sid-value | index index-value } [ node-disable ]

      An SR-MPLS prefix SID is configured for the IP address of the interface.

    5. Run commit

      The configuration is committed.

    6. Run quit

      Return to the system view.

  3. (Optional) Configure an adjacency SID.

    After IS-IS SR is enabled, an adjacency SID is automatically generated. To disable the automatic generation of adjacency SIDs, run the segment-routing auto-adj-sid disable command. Dynamically generated adjacency SIDs may change after a device restart. If an explicit path uses such an adjacency SID and the associated device is restarted, the adjacency SID needs to be reconfigured. You can also manually configure an adjacency SID to facilitate the use of an explicit path.

    1. Run segment-routing

      The SR view is displayed.

    2. Run ipv4 adjacency local-ip-addr local-ip-address remote-ip-addr remote-ip-address sid sid-value [ vpn-instance vpn-name ] or ipv4 adjacency local-ip-addr local-ip-address remote-ip-addr remote-ip-address sid sid-value protected

      A static SR adjacency SID is configured.

      To enable the device to steer traffic to a VPN link based on the static adjacency SID, specify the vpn-instance vpn-name parameter.

      To configure an adjacency SID carrying the protection flag so that it can be protected by another adjacency SID, specify the protected parameter.

    3. Run commit

      The configuration is committed.
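Taken together, the preceding steps correspond to a configuration excerpt along the following lines. All values here (process ID, network entity, SRGB range, addresses, and SID values) are illustrative only, and the cost-style and traffic-eng settings are assumed from a typical IS-IS TE setup rather than taken from this section:

```
#
isis 1
 cost-style wide
 network-entity 10.0000.0000.0001.00
 traffic-eng level-2
 segment-routing mpls
 segment-routing global-block 160000 161000
 segment-routing mpls static adj-sid advertise
#
interface LoopBack1
 ip address 1.1.1.1 255.255.255.255
 isis enable 1
 isis prefix-sid index 10
#
segment-routing
 ipv4 adjacency local-ip-addr 10.1.1.1 remote-ip-addr 10.1.1.2 sid 330001
#
```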

Configuring IS-IS-based Topology Reporting

Before creating an SR-MPLS TE tunnel, enable IS-IS to report network topology information.

Context

Before an SR-MPLS TE tunnel is established, a forwarder must allocate labels, collect network topology information, and report that information to a controller so that the controller can use the information to compute a path and generate the corresponding label stack. SR-MPLS TE labels can be allocated by the controller or the extended IS-IS protocol on forwarders. Network topology information (including IS-IS-allocated labels) is collected by IS-IS and reported to the controller through BGP-LS.

IS-IS collects network topology information including the link cost, latency, and packet loss rate and advertises the information to BGP-LS, which then reports the information to a controller. The controller can compute an SR-MPLS TE tunnel based on link cost, latency, packet loss rate, and other factors to meet various service requirements.

Before the configuration, pay attention to the following points:

  • If the controller computes an SR-MPLS TE tunnel based on the link cost, no additional configuration is required.

  • If the controller computes an SR-MPLS TE tunnel based on the link latency, run the metric-delay advertisement enable command in the IS-IS view to configure the latency advertisement function.

Procedure

  1. Configure IS-IS to advertise network topology information to BGP-LS.

    Perform the following steps on one or more nodes of the SR-MPLS TE tunnel:

    A forwarder can report network-wide topology information to the controller after a BGP-LS peer relationship is established between the forwarder and the controller. Depending on the network scale, the following steps can be configured on one or multiple nodes.

    1. Run isis [ process-id ]

      The IS-IS view is displayed.

    2. Run bgp-ls enable [ level-1 | level-2 | level-1-2 ]

      IS-IS is enabled to advertise network topology information to BGP-LS.

      To configure IS-IS to advertise the topology information of Level-1 areas and filter out the route prefixes leaked from Level-2 areas to Level-1 areas, run the bgp-ls enable level-1 level-2-leaking-route-ignore command.

    3. Run quit

      Return to the system view.

    4. Run commit

      The configuration is committed.

  2. Configure a BGP-LS peer relationship between the forwarder and controller so that the forwarder can report topology information to the controller through BGP-LS.
    1. Run bgp { as-number-plain | as-number-dot }

      BGP is enabled, and the BGP view is displayed.

    2. Run peer ipv4-address as-number as-number-plain

      A BGP peer is created.

    3. Run link-state-family unicast

      BGP-LS is enabled, and the BGP-LS address family view is displayed.

    4. Run peer { group-name | ipv4-address } enable

      The device is enabled to exchange BGP-LS routing information with a specified peer or peer group.

    5. Run commit

      The configuration is committed.
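As an illustration, the two procedures above might yield an excerpt like the following; the IS-IS process ID, AS number, and controller address (10.2.2.2) are examples only:

```
#
isis 1
 bgp-ls enable level-2
#
bgp 100
 peer 10.2.2.2 as-number 100
 #
 link-state-family unicast
  peer 10.2.2.2 enable
#
```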

Configuring an SR-MPLS TE Tunnel Interface

A tunnel interface must be configured on an ingress so that the interface is used to establish and manage an SR-MPLS TE tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface tunnel tunnel-number

    A tunnel interface is created, and the tunnel interface view is displayed.

  3. Run either of the following commands to assign an IP address to the tunnel interface:

    • To configure the IP address of the tunnel interface, run ip address ip-address { mask | mask-length } [ sub ]

      The secondary IP address of the tunnel interface can be configured only after the primary IP address is configured.

    • To configure the tunnel interface to borrow the IP address of another interface, run ip address unnumbered interface interface-type interface-number

    The SR-MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP address for the tunnel interface is not recommended. The LSR ID of the ingress is generally used as the tunnel interface's IP address.

  4. Run tunnel-protocol mpls te

    MPLS TE is configured as a tunneling protocol.

  5. Run destination ip-address

    A tunnel destination address is configured, which is usually the LSR ID of the egress.

    Various types of tunnels require specific destination addresses. If a tunnel protocol is changed from another protocol to MPLS TE, a configured destination address is deleted automatically and a new destination address needs to be configured.

  6. Run mpls te tunnel-id tunnel-id

    A tunnel ID is set.

  7. Run mpls te signal-protocol segment-routing

    Segment Routing is configured as the signaling protocol of the TE tunnel.

  8. (Optional) Run mpls te sid selection unprotected-only

    The device is enabled to consider only unprotected labels when computing paths for SR-MPLS TE tunnels.

  9. Run mpls te pce delegate

    PCE server delegation is enabled so that the controller can calculate paths.

  10. (Optional) Run mpls te path verification enable

    Path verification for SR-MPLS TE tunnels is enabled. If a label fails, an LSP using this label is automatically set to Down.

    This function does not need to be configured if the controller or BFD is used.

    To enable this function globally, run the mpls te path verification enable command in the MPLS view.

  11. (Optional) Run the match dscp { ipv4 | ipv6 } { default | { dscp-value [ to dscp-value ] } &<1-32> } command to configure DSCP values for IPv4/IPv6 packets to enter the SR-MPLS TE tunnel.

    The DSCP configuration and mpls te service-class command configuration of an SR-MPLS TE tunnel are mutually exclusive.

  12. Run commit

    The configuration is committed.
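A minimal tunnel interface configuration following these steps might look as follows; the tunnel number, borrowed loopback interface, destination address (the egress LSR ID), and tunnel ID are all example values:

```
#
interface Tunnel1
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 3.3.3.3
 mpls te tunnel-id 1
 mpls te signal-protocol segment-routing
 mpls te pce delegate
#
```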

Configuring the UCMP Function of the SR-MPLS TE Tunnel

When multiple SR-MPLS TE tunnels are directed to the same downstream device, load balancing weights can be configured for the tunnels so that unequal cost multipath (UCMP) load balancing is performed among them.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run load-balance unequal-cost enable

    UCMP is enabled globally.

  3. Run interface tunnel tunnel-number

    The SR-MPLS TE tunnel interface view is displayed.

  4. Run load-balance unequal-cost weight weight

    A load balancing weight is set for the SR-MPLS TE tunnel. UCMP is then carried out among the tunnels based on their configured weights.

  5. Run commit

    The configuration is committed.
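Assuming an example tunnel Tunnel1 and a weight of 60, the UCMP configuration reduces to:

```
#
load-balance unequal-cost enable
#
interface Tunnel1
 load-balance unequal-cost weight 60
#
```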

(Optional) Configuring SR on a PCC

Configure SR on a PCE client (PCC), so that a controller can deliver path information to the PCC (forwarder) after path computation.

Context

SR is configured on a PCC (forwarder). Path computation is delegated to the controller. After the controller computes a path, the controller sends a PCEP message to deliver path information to the PCC.

The path information can also be delivered by a third-party adapter to the forwarder. In this situation, SR does not need to be configured on the PCC, and the associated step can be skipped.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run pce-client

    A PCE client is configured, and the PCE client view is displayed.

  3. Run capability segment-routing

    SR is enabled for the PCE client.

  4. Run connect-server ip-address

    A candidate PCE server is specified for the PCE client.

  5. Run commit

    The configuration is committed.
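The PCC-side configuration is short; assuming an example PCE server address of 10.3.3.3, it might look like this:

```
#
pce-client
 capability segment-routing
 connect-server 10.3.3.3
#
```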

(Optional) Enabling a Device to Simulate an SR-MPLS TE Transit Node to Perform Adjacency Label-based Forwarding

An SR-MPLS TE-incapable device on an SR-MPLS TE network must be configured to simulate an SR-MPLS TE transit node to perform adjacency label-based forwarding.

Context

An SR-MPLS TE-incapable device on an SR-MPLS TE network can be configured to simulate an SR-MPLS TE transit node to perform adjacency label-based forwarding. The function resolves the forwarding issue on the SR-MPLS TE-incapable device.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run sr-te-simulate static-cr-lsp transit lsp-name incoming-interface interface-type interface-number sid segmentid outgoing-interface interface-type interface-number nexthop next-hop-address out-label implicit-null

    A device is enabled to simulate an SR-MPLS TE transit node to perform adjacency label-based forwarding.

    To modify any parameter other than lsp-name, run the sr-te-simulate static-cr-lsp transit command again; there is no need to run the undo sr-te-simulate static-cr-lsp transit command first. This means that these parameters can be updated dynamically.

  3. Run commit

    The configuration is committed.
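As a sketch, the command might be issued as follows; the LSP name, interface names, SID, and next-hop address are hypothetical examples:

```
#
sr-te-simulate static-cr-lsp transit lsp1 incoming-interface GigabitEthernet0/1/0 sid 330001 outgoing-interface GigabitEthernet0/1/1 nexthop 10.1.2.2 out-label implicit-null
#
```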

Verifying the Configuration

After configuring an automatic SR-MPLS TE tunnel, verify information about the SR-MPLS TE tunnel and its status statistics.

Prerequisites

The SR-MPLS TE tunnel functions have been configured.

Procedure

  • Run the following commands to check the IS-IS TE status:
    • display isis traffic-eng advertisements [ lspId | local ] [ level-1 | level-2 | level-1-2 ] [ process-id | vpn-instance vpn-instance-name ]
    • display isis traffic-eng statistics [ process-id | vpn-instance vpn-instance-name ]
  • Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-name ] [ { incoming-interface | interface | outgoing-interface } interface-type interface-number ] [ verbose ] command to check tunnel information.
  • Run the display mpls te tunnel statistics or display mpls sr-te-lsp command to check LSP statistics.
  • Run the display mpls te tunnel-interface [ tunnel tunnel-number ] command to check information about a tunnel interface on the ingress.
  • (Optional) If the label stack depth exceeds the upper limit supported by a forwarder, the controller needs to divide the label stack of an entire path into multiple stacks. After the controller assigns a stitching label to a stitching node, run the display mpls te stitch-label-stack command to check information about the label stack mapped to the stitching label.

Configuring an OSPF SR-MPLS TE Tunnel (Path Computation on the Controller)

An SR-MPLS TE tunnel is configured on a forwarder. The forwarder delegates the tunnel to a controller. The controller generates labels and calculates a path.

Usage Scenario

Currently, a label is allocated to each LSP of a TE tunnel, and tunnel creation and LSP generation are both completed by forwarders. This increases the burden on forwarders and consumes a large amount of forwarder resources. To address this issue, SR-MPLS TE can be used to manually create tunnels. This method offers the following benefits:

  • A controller generates labels, reducing the burden on forwarders.
  • The controller computes paths and allocates a label to each route, reducing both the burden and resource usage on the forwarders. As such, the forwarders can focus more on core forwarding tasks, improving forwarding performance.

Pre-configuration Tasks

Before configuring an SR-MPLS TE tunnel, complete the following tasks:

  • Configure OSPF to implement LSR connectivity at the network layer.

  • Configure a controller to generate labels and calculate an LSP path for the SR-MPLS TE tunnel to be established.

Enabling MPLS TE

Before configuring an SR-MPLS TE tunnel, you need to enable MPLS TE on each involved node in the SR-MPLS domain.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mpls lsr-id lsr-id

    An LSR ID is configured for the device.

    Note the following during LSR ID configuration:
    • Configuring LSR IDs is the prerequisite for all MPLS configurations.

    • LSRs do not have default IDs. LSR IDs must be manually configured.

    • Using a loopback interface address as the LSR ID is recommended for an LSR.

  3. Run mpls

    The MPLS view is displayed.

  4. (Optional) Run mpls te sid selection unprotected-only

    The device is enabled to consider only unprotected labels when computing paths for SR-MPLS TE tunnels.

  5. Run mpls te

    MPLS TE is enabled globally.

  6. (Optional) Enable interface-specific MPLS TE.

    In scenarios where the controller or ingress performs path computation, interface-specific MPLS TE must be enabled. In static explicit path scenarios, this step can be ignored.

    1. Run quit

      The system view is displayed.

    2. Run interface interface-type interface-number

      The view of the interface on which the MPLS TE tunnel is established is displayed.

    3. Run mpls

      MPLS is enabled on an interface.

    4. Run mpls te

      MPLS TE is enabled on the interface.

  7. Run commit

    The configuration is committed.
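For reference, enabling MPLS TE on a node might look as follows; the LSR ID (a loopback interface address) and the interface name are example values:

```
#
mpls lsr-id 1.1.1.1
#
mpls
 mpls te
#
interface GigabitEthernet0/1/0
 mpls
 mpls te
#
```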

Configuring TE Attributes

Configure TE attributes for links so that SR-MPLS TE paths can be adjusted based on the TE attributes during path computation.

Context

TE link attributes are as follows:

  • Link bandwidth

    This attribute must be configured if you want to limit the bandwidth of an SR-MPLS TE tunnel link.

  • Dynamic link bandwidth

    Dynamic bandwidth can be configured if you want TE to be aware of physical bandwidth changes on interfaces.

  • TE metric of a link

    Either the IGP metric or TE metric of a link can be used during SR-MPLS TE path computation. If the TE metric is used, SR-MPLS TE path computation is more independent of IGP, implementing flexible tunnel path control.

  • Administrative group and affinity attribute

    The affinity attribute of an SR-MPLS TE tunnel is matched against the administrative group of each link to determine which links the tunnel can use.

  • SRLG

    A shared risk link group (SRLG) is a group of links that share a public physical resource, such as an optical fiber. Links in an SRLG are at the same risk of faults. If one of the links fails, other links in the SRLG also fail.

    An SRLG enhances SR-MPLS TE reliability on a network with CR-LSP hot standby or TE FRR enabled. Links that share the same physical resource are exposed to the same risk. For example, an interface and its sub-interfaces are in an SRLG; if the interface goes down, its sub-interfaces also go down. Similarly, if the link of the primary path and the links of the backup paths of an SR-MPLS TE tunnel are in the same SRLG, the backup paths will most likely go down when the primary path goes down.

Procedure

  • (Optional) Configure link bandwidth.

    Link bandwidth needs to be configured only on outbound interfaces of SR-MPLS TE tunnel links that have bandwidth requirements.

    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te bandwidth max-reservable-bandwidth max-bw-value

      The maximum reservable link bandwidth is configured.

    4. Run mpls te bandwidth bc0 bc0-bw-value

      The BC0 bandwidth is configured.

      • The maximum reservable link bandwidth cannot be higher than the physical link bandwidth. You are advised to set the maximum reservable link bandwidth to be less than or equal to 80% of the physical link bandwidth.

      • The BC0 bandwidth cannot be higher than the maximum reservable link bandwidth.

    5. Run commit

      The configuration is committed.

  • (Optional) Configure dynamic link bandwidth.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te bandwidth max-reservable-bandwidth dynamic max-dynamic-bw-value

      The maximum reservable dynamic link bandwidth is configured.

      If this command is run in the same interface view as the mpls te bandwidth max-reservable-bandwidth command, the later configuration overrides the previous one.

    4. (Optional) Run mpls te bandwidth max-reservable-bandwidth dynamic baseline remain-bandwidth

      The device is configured to use the remaining bandwidth of the interface when calculating the maximum dynamic reservable bandwidth for TE.

      In scenarios such as channelized sub-interface and bandwidth lease, the remaining bandwidth of an interface changes, but the physical bandwidth does not. In this case, the actual forwarding capability of the interface decreases; however, the dynamic maximum reservable bandwidth of the TE tunnel is still calculated based on the physical bandwidth. As a result, the calculated TE bandwidth is greater than the actual bandwidth, and the actual forwarding capability of the interface does not meet the bandwidth requirement of the tunnel.

    5. Run mpls te bandwidth dynamic bc0 bc0-bw-percentage

      The BC0 dynamic bandwidth is configured for the link.

      If this command is run in the same interface view as the mpls te bandwidth bc0 command, the later configuration overrides the previous one.

    6. Run commit

      The configuration is committed.

  • (Optional) Configure a TE metric for a link.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te metric metric-value

      A TE metric is configured for the link.

    4. Run commit

      The configuration is committed.

  • (Optional) Configure the administrative group and affinity attribute in hexadecimal format.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te link administrative group group-value

      A TE link administrative group is configured.

    4. Run commit

      The configuration is committed.

  • (Optional) Configure the administrative group and affinity attribute based on the affinity and administrative group names.
    1. Run system-view

      The system view is displayed.

    2. Run path-constraint affinity-mapping

      An affinity name template is configured, and the template view is displayed.

      This template must be configured on each node involved in SR-MPLS TE path computation, and the global mappings between the names and values of affinity bits must be the same on all the involved nodes.

    3. Run attribute bit-name bit-sequence bit-number

      Mappings between affinity bit values and names are configured.

      This step configures only one bit of an affinity attribute, which has a total of 32 bits. Repeat this step as needed to configure some or all of the bits.

    4. Run quit

      Return to the system view.

    5. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    6. Run mpls te link administrative group name { name-string } &<1-32>

      A link administrative group is configured.

      The name-string value must be in the range specified for the affinity attribute in the template.

    7. Run commit

      The configuration is committed.

  • (Optional) Configure an SRLG.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te srlg srlg-number

      The interface is added to an SRLG.

      In a hot-standby or TE FRR scenario, you need to configure SRLG attributes for the SR-MPLS TE outbound interface of the ingress and other member links in the SRLG to which the interface belongs. A link joins an SRLG only after SRLG attributes are configured for any outbound interface of the link.

      To delete the SRLG attribute configurations of all interfaces on the local node, run the undo mpls te srlg all-config command.

    4. Run commit

      The configuration is committed.
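The TE attributes above can be combined on a link's outbound interface as in the following sketch. The interface name, bandwidth values (in kbit/s, here 80% of a 1 Gbit/s link), metric, affinity bit names, and SRLG number are all illustrative assumptions:

```
#
path-constraint affinity-mapping
 attribute green bit-sequence 1
 attribute red bit-sequence 2
#
interface GigabitEthernet0/1/0
 mpls te bandwidth max-reservable-bandwidth 800000
 mpls te bandwidth bc0 800000
 mpls te metric 20
 mpls te link administrative group name green
 mpls te srlg 100
#
```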

Globally Enabling the SR Capability

Globally enabling the SR capability on forwarders is a prerequisite for configuring an SR-MPLS TE tunnel.

Context

SR must be enabled globally before the SR function becomes available.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run segment-routing

    SR is enabled globally.

  3. Run commit

    The configuration is committed.

Configuring Basic SR-MPLS TE Functions

This section describes how to configure basic SR-MPLS TE functions.

Context

SR-MPLS TE supports both strict and loose explicit paths. Strict explicit paths mainly use adjacency SIDs, whereas loose explicit paths use both adjacency and node SIDs. Adjacency and node SIDs must be generated before you configure an SR-MPLS TE tunnel.

Procedure

  1. Configure an SR-MPLS-specific SRGB range.
    1. Run system-view

      The system view is displayed.

    2. Run ospf [ process-id ]

      The OSPF view is displayed.

    3. Run opaque-capability enable

      The Opaque LSA capability is enabled.

    4. Run segment-routing mpls

      OSPF SR-MPLS is enabled.

    5. Run segment-routing global-block begin-value end-value [ ignore-conflict ]

      An SR-MPLS-specific OSPF SRGB range is configured.

      If a message is displayed indicating that a label in the specified SRGB range is in use, you can use the ignore-conflict parameter to enable configuration delivery. However, the configuration will not take effect until the device is restarted and the label is released. In general, using the ignore-conflict parameter is not recommended.

    6. Run area area-id

      The OSPF area view is displayed.

    7. Run mpls-te enable [ standard-complying ]

      TE is enabled in the current OSPF area.

    8. Run commit

      The configuration is committed.

    9. Run quit

      Return to the system view.

  2. Configure an SR-MPLS prefix SID.
    1. Run interface loopback loopback-number

      A loopback interface is created, and the interface view is displayed.

    2. Run ospf enable [ process-id ] area area-id

      OSPF is enabled on the interface.

    3. Run ip address ip-address { mask | mask-length }

      An IP address is configured for the loopback interface.

    4. Run ospf prefix-sid { absolute sid-value | index index-value } [ node-disable ]

      An SR-MPLS prefix SID is configured for the IP address of the interface.

    5. Run commit

      The configuration is committed.
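The two OSPF procedures above can be summarized in an excerpt like the following; the process ID, SRGB range, area, loopback address, and prefix SID index are example values:

```
#
ospf 1
 opaque-capability enable
 segment-routing mpls
 segment-routing global-block 160000 161000
 area 0.0.0.0
  mpls-te enable
#
interface LoopBack1
 ip address 1.1.1.1 255.255.255.255
 ospf enable 1 area 0.0.0.0
 ospf prefix-sid index 10
#
```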

Configuring OSPF-based Topology Reporting

Before creating an SR-MPLS TE tunnel, enable OSPF to report network topology information.

Context

Before an SR-MPLS TE tunnel is established, a device must assign labels, collect network topology information, and report the information to the controller so that the controller uses the information to calculate a path and a label stack for the path. SR-MPLS TE labels can be assigned by the controller or the extended OSPF protocol on forwarders. Network topology information (including labels allocated by OSPF) is collected by OSPF and reported to the controller through BGP-LS.

OSPF collects network topology information including the link cost, latency, and packet loss rate and advertises the information to BGP-LS, which then reports the information to a controller. The controller can compute an SR-MPLS TE tunnel based on link cost, latency, and other factors to meet various service requirements.

Before the configuration, pay attention to the following points:

  • If the controller computes an SR-MPLS TE tunnel based on the link cost, no additional configuration is required.

  • If the controller computes an SR-MPLS TE tunnel based on the link latency, run the metric-delay advertisement enable command in the OSPF view to configure the latency advertisement function.

Procedure

  1. Configure OSPF to advertise network topology information to BGP-LS.

    Perform the following steps on one or more nodes of an SR-MPLS TE tunnel:

    A forwarder can report network-wide topology information to the controller after a BGP-LS peer relationship is established between the forwarder and the controller. Depending on the network scale, the following steps can be configured on one or multiple nodes.

    1. Run ospf [ process-id ]

      The OSPF view is displayed.

    2. Run bgp-ls enable

      OSPF is enabled to advertise network topology information to BGP-LS.

    3. Run quit

      Return to the system view.

    4. Run commit

      The configuration is committed.

  2. Configure a BGP-LS peer relationship between the forwarder and controller so that the forwarder can report topology information to the controller through BGP-LS.
    1. Run bgp { as-number-plain | as-number-dot }

      BGP is enabled, and the BGP view is displayed.

    2. Run peer ipv4-address as-number as-number-plain

      A BGP peer is created.

    3. Run link-state-family unicast

      BGP-LS is enabled, and the BGP-LS address family view is displayed.

    4. Run peer { group-name | ipv4-address } enable

      The device is enabled to exchange BGP-LS routing information with a specified peer or peer group.

    5. Run commit

      The configuration is committed.

Configuring an SR-MPLS TE Tunnel Interface

A tunnel interface must be configured on an ingress so that the interface is used to establish and manage an SR-MPLS TE tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface tunnel tunnel-number

    A tunnel interface is created, and the tunnel interface view is displayed.

  3. Run either of the following commands to assign an IP address to the tunnel interface:

    • To configure the IP address of the tunnel interface, run ip address ip-address { mask | mask-length } [ sub ]

      The secondary IP address of the tunnel interface can be configured only after the primary IP address is configured.

    • To configure the tunnel interface to borrow the IP address of another interface, run ip address unnumbered interface interface-type interface-number

    The SR-MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP address for the tunnel interface is not recommended. The LSR ID of the ingress is generally used as the tunnel interface's IP address.

  4. Run tunnel-protocol mpls te

    MPLS TE is configured as a tunneling protocol.

  5. Run destination ip-address

    A tunnel destination address is configured, which is usually the LSR ID of the egress.

    Various types of tunnels require specific destination addresses. If a tunnel protocol is changed from another protocol to MPLS TE, a configured destination address is deleted automatically and a new destination address needs to be configured.

  6. Run mpls te tunnel-id tunnel-id

    A tunnel ID is set.

  7. Run mpls te signal-protocol segment-routing

    Segment Routing is configured as the signaling protocol of the TE tunnel.

  8. (Optional) Run mpls te sid selection unprotected-only

    The device is enabled to consider only unprotected labels when computing paths for SR-MPLS TE tunnels.

  9. Run mpls te pce delegate

    PCE server delegation is enabled so that the controller can calculate paths.

  10. (Optional) Run mpls te path verification enable

    Path verification for SR-MPLS TE tunnels is enabled. If a label fails, an LSP using this label is automatically set to Down.

    This function does not need to be configured if the controller or BFD is used.

    To enable this function globally, run the mpls te path verification enable command in the MPLS view.

  11. (Optional) Run the match dscp { ipv4 | ipv6 } { default | { dscp-value [ to dscp-value ] } &<1-32> } command to configure DSCP values for IPv4/IPv6 packets to enter the SR-MPLS TE tunnel.

    The DSCP configuration and mpls te service-class command configuration of an SR-MPLS TE tunnel are mutually exclusive.

  12. Run commit

    The configuration is committed.

Configuring the UCMP Function of the SR-MPLS TE Tunnel

When multiple SR-MPLS TE tunnels are directed to the same downstream device, load balancing weights can be configured for the tunnels so that unequal cost multipath (UCMP) load balancing is performed among them.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run load-balance unequal-cost enable

    UCMP is enabled globally.

  3. Run interface tunnel tunnel-number

    The SR-MPLS TE tunnel interface view is displayed.

  4. Run load-balance unequal-cost weight weight

    A load balancing weight is set for the SR-MPLS TE tunnel. UCMP is then carried out among the tunnels based on their configured weights.

  5. Run commit

    The configuration is committed.

(Optional) Configuring SR on a PCC

Configure SR on a PCE client (PCC), so that a controller can deliver path information to the PCC (forwarder) after path computation.

Context

SR is configured on a PCC (forwarder). Path computation is delegated to the controller. After the controller computes a path, the controller sends a PCEP message to deliver path information to the PCC.

The path information can also be delivered by a third-party adapter to the forwarder. In this situation, SR does not need to be configured on the PCC, and the associated step can be skipped.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run pce-client

    A PCE client is configured, and the PCE client view is displayed.

  3. Run capability segment-routing

    SR is enabled for the PCE client.

  4. Run connect-server ip-address

    A candidate PCE server is specified for the PCE client.

  5. Run commit

    The configuration is committed.

(Optional) Enabling a Device to Simulate an SR-MPLS TE Transit Node to Perform Adjacency Label-based Forwarding

An SR-MPLS TE-incapable device on an SR-MPLS TE network must be configured to simulate an SR-MPLS TE transit node to perform adjacency label-based forwarding.

Context

An SR-MPLS TE-incapable device on an SR-MPLS TE network can be configured to simulate an SR-MPLS TE transit node to perform adjacency label-based forwarding. The function resolves the forwarding issue on the SR-MPLS TE-incapable device.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run sr-te-simulate static-cr-lsp transit lsp-name incoming-interface interface-type interface-number sid segmentid outgoing-interface interface-type interface-number nexthop next-hop-address out-label implicit-null

    A device is enabled to simulate an SR-MPLS TE transit node to perform adjacency label-based forwarding.

    To modify any parameter other than lsp-name, run the sr-te-simulate static-cr-lsp transit command again; there is no need to run the undo sr-te-simulate static-cr-lsp transit command first. This means that these parameters can be updated dynamically.

  3. Run commit

    The configuration is committed.

Verifying the OSPF SR-MPLS TE Tunnel Configuration

After configuring an automatic SR-MPLS TE tunnel, verify information about the SR-MPLS TE tunnel and its status statistics.

Prerequisites

The SR-MPLS TE tunnel functions have been configured.

Procedure

  1. Run the display ospf [ process-id ] segment-routing routing [ ip-address [ mask | mask-length ] ] command to check OSPF SR-MPLS routing table information.
  2. Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-name ] [ { incoming-interface | interface | outgoing-interface } interface-type interface-number ] [ verbose ] command to check tunnel information.
  3. Run the display mpls te tunnel statistics or display mpls sr-te-lsp command to check LSP statistics.
  4. Run the display mpls te tunnel-interface [ tunnel tunnel-number ] command to check information about a tunnel interface on the ingress.

Configuring an IS-IS SR-MPLS TE Tunnel (Explicit Path Used)

If no controller is deployed to compute paths, an explicit path can be manually configured to implement Segment Routing MPLS Traffic Engineering (SR-MPLS TE).

Usage Scenario

SR-MPLS TE is a new TE tunneling technology that uses SR as a control protocol. If no controller is deployed for path computation, an explicit path can be manually configured to achieve SR-MPLS TE.

Pre-configuration Tasks

Before configuring an IS-IS SR-MPLS TE tunnel, configure IS-IS to implement network layer connectivity.

Enabling MPLS TE

Before configuring an SR-MPLS TE tunnel, you need to enable MPLS TE on each involved node in the SR-MPLS domain.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mpls lsr-id lsr-id

    An LSR ID is configured for the device.

    Note the following during LSR ID configuration:
    • Configuring LSR IDs is the prerequisite for all MPLS configurations.

    • LSRs do not have default IDs. LSR IDs must be manually configured.

    • Using a loopback interface address as the LSR ID is recommended for an LSR.

  3. Run mpls

    The MPLS view is displayed.

  4. (Optional) Run mpls te sid selection unprotected-only

    The device is enabled to consider only unprotected labels when computing paths for SR-MPLS TE tunnels.

    To enable the device to consider only unprotected labels during SR-MPLS TE tunnel path computation, perform this step.

  5. Run mpls te

    MPLS TE is enabled globally.

  6. (Optional) Enable interface-specific MPLS TE.

    In scenarios where the controller or ingress performs path computation, interface-specific MPLS TE must be enabled. In static explicit path scenarios, this step can be ignored.

    1. Run quit

      The system view is displayed.

    2. Run interface interface-type interface-number

      The view of the interface on which the MPLS TE tunnel is established is displayed.

    3. Run mpls

      MPLS is enabled on an interface.

    4. Run mpls te

      MPLS TE is enabled on the interface.

  7. Run commit

    The configuration is committed.
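
A minimal sketch of the steps above. The LSR ID and interface number are hypothetical; in static explicit path scenarios, the interface-level mpls and mpls te commands can be omitted:

```
<HUAWEI> system-view
[~HUAWEI] mpls lsr-id 1.1.1.1
[*HUAWEI] mpls
[*HUAWEI-mpls] mpls te
[*HUAWEI-mpls] quit
[*HUAWEI] interface gigabitethernet 0/1/0
[*HUAWEI-GigabitEthernet0/1/0] mpls
[*HUAWEI-GigabitEthernet0/1/0] mpls te
[*HUAWEI-GigabitEthernet0/1/0] commit
```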

Enabling SR Globally

Enabling SR globally on forwarders is a prerequisite for SR-MPLS TE tunnel configuration.

Context

SR must be enabled globally before the SR function becomes available.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run segment-routing

    Segment Routing is enabled globally, and the Segment Routing view is displayed.

  3. (Optional) Run protected-adj-sid delete-delay delay-time

    A delay in deleting protected adjacency SIDs is configured.

    In SR-MPLS TE scenarios, the corresponding entry cannot be deleted immediately when a protected adjacency SID fails. Otherwise, the backup path becomes invalid. As such, a delay needs to be configured to perform delayed entry deletion.

  4. (Optional) Run protected-adj-sid update-delay delay-time

    A delay in delivering protected adjacency SIDs to the forwarding table is configured.

    The protected-adj-sid update-delay command mainly applies to scenarios where the outbound interface associated with a protected adjacency SID changes from Down to Up. For example, if a neighbor fails, the local interface connected to the neighbor goes Down, and the protected adjacency SID associated with the interface becomes invalid, causing traffic to be switched to the backup path for forwarding.

    If the neighbor recovers but route convergence has not completed yet, the neighbor may not have the forwarding capability. In this situation, you can run the protected-adj-sid update-delay command to configure a delay in delivering the protected adjacency SID to the forwarding table, preventing traffic from being forwarded to the neighbor. This helps avoid packet loss.

  5. Run commit

    The configuration is committed.
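
A minimal sketch of the steps above. The two delay values are hypothetical examples, not recommendations; both commands are optional:

```
<HUAWEI> system-view
[~HUAWEI] segment-routing
[*HUAWEI-segment-routing] protected-adj-sid delete-delay 300
[*HUAWEI-segment-routing] protected-adj-sid update-delay 30
[*HUAWEI-segment-routing] commit
```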

Configuring Basic SR-MPLS TE Functions

This section describes how to configure basic SR-MPLS TE functions.

Context

SR-MPLS TE uses strict and loose explicit paths. Strict explicit paths use adjacency SIDs, and loose explicit paths use adjacency and node SIDs. Before an SR-MPLS TE tunnel is configured, the adjacency and node SIDs must be configured.

Procedure

  1. Configure an SR-MPLS-specific SRGB range.
    1. Run system-view

      The system view is displayed.

    2. Run isis [ process-id ]

      The IS-IS view is displayed.

    3. Run network-entity net-addr

      A network entity title (NET) is configured.

    4. Run cost-style { wide | compatible | wide-compatible }

      The IS-IS wide metric function is enabled.

    5. Run traffic-eng [ level-1 | level-2 | level-1-2 ]

      IS-IS TE is enabled.

    6. Run segment-routing mpls

      IS-IS SR-MPLS is enabled.

    7. Run segment-routing global-block begin-value end-value [ ignore-conflict ]

      An SR-MPLS-specific SRGB range is configured for the current IS-IS instance.

      If a message is displayed indicating that a label in the specified SRGB range is in use, you can use the ignore-conflict parameter to enable configuration delivery. However, the configuration will not take effect until the device is restarted and the label is released. In general, using the ignore-conflict parameter is not recommended.

    8. (Optional) Run segment-routing auto-adj-sid protected

      The function of automatically generating dynamic protected adjacency SIDs is enabled.
    9. (Optional) Run segment-routing mpls static adj-sid advertise

      Advertisement of static adjacency SIDs is enabled.

      Typically, in SR-MPLS TE scenarios, adjacency SIDs are dynamically generated and advertised by IGP. Static adjacency SIDs are manually configured. To enable the device to advertise such SIDs, run the segment-routing mpls static adj-sid advertise command in the IS-IS view.

    10. Run commit

      The configuration is committed.

    11. Run quit

      Return to the system view.

  2. Configure an SR-MPLS prefix SID.
    1. Run interface loopback loopback-number

      A loopback interface is created, and the loopback interface view is displayed.

    2. Run isis enable [ process-id ]

      IS-IS is enabled on the interface.

    3. Run ip address ip-address { mask | mask-length }

      The IP address is configured for the loopback interface.

    4. Run isis prefix-sid { absolute sid-value | index index-value } [ node-disable ]

      An SR-MPLS prefix SID is configured for the IP address of the interface.

    5. Run commit

      The configuration is committed.

    6. Run quit

      Return to the system view.

  3. (Optional) Configure an adjacency SID.

    After IS-IS SR is enabled, an adjacency SID is automatically generated. To disable the automatic generation of adjacency SIDs, run the segment-routing auto-adj-sid disable command. The automatically generated adjacency SID may change after a device restart. If an explicit path uses such an adjacency SID and the associated device is restarted, this adjacency SID must be reconfigured. You can also manually configure an adjacency SID to facilitate the use of an explicit path.

    1. Run segment-routing

      The Segment Routing view is displayed.

    2. Run ipv4 adjacency local-ip-addr local-ip-address remote-ip-addr remote-ip-address sid sid-value [ vpn-instance vpn-name ] or ipv4 adjacency local-ip-addr local-ip-address remote-ip-addr remote-ip-address sid sid-value protected

      A static SR adjacency SID is configured.

      To enable the device to steer traffic to a VPN link based on the static adjacency SID, specify the vpn-instance vpn-name parameter.

      To configure an adjacency SID carrying the protection flag so that it can be protected by another adjacency SID, specify the protected parameter.

    3. Run commit

      The configuration is committed.
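
Putting the steps above together, a minimal sketch follows. The IS-IS process ID, NET, SRGB range, loopback address, prefix SID index, and adjacency addresses and SID are all hypothetical:

```
<HUAWEI> system-view
[~HUAWEI] isis 1
[*HUAWEI-isis-1] network-entity 10.0000.0000.0001.00
[*HUAWEI-isis-1] cost-style wide
[*HUAWEI-isis-1] traffic-eng level-2
[*HUAWEI-isis-1] segment-routing mpls
[*HUAWEI-isis-1] segment-routing global-block 160000 161000
[*HUAWEI-isis-1] quit
[*HUAWEI] interface loopback 1
[*HUAWEI-LoopBack1] isis enable 1
[*HUAWEI-LoopBack1] ip address 1.1.1.1 32
[*HUAWEI-LoopBack1] isis prefix-sid index 10
[*HUAWEI-LoopBack1] quit
[*HUAWEI] segment-routing
[*HUAWEI-segment-routing] ipv4 adjacency local-ip-addr 10.1.1.1 remote-ip-addr 10.1.1.2 sid 330000
[*HUAWEI-segment-routing] commit
```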

Configuring an SR-MPLS TE Explicit Path

An explicit path over which an SR-MPLS TE tunnel is to be established is configured on the ingress. You can specify node or link labels for the explicit path.

Context

An explicit path is a vector path comprised of a series of nodes that are arranged in the configuration sequence. The path through which an SR-MPLS TE LSP passes can be planned by specifying next-hop labels or next-hop IP addresses on an explicit path. Generally, an IP address specified on an explicit path is the IP address of an interface. An explicit path in use can be dynamically updated.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run explicit-path path-name

    An explicit path is created, and the explicit path view is displayed.

  3. Select one of the following methods to configure an explicit path:

    • Run the next sid label label-value type { adjacency | prefix | binding-sid } [ index index-value ] command to specify a next-hop label for the explicit path.
    • Specify a next-hop address for the explicit path.

      In this scenario, the mpls te cspf command must be run on the ingress to enable CSPF.

      1. Run the next hop ipv4 [ include [ [ strict | loose ] | [ incoming | outgoing ] ] * | exclude ] [ index index-value ] command to specify a next-hop node for the explicit path.

        The include parameter indicates that an LSP must pass through a specified node. The exclude parameter indicates that an LSP cannot pass through a specified node.

      2. (Optional) Run the add hop ipv4 [ include [ [ strict | loose ] | [ incoming | outgoing ] ] * | exclude ] { after | before } contrastIpv4 command to add a node to the explicit path.

      3. (Optional) Run the modify hop contrastIpv4 ipv4 [ include [ [ strict | loose ] | [ incoming | outgoing ] ] * | exclude ] command to change the address of a node to allow another specified node to be used by the explicit path.

      4. (Optional) Run the delete hop ipv4 command to remove a node from the explicit path.
    • Specify both the next-hop label and IP address for the explicit path.

      In this scenario, the mpls te cspf command must be run on the ingress to enable CSPF.

      1. Run the next sid label label-value type { adjacency | prefix | binding-sid } or next hop ip-address [ include [ [ strict | loose ] | [ incoming | outgoing ] ] * | exclude ] [ index index-value ] command to specify a next-hop label or the next node for the explicit path.
      2. (Optional) Run the add sid label label-value type { adjacency | prefix | binding-sid } { after | before } { hop contrastIpv4 | index indexValue | sid label contrastLabel index indexValue } or add hop ipv4 [ include [ [ strict | loose ] | [ incoming | outgoing ] ] * | exclude ] { after | before } { hop contrastIpv4 | index indexValue | sid label contrastLabel index indexValue } command to add a label or node to the explicit path.
      3. (Optional) Run the modify hop contrastIpv4 to { hop ipv4 [ exclude | include [ [ strict | loose ] | [ incoming | outgoing ] ] * ] | sid label label-value type { adjacency | prefix } } or modify index indexValue to { hop ipv4 [ exclude | include [ [ strict | loose ] | [ incoming | outgoing ] ] * ] | sid label label-value type { adjacency | prefix | binding-sid } } or modify sid label contrastLabel index indexValue to { hop ipv4 [ exclude | include [ [ strict | loose ] | [ incoming | outgoing ] ] * ] | sid label label-value type { adjacency | prefix | binding-sid } } command to modify a label or node of the explicit path.
      4. (Optional) Run the delete { sid label labelValue index indexValue | index indexValue } or delete hop ip-address command to delete a label or node from the explicit path.

  4. Run commit

    The configuration is committed.
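
A minimal sketch of a strict explicit path built from next-hop labels. The path name and label values are hypothetical and must match the adjacency SIDs actually configured or advertised on the network:

```
<HUAWEI> system-view
[~HUAWEI] explicit-path path1
[*HUAWEI-explicit-path-path1] next sid label 330000 type adjacency
[*HUAWEI-explicit-path-path1] next sid label 330001 type adjacency
[*HUAWEI-explicit-path-path1] commit
```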

Configuring an SR-MPLS TE Tunnel Interface

A tunnel interface must be configured on an ingress so that the interface is used to establish and manage an SR-MPLS TE tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface tunnel tunnel-number

    A tunnel interface is created, and the tunnel interface view is displayed.

  3. Run either of the following commands to assign an IP address to the tunnel interface:

    • To configure the IP address of the tunnel interface, run the ip address ip-address { mask | mask-length } [ sub ] command.

      The secondary IP address of the tunnel interface can be configured only after the primary IP address is configured.

    • To configure the tunnel interface to borrow the IP address of another interface, run the ip address unnumbered interface interface-type interface-number command.

    Because an SR-MPLS TE tunnel is unidirectional, no peer IP address is required. Configuring a separate IP address for the tunnel interface is not recommended; the LSR ID of the ingress is generally borrowed as the tunnel interface's IP address.

  4. Run tunnel-protocol mpls te

    MPLS TE is configured as a tunneling protocol.

  5. Run destination ip-address

    A tunnel destination address is configured, which is usually the LSR ID of the egress.

    Various types of tunnels require specific destination addresses. If a tunnel protocol is changed from another protocol to MPLS TE, a configured destination address is deleted automatically and a new destination address needs to be configured.

  6. Run mpls te tunnel-id tunnel-id

    A tunnel ID is set.

  7. Run mpls te signal-protocol segment-routing

    Segment Routing is configured as the signaling protocol of the TE tunnel.

  8. (Optional) Run mpls te sid selection unprotected-only

    The device is enabled to consider only unprotected labels when computing paths for SR-MPLS TE tunnels.

    To enable the device to consider only unprotected labels during SR-MPLS TE tunnel path computation, perform this step.

  9. Run mpls te path explicit-path path-name [ secondary ]

    An explicit path is configured for the tunnel.

    The path-name value must be the same as that specified in the explicit-path path-name command.

  10. (Optional) Run mpls te path verification enable

    Path verification for SR-MPLS TE tunnels is enabled. If a label fails, an LSP using this label is automatically set to Down.

    This function does not need to be configured if the controller or BFD is used.

    To enable this function globally, run the mpls te path verification enable command in the MPLS view.

  11. (Optional) Run the match dscp { ipv4 | ipv6 } { default | { dscp-value [ to dscp-value ] } &<1-32> } command to configure DSCP values for IPv4/IPv6 packets to enter the SR-MPLS TE tunnel.

    The DSCP configuration and mpls te service-class command configuration of an SR-MPLS TE tunnel are mutually exclusive.

  12. Run commit

    The configuration is committed.
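
The steps above can be sketched as follows. The tunnel number, destination address (the LSR ID of the egress), tunnel ID, and explicit path name are hypothetical; the explicit path name must match one created with the explicit-path command:

```
<HUAWEI> system-view
[~HUAWEI] interface tunnel 1
[*HUAWEI-Tunnel1] ip address unnumbered interface loopback 1
[*HUAWEI-Tunnel1] tunnel-protocol mpls te
[*HUAWEI-Tunnel1] destination 4.4.4.4
[*HUAWEI-Tunnel1] mpls te tunnel-id 1
[*HUAWEI-Tunnel1] mpls te signal-protocol segment-routing
[*HUAWEI-Tunnel1] mpls te path explicit-path path1
[*HUAWEI-Tunnel1] commit
```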

Configuring the UCMP Function of the SR-MPLS TE Tunnel

When multiple SR-MPLS TE tunnels are directed to the same downstream device, load-balancing weights can be configured to implement unequal-cost multipath (UCMP) load balancing among the SR-MPLS TE tunnels.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run load-balance unequal-cost enable

    UCMP is enabled globally.

  3. Run interface tunnel tunnel-number

    The SR-MPLS TE tunnel interface view is displayed.

  4. Run load-balance unequal-cost weight weight

    A UCMP weight is set for the SR-MPLS TE tunnel. Traffic is then balanced among the tunnels in proportion to their weights.

  5. Run commit

    The configuration is committed.
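
A minimal sketch of the steps above, assuming a tunnel interface Tunnel1 already exists; the weight value is hypothetical:

```
<HUAWEI> system-view
[~HUAWEI] load-balance unequal-cost enable
[*HUAWEI] interface tunnel 1
[*HUAWEI-Tunnel1] load-balance unequal-cost weight 60
[*HUAWEI-Tunnel1] commit
```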

Verifying the Configuration

After configuring an SR-MPLS TE tunnel, verify information about the SR-MPLS TE tunnel and its status statistics.

Prerequisites

The SR-MPLS TE tunnel functions have been configured.

Procedure

  • Run the following commands to check the IS-IS TE status:
    • display isis traffic-eng advertisements [ lspId | local ] [ level-1 | level-2 | level-1-2 ] [ process-id | vpn-instance vpn-instance-name ]
    • display isis traffic-eng statistics [ process-id | vpn-instance vpn-instance-name ]
  • Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-name ] [ { incoming-interface | interface | outgoing-interface } interface-type interface-number ] [ verbose ] command to check tunnel information.
  • Run the display mpls te tunnel statistics or display mpls sr-te-lsp command to check LSP statistics.
  • Run the display mpls te tunnel-interface [ tunnel tunnel-number ] command to check information about a tunnel interface on the ingress.
  • (Optional) If the label stack depth exceeds the upper limit supported by a forwarder, the controller needs to divide the label stack of an entire path into multiple stacks. After the controller assigns a stitching label to a stitching node, run the display mpls te stitch-label-stack command to check information about the label stack mapped to the stitching label.

Configuring an IS-IS SR-MPLS TE Tunnel (Path Computation on a Forwarder)

If no controller is deployed to compute paths, CSPF can be configured on the ingress to perform SR-MPLS TE.

Usage Scenario

SR-MPLS TE is a new TE tunneling technology that uses SR as a control protocol. If no controller is deployed to compute paths, CSPF can be configured on the ingress to perform SR-MPLS TE.

Pre-configuration Tasks

Before configuring an IS-IS SR-MPLS TE tunnel, configure IS-IS to implement network layer connectivity.

Enabling MPLS TE

Before configuring an SR-MPLS TE tunnel, you need to enable MPLS TE on each involved node in the SR-MPLS domain.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mpls lsr-id lsr-id

    An LSR ID is configured for the device.

    Note the following during LSR ID configuration:
    • Configuring LSR IDs is the prerequisite for all MPLS configurations.

    • LSRs do not have default IDs. LSR IDs must be manually configured.

    • Using a loopback interface address as the LSR ID is recommended for an LSR.

  3. Run mpls

    The MPLS view is displayed.

  4. (Optional) Run mpls te sid selection unprotected-only

    The device is enabled to consider only unprotected labels when computing paths for SR-MPLS TE tunnels.

    To enable the device to consider only unprotected labels during SR-MPLS TE tunnel path computation, perform this step.

  5. Run mpls te

    MPLS TE is enabled globally.

  6. (Optional) Enable interface-specific MPLS TE.

    In scenarios where the controller or ingress performs path computation, interface-specific MPLS TE must be enabled. In static explicit path scenarios, this step can be ignored.

    1. Run quit

      The system view is displayed.

    2. Run interface interface-type interface-number

      The view of the interface on which the MPLS TE tunnel is established is displayed.

    3. Run mpls

      MPLS is enabled on an interface.

    4. Run mpls te

      MPLS TE is enabled on the interface.

  7. Run commit

    The configuration is committed.

Configuring TE Attributes

Configure TE attributes for links so that SR-MPLS TE paths can be adjusted based on the TE attributes during path computation.

Context

TE link attributes are as follows:

  • Link bandwidth

    This attribute must be configured if you want to limit the bandwidth of an SR-MPLS TE tunnel link.

  • Dynamic link bandwidth

    Dynamic bandwidth can be configured if you want TE to be aware of physical bandwidth changes on interfaces.

  • TE metric of a link

    Either the IGP metric or TE metric of a link can be used during SR-MPLS TE path computation. If the TE metric is used, SR-MPLS TE path computation is more independent of IGP, implementing flexible tunnel path control.

  • Administrative group and affinity attribute

    An SR-MPLS TE tunnel's affinity attribute determines its link attribute. The affinity attribute and link administrative group are used together to determine the links that can be used by the SR-MPLS TE tunnel.

  • SRLG

    A shared risk link group (SRLG) is a group of links that share a public physical resource, such as an optical fiber. Links in an SRLG are at the same risk of faults. If one of the links fails, other links in the SRLG also fail.

    An SRLG enhances SR-MPLS TE reliability on a network with CR-LSP hot standby or TE FRR enabled. Links that share the same physical resource have the same risk. For example, links on an interface and its sub-interfaces are in an SRLG. The interface and its sub-interfaces have the same risk. If the interface goes down, its sub-interfaces will also go down. Similarly, if the link of the primary path of an SR-MPLS TE tunnel and the links of the backup paths of the SR-MPLS TE tunnel are in an SRLG, the backup paths will most likely go down when the primary path goes down.

Procedure

  • (Optional) Configure link bandwidth.

    Link bandwidth needs to be configured only on outbound interfaces of SR-MPLS TE tunnel links that have bandwidth requirements.

    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te bandwidth max-reservable-bandwidth max-bw-value

      The maximum reservable link bandwidth is configured.

    4. Run mpls te bandwidth bc0 bc0-bw-value

      The BC0 bandwidth is configured.

      • The maximum reservable link bandwidth cannot be higher than the physical link bandwidth. You are advised to set the maximum reservable link bandwidth to be less than or equal to 80% of the physical link bandwidth.

      • The BC0 bandwidth cannot be higher than the maximum reservable link bandwidth.

    5. Run commit

      The configuration is committed.

  • (Optional) Configure dynamic link bandwidth.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te bandwidth max-reservable-bandwidth dynamic max-dynamic-bw-value

      The maximum reservable dynamic link bandwidth is configured.

      If this command is run in the same interface view as the mpls te bandwidth max-reservable-bandwidth command, the later configuration overrides the previous one.

    4. (Optional) Run mpls te bandwidth max-reservable-bandwidth dynamic baseline remain-bandwidth

      The device is configured to use the remaining bandwidth of the interface when calculating the maximum dynamic reservable bandwidth for TE.

      In scenarios such as channelized sub-interface and bandwidth lease, the remaining bandwidth of an interface changes, but the physical bandwidth does not. In this case, the actual forwarding capability of the interface decreases; however, the dynamic maximum reservable bandwidth of the TE tunnel is still calculated based on the physical bandwidth. As a result, the calculated TE bandwidth is greater than the actual bandwidth, and the actual forwarding capability of the interface does not meet the bandwidth requirement of the tunnel.

    5. Run mpls te bandwidth dynamic bc0 bc0-bw-percentage

      The BC0 dynamic bandwidth is configured for the link.

      If this command is run in the same interface view as the mpls te bandwidth bc0 command, the later configuration overrides the previous one.

    6. Run commit

      The configuration is committed.

  • (Optional) Configure a TE metric for a link.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te metric metric-value

      A TE metric is configured for the link.

    4. Run commit

      The configuration is committed.

  • (Optional) Configure the administrative group and affinity attribute in hexadecimal format.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te link administrative group group-value

      A TE link administrative group is configured.

    4. Run commit

      The configuration is committed.

  • (Optional) Configure the administrative group and affinity attribute based on the affinity and administrative group names.
    1. Run system-view

      The system view is displayed.

    2. Run path-constraint affinity-mapping

      An affinity name template is configured, and the template view is displayed.

      This template must be configured on each node involved in SR-MPLS TE path computation, and the global mappings between the names and values of affinity bits must be the same on all the involved nodes.

    3. Run attribute bit-name bit-sequence bit-number

      Mappings between affinity bit values and names are configured.

      This step configures only one bit of an affinity attribute, which has a total of 32 bits. Repeat this step as needed to configure some or all of the bits.

    4. Run quit

      Return to the system view.

    5. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    6. Run mpls te link administrative group name { name-string } &<1-32>

      A link administrative group is configured.

      The name-string value must be in the range specified for the affinity attribute in the template.

    7. Run commit

      The configuration is committed.

  • (Optional) Configure an SRLG.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te srlg srlg-number

      The interface is added to an SRLG.

      In a hot-standby or TE FRR scenario, you need to configure SRLG attributes for the SR-MPLS TE outbound interface of the ingress and other member links in the SRLG to which the interface belongs. A link joins an SRLG only after SRLG attributes are configured for any outbound interface of the link.

      To delete the SRLG attribute configurations of all interfaces on the local node, run the undo mpls te srlg all-config command.

    4. Run commit

      The configuration is committed.
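
A minimal sketch that configures several of the TE attributes above on one hypothetical interface. The bandwidth values, metric, and SRLG number are illustrative only; each attribute is optional and configured independently:

```
<HUAWEI> system-view
[~HUAWEI] interface gigabitethernet 0/1/0
[*HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth bc0 80000
[*HUAWEI-GigabitEthernet0/1/0] mpls te metric 10
[*HUAWEI-GigabitEthernet0/1/0] mpls te srlg 1
[*HUAWEI-GigabitEthernet0/1/0] commit
```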

Enabling SR Globally

Enabling SR globally on forwarders is a prerequisite for SR-MPLS TE tunnel configuration.

Context

SR must be enabled globally before the SR function becomes available.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run segment-routing

    Segment Routing is enabled globally, and the Segment Routing view is displayed.

  3. (Optional) Run protected-adj-sid delete-delay delay-time

    A delay in deleting protected adjacency SIDs is configured.

    In SR-MPLS TE scenarios, the corresponding entry cannot be deleted immediately when a protected adjacency SID fails. Otherwise, the backup path becomes invalid. As such, a delay needs to be configured to perform delayed entry deletion.

  4. (Optional) Run protected-adj-sid update-delay delay-time

    A delay in delivering protected adjacency SIDs to the forwarding table is configured.

    The protected-adj-sid update-delay command mainly applies to scenarios where the outbound interface associated with a protected adjacency SID changes from Down to Up. For example, if a neighbor fails, the local interface connected to the neighbor goes Down, and the protected adjacency SID associated with the interface becomes invalid, causing traffic to be switched to the backup path for forwarding.

    If the neighbor recovers but route convergence has not completed yet, the neighbor may not have the forwarding capability. In this situation, you can run the protected-adj-sid update-delay command to configure a delay in delivering the protected adjacency SID to the forwarding table, preventing traffic from being forwarded to the neighbor. This helps avoid packet loss.

  5. Run commit

    The configuration is committed.

Configuring Basic SR-MPLS TE Functions

This section describes how to configure basic SR-MPLS TE functions.

Context

SR-MPLS TE supports both strict and loose explicit paths. Strict explicit paths mainly use adjacency SIDs, whereas loose explicit paths use both adjacency and node SIDs. Adjacency and node SIDs must be generated before you configure an SR-MPLS TE tunnel.

Procedure

  1. Configure an SR-MPLS-specific SRGB range.
    1. Run system-view

      The system view is displayed.

    2. Run isis [ process-id ]

      The IS-IS view is displayed.

    3. Run network-entity net-addr

      A network entity title (NET) is configured.

    4. Run cost-style { wide | compatible | wide-compatible }

      The IS-IS wide metric function is enabled.

    5. Run traffic-eng [ level-1 | level-2 | level-1-2 ]

      IS-IS TE is enabled.

    6. Run segment-routing mpls

      IS-IS SR-MPLS is enabled.

    7. Run segment-routing global-block begin-value end-value [ ignore-conflict ]

      An SR-MPLS-specific SRGB range is configured for the current IS-IS instance.

      If a message is displayed indicating that a label in the specified SRGB range is in use, you can use the ignore-conflict parameter to enable configuration delivery. However, the configuration will not take effect until the device is restarted and the label is released. In general, using the ignore-conflict parameter is not recommended.

    8. (Optional) Run segment-routing auto-adj-sid protected

      The function of automatically generating dynamic protected adjacency SIDs is enabled.

    9. (Optional) Run segment-routing mpls static adj-sid advertise

      Advertisement of static adjacency SIDs is enabled.

      Typically, in SR-MPLS TE scenarios, adjacency SIDs are dynamically generated and advertised by IGP. Static adjacency SIDs are manually configured. To enable the device to advertise such SIDs, run the segment-routing mpls static adj-sid advertise command in the IS-IS view.

    10. Run commit

      The configuration is committed.

    11. Run quit

      Return to the system view.

  2. Configure an SR-MPLS prefix SID.
    1. Run interface loopback loopback-number

      A loopback interface is created, and the interface view is displayed.

    2. Run isis enable [ process-id ]

      IS-IS is enabled on the interface.

    3. Run ip address ip-address { mask | mask-length }

      An IP address is configured for the loopback interface.

    4. Run isis prefix-sid { absolute sid-value | index index-value } [ node-disable ]

      An SR-MPLS prefix SID is configured for the IP address of the interface.

    5. Run commit

      The configuration is committed.

  3. (Optional) Configure an adjacency SID.

    After IS-IS SR is enabled, an adjacency SID is automatically generated. To disable the automatic generation of adjacency SIDs, run the segment-routing auto-adj-sid disable command. Dynamically generated adjacency SIDs may change after a device restart. If an explicit path uses such an adjacency SID and the associated device is restarted, the adjacency SID needs to be reconfigured. You can also manually configure an adjacency SID to facilitate the use of an explicit path.

    1. Run segment-routing

      The SR view is displayed.

    2. Run ipv4 adjacency local-ip-addr local-ip-address remote-ip-addr remote-ip-address sid sid-value [ vpn-instance vpn-name ] or ipv4 adjacency local-ip-addr local-ip-address remote-ip-addr remote-ip-address sid sid-value protected

      A static SR adjacency SID is configured.

      To enable the device to steer traffic to a VPN link based on the static adjacency SID, specify the vpn-instance vpn-name parameter.

      To configure an adjacency SID carrying the protection flag so that it can be protected by another adjacency SID, specify the protected parameter.

    3. Run commit

      The configuration is committed.

Enabling the Ingress to Compute a Path

CSPF is configured on the ingress to compute a path for an SR-MPLS TE tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mpls

    The MPLS view is displayed.

  3. Run mpls te

    MPLS TE is enabled.

  4. Run mpls te cspf

    CSPF is enabled on the ingress to compute paths.

  5. Run commit

    The configuration is committed.
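As a minimal sketch, the ingress-side CSPF enablement above amounts to the following commands run in sequence from the system view:

```
mpls
 mpls te
 mpls te cspf
 quit
commit
```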

Configuring an SR-MPLS TE Tunnel Interface

A tunnel interface must be configured on an ingress so that the interface is used to establish and manage an SR-MPLS TE tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface tunnel tunnel-number

    A tunnel interface is created, and the tunnel interface view is displayed.

  3. Run either of the following commands to assign an IP address to the tunnel interface:

    • To configure the IP address of the tunnel interface, run the ip address ip-address { mask | mask-length } [ sub ] command.

      The secondary IP address of the tunnel interface can be configured only after the primary IP address is configured.

    • To configure the tunnel interface to borrow the IP address of another interface, run the ip address unnumbered interface interface-type interface-number command.

    The SR-MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP address for the tunnel interface is not recommended. The LSR ID of the ingress is generally used as the tunnel interface's IP address.

  4. Run tunnel-protocol mpls te

    MPLS TE is configured as a tunneling protocol.

  5. Run destination ip-address

    A tunnel destination address is configured, which is usually the LSR ID of the egress.

    Various types of tunnels require specific destination addresses. If a tunnel protocol is changed from another protocol to MPLS TE, a configured destination address is deleted automatically and a new destination address needs to be configured.

  6. Run mpls te tunnel-id tunnel-id

    A tunnel ID is set.

  7. Run mpls te signal-protocol segment-routing

    Segment Routing is configured as the signaling protocol of the TE tunnel.

  8. (Optional) Run mpls te sid selection unprotected-only

    The device is enabled to consider only unprotected labels when computing paths for SR-MPLS TE tunnels. Perform this step only if this restriction is required.

  9. (Optional) Run mpls te cspf path-selection adjacency-sid

    CSPF is configured to use only adjacency SIDs during path computation for LSPs in SR-MPLS TE tunnels. By default, this function is disabled: if the command is not run, both node and adjacency SIDs are used in CSPF path computation.

  10. (Optional) Run mpls te path verification enable

    By default, path verification is disabled for SR-MPLS TE tunnels.

    This function does not need to be configured if the controller or BFD is used.

    To enable this function globally, run the mpls te path verification enable command in the MPLS view.

  11. (Optional) Run the match dscp { ipv4 | ipv6 } { default | { dscp-value [ to dscp-value ] } &<1-32> } command to configure DSCP values for IPv4/IPv6 packets to enter the SR-MPLS TE tunnel.

    The DSCP configuration and mpls te service-class command configuration of an SR-MPLS TE tunnel are mutually exclusive.

  12. Run commit

    The configuration is committed.
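The mandatory steps of the tunnel interface procedure can be sketched as follows. The tunnel number, borrowed loopback interface, destination address, and tunnel ID are illustrative assumptions; in practice, the destination is usually the LSR ID of the egress.

```
interface Tunnel1
 ip address unnumbered interface LoopBack0
 tunnel-protocol mpls te
 destination 3.3.3.3
 mpls te tunnel-id 1
 mpls te signal-protocol segment-routing
 quit
commit
```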

Configuring the UCMP Function of the SR-MPLS TE Tunnel

When multiple SR-MPLS TE tunnels lead to the same downstream device, load balancing weights can be configured to implement unequal cost multipath (UCMP) load balancing among the tunnels.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run load-balance unequal-cost enable

    UCMP is enabled globally.

  3. Run interface tunnel tunnel-number

    The SR-MPLS TE tunnel interface view is displayed.

  4. Run load-balance unequal-cost weight weight

    A weight is set for an SR-MPLS TE tunnel before UCMP is carried out among tunnels.

  5. Run commit

    The configuration is committed.
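For example, to split traffic unevenly across two tunnels to the same downstream device, a weight can be set on each tunnel interface after UCMP is enabled globally. The tunnel numbers and weights below are illustrative:

```
load-balance unequal-cost enable
interface Tunnel1
 load-balance unequal-cost weight 60
 quit
interface Tunnel2
 load-balance unequal-cost weight 40
 quit
commit
```

With these weights, the two tunnels carry traffic in an approximate 60:40 ratio.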

Verifying the Configuration

After configuring an SR-MPLS TE tunnel, verify information about the tunnel, including its status and statistics.

Prerequisites

The SR-MPLS TE tunnel functions have been configured.

Procedure

  • Run the following commands to check the IS-IS TE status:
    • display isis traffic-eng advertisements [ lspId | local ] [ level-1 | level-2 | level-1-2 ] [ process-id | vpn-instance vpn-instance-name ]
    • display isis traffic-eng statistics [ process-id | vpn-instance vpn-instance-name ]
  • Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-name ] [ { incoming-interface | interface | outgoing-interface } interface-type interface-number ] [ verbose ] command to check tunnel information.
  • Run the display mpls te tunnel statistics or display mpls sr-te-lsp command to check LSP statistics.
  • Run the display mpls te tunnel-interface [ tunnel tunnel-number ] command to check information about a tunnel interface on the ingress.
  • (Optional) If the label stack depth exceeds the upper limit supported by a forwarder, the controller needs to divide the label stack of an entire path into multiple stacks. After the controller assigns a stitching label to a stitching node, run the display mpls te stitch-label-stack command to check information about the label stack mapped to the stitching label.

Configuring an OSPF SR-MPLS TE Tunnel (Path Computation on a Forwarder)

If no controller is deployed to compute paths, CSPF can be configured on the ingress to perform SR-MPLS TE.

Usage Scenario

SR-MPLS TE is a new TE tunneling technology that uses SR as a control protocol. If no controller is deployed to compute paths, CSPF can be configured on the ingress to perform SR-MPLS TE.

Pre-configuration Tasks

Before configuring an OSPF SR-MPLS TE tunnel, configure OSPF to implement connectivity at the network layer.

Enabling MPLS TE

Before configuring an SR-MPLS TE tunnel, you need to enable MPLS TE on each involved node in the SR-MPLS domain.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mpls lsr-id lsr-id

    An LSR ID is configured for the device.

    Note the following during LSR ID configuration:
    • Configuring LSR IDs is the prerequisite for all MPLS configurations.

    • LSRs do not have default IDs. LSR IDs must be manually configured.

    • Using a loopback interface address as the LSR ID is recommended for an LSR.

  3. Run mpls

    The MPLS view is displayed.

  4. (Optional) Run mpls te sid selection unprotected-only

    The device is enabled to consider only unprotected labels when computing paths for SR-MPLS TE tunnels. Perform this step only if this restriction is required.

  5. Run mpls te

    MPLS TE is enabled globally.

  6. (Optional) Enable interface-specific MPLS TE.

    In scenarios where the controller or ingress performs path computation, interface-specific MPLS TE must be enabled. In static explicit path scenarios, this step can be ignored.

    1. Run quit

      The system view is displayed.

    2. Run interface interface-type interface-number

      The view of the interface on which the MPLS TE tunnel is established is displayed.

    3. Run mpls

      MPLS is enabled on an interface.

    4. Run mpls te

      MPLS TE is enabled on the interface.

  7. Run commit

    The configuration is committed.
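The steps above can be sketched as follows. The LSR ID and interface name are illustrative assumptions; as recommended above, the LSR ID is typically a loopback interface address.

```
mpls lsr-id 1.1.1.1
mpls
 mpls te
 quit
interface GigabitEthernet0/1/0
 mpls
 mpls te
 quit
commit
```

The interface-level mpls and mpls te commands are needed only when the controller or ingress computes paths; in static explicit path scenarios, they can be omitted.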

Configuring TE Attributes

Configure TE attributes for links so that SR-MPLS TE paths can be adjusted based on the TE attributes during path computation.

Context

TE link attributes are as follows:

  • Link bandwidth

    This attribute must be configured if you want to limit the bandwidth of an SR-MPLS TE tunnel link.

  • Dynamic link bandwidth

    Dynamic bandwidth can be configured if you want TE to be aware of physical bandwidth changes on interfaces.

  • TE metric of a link

    Either the IGP metric or the TE metric of a link can be used during SR-MPLS TE path computation. Using the TE metric makes SR-MPLS TE path computation more independent of the IGP, allowing flexible control over tunnel paths.

  • Administrative group and affinity attribute

    An SR-MPLS TE tunnel's affinity attribute determines which links the tunnel can use. The affinity attribute is compared with the administrative groups of links to determine the links that the SR-MPLS TE tunnel can traverse.

  • SRLG

    A shared risk link group (SRLG) is a group of links that share a public physical resource, such as an optical fiber. Links in an SRLG are at the same risk of faults. If one of the links fails, other links in the SRLG also fail.

    An SRLG enhances SR-MPLS TE reliability on a network with CR-LSP hot standby or TE FRR enabled. Links that share the same physical resource have the same risk. For example, links on an interface and its sub-interfaces are in an SRLG. The interface and its sub-interfaces have the same risk. If the interface goes down, its sub-interfaces will also go down. Similarly, if the link of the primary path of an SR-MPLS TE tunnel and the links of the backup paths of the SR-MPLS TE tunnel are in an SRLG, the backup paths will most likely go down when the primary path goes down.

Procedure

  • (Optional) Configure link bandwidth.

    Link bandwidth needs to be configured only on outbound interfaces of SR-MPLS TE tunnel links that have bandwidth requirements.

    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te bandwidth max-reservable-bandwidth max-bw-value

      The maximum reservable link bandwidth is configured.

    4. Run mpls te bandwidth bc0 bc0-bw-value

      The BC0 bandwidth is configured.

      • The maximum reservable link bandwidth cannot be higher than the physical link bandwidth. You are advised to set the maximum reservable link bandwidth to be less than or equal to 80% of the physical link bandwidth.

      • The BC0 bandwidth cannot be higher than the maximum reservable link bandwidth.

    5. Run commit

      The configuration is committed.

  • (Optional) Configure dynamic link bandwidth.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te bandwidth max-reservable-bandwidth dynamic max-dynamic-bw-value

      The maximum reservable dynamic link bandwidth is configured.

      If this command is run in the same interface view as the mpls te bandwidth max-reservable-bandwidth command, the later configuration overrides the previous one.

    4. (Optional) Run mpls te bandwidth max-reservable-bandwidth dynamic baseline remain-bandwidth

      The device is configured to use the remaining bandwidth of the interface when calculating the maximum dynamic reservable bandwidth for TE.

      In scenarios such as channelized sub-interface and bandwidth lease, the remaining bandwidth of an interface changes, but the physical bandwidth does not. In this case, the actual forwarding capability of the interface decreases; however, the dynamic maximum reservable bandwidth of the TE tunnel is still calculated based on the physical bandwidth. As a result, the calculated TE bandwidth is greater than the actual bandwidth, and the actual forwarding capability of the interface does not meet the bandwidth requirement of the tunnel.

    5. Run mpls te bandwidth dynamic bc0 bc0-bw-percentage

      The BC0 dynamic bandwidth is configured for the link.

      If this command is run in the same interface view as the mpls te bandwidth bc0 command, the later configuration overrides the previous one.

    6. Run commit

      The configuration is committed.

  • (Optional) Configure a TE metric for a link.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te metric metric-value

      A TE metric is configured for the link.

    4. Run commit

      The configuration is committed.

  • (Optional) Configure the administrative group and affinity attribute in hexadecimal format.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te link administrative group group-value

      A TE link administrative group is configured.

    4. Run commit

      The configuration is committed.

  • (Optional) Configure the administrative group and affinity attribute based on the affinity and administrative group names.
    1. Run system-view

      The system view is displayed.

    2. Run path-constraint affinity-mapping

      An affinity name template is configured, and the template view is displayed.

      This template must be configured on each node involved in SR-MPLS TE path computation, and the global mappings between the names and values of affinity bits must be the same on all the involved nodes.

    3. Run attribute bit-name bit-sequence bit-number

      Mappings between affinity bit values and names are configured.

      This step configures only one bit of an affinity attribute, which has a total of 32 bits. Repeat this step as needed to configure some or all of the bits.

    4. Run quit

      Return to the system view.

    5. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    6. Run mpls te link administrative group name { name-string } &<1-32>

      A link administrative group is configured.

      The name-string value must be in the range specified for the affinity attribute in the template.

    7. Run commit

      The configuration is committed.

  • (Optional) Configure an SRLG.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te srlg srlg-number

      The interface is added to an SRLG.

      In a hot-standby or TE FRR scenario, you need to configure SRLG attributes for the SR-MPLS TE outbound interface of the ingress and other member links in the SRLG to which the interface belongs. A link joins an SRLG only after SRLG attributes are configured for any outbound interface of the link.

      To delete the SRLG attribute configurations of all interfaces on the local node, run the undo mpls te srlg all-config command.

    4. Run commit

      The configuration is committed.
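The optional TE attribute procedures above can be combined on the outbound interface of a single link. The following sketch shows bandwidth, metric, a named administrative group, and an SRLG configured together; the affinity bit names, interface name, and all numeric values are illustrative assumptions, and the name-to-bit mappings must be identical on every node involved in path computation:

```
path-constraint affinity-mapping
 attribute red bit-sequence 1
 attribute blue bit-sequence 2
 quit
interface GigabitEthernet0/1/0
 mpls te bandwidth max-reservable-bandwidth 80000
 mpls te bandwidth bc0 50000
 mpls te metric 20
 mpls te link administrative group name red
 mpls te srlg 100
 quit
commit
```

Any other interface added to SRLG 100 is treated as sharing the same physical risk as this link during hot-standby or TE FRR path computation.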

Globally Enabling the SR Capability

Globally enabling the SR capability on forwarders is a prerequisite for configuring an SR-MPLS TE tunnel.

Context

SR must be enabled globally before the SR function becomes available.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run segment-routing

    SR is enabled globally.

  3. Run commit

    The configuration is committed.

Configuring Basic SR-MPLS TE Functions

This section describes how to configure basic SR-MPLS TE functions.

Context

SR-MPLS TE uses strict and loose explicit paths. Strict explicit paths use adjacency SIDs, and loose explicit paths use adjacency and node SIDs. Before an SR-MPLS TE tunnel is configured, the adjacency and node SIDs must be configured.

Procedure

  1. Configure an SR-MPLS-specific SRGB range.
    1. Run system-view

      The system view is displayed.

    2. Run ospf [ process-id ]

      The OSPF view is displayed.

    3. Run opaque-capability enable

      The Opaque LSA capability is enabled.

    4. Run segment-routing mpls

      OSPF SR-MPLS is enabled.

    5. Run segment-routing global-block begin-value end-value [ ignore-conflict ]

      An SR-MPLS-specific OSPF SRGB range is configured.

      If a message is displayed indicating that a label in the specified SRGB range is in use, you can use the ignore-conflict parameter to enable configuration delivery. However, the configuration will not take effect until the device is restarted and the label is released. In general, using the ignore-conflict parameter is not recommended.

    6. Run area area-id

      The OSPF area view is displayed.

    7. Run mpls-te enable [ standard-complying ]

      TE is enabled in the OSPF area.

    8. Run commit

      The configuration is committed.

    9. Run quit

      Return to the system view.

  2. Configure an SR-MPLS prefix SID.
    1. Run interface loopback loopback-number

      A loopback interface is created, and the loopback interface view is displayed.

    2. Run ospf enable [ process-id ] area area-id

      OSPF is enabled on the interface.

    3. Run ip address ip-address { mask | mask-length }

      The IP address is configured for the loopback interface.

    4. Run ospf prefix-sid { absolute sid-value | index index-value } [ node-disable ]

      An SR-MPLS prefix SID is configured for the IP address of the interface.

    5. Run commit

      The configuration is committed.
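The OSPF SRGB and prefix SID steps above can be sketched as follows. The process ID, area ID, SRGB range, loopback address, and SID index are illustrative assumptions; the SRGB range must not conflict with labels already in use on the device.

```
ospf 1
 opaque-capability enable
 segment-routing mpls
 segment-routing global-block 160000 161000
 area 0.0.0.1
  mpls-te enable
  quit
 quit
interface LoopBack0
 ip address 1.1.1.1 255.255.255.255
 ospf enable 1 area 0.0.0.1
 ospf prefix-sid index 10
 quit
commit
```

With index 10, the advertised prefix SID resolves to label 160010 under the assumed SRGB base of 160000.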

Enabling the Ingress to Compute a Path

CSPF is configured on the ingress to compute a path for an SR-MPLS TE tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mpls

    The MPLS view is displayed.

  3. Run mpls te

    MPLS TE is enabled.

  4. Run mpls te cspf

    CSPF is enabled on the ingress to compute paths.

  5. Run commit

    The configuration is committed.

Configuring an SR-MPLS TE Tunnel Interface

A tunnel interface must be configured on an ingress so that the interface is used to establish and manage an SR-MPLS TE tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface tunnel tunnel-number

    A tunnel interface is created, and the tunnel interface view is displayed.

  3. Run either of the following commands to assign an IP address to the tunnel interface:

    • To configure the IP address of the tunnel interface, run the ip address ip-address { mask | mask-length } [ sub ] command.

      The secondary IP address of the tunnel interface can be configured only after the primary IP address is configured.

    • To configure the tunnel interface to borrow the IP address of another interface, run the ip address unnumbered interface interface-type interface-number command.

    The SR-MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP address for the tunnel interface is not recommended. The LSR ID of the ingress is generally used as the tunnel interface's IP address.

  4. Run tunnel-protocol mpls te

    MPLS TE is configured as a tunneling protocol.

  5. Run destination ip-address

    A tunnel destination address is configured, which is usually the LSR ID of the egress.

    Various types of tunnels require specific destination addresses. If a tunnel protocol is changed from another protocol to MPLS TE, a configured destination address is deleted automatically and a new destination address needs to be configured.

  6. Run mpls te tunnel-id tunnel-id

    A tunnel ID is set.

  7. Run mpls te signal-protocol segment-routing

    Segment Routing is configured as the signaling protocol of the TE tunnel.

  8. (Optional) Run mpls te sid selection unprotected-only

    The device is enabled to consider only unprotected labels when computing paths for SR-MPLS TE tunnels. Perform this step only if this restriction is required.

  9. (Optional) Run mpls te cspf path-selection adjacency-sid

    CSPF is configured to use only adjacency SIDs during path computation for LSPs in SR-MPLS TE tunnels. By default, this function is disabled: if the command is not run, both node and adjacency SIDs are used in CSPF path computation.

  10. (Optional) Run mpls te path verification enable

    By default, path verification is disabled for SR-MPLS TE tunnels.

    This function does not need to be configured if the controller or BFD is used.

    To enable this function globally, run the mpls te path verification enable command in the MPLS view.

  11. (Optional) Run the match dscp { ipv4 | ipv6 } { default | { dscp-value [ to dscp-value ] } &<1-32> } command to configure DSCP values for IPv4/IPv6 packets to enter the SR-MPLS TE tunnel.

    The DSCP configuration and mpls te service-class command configuration of an SR-MPLS TE tunnel are mutually exclusive.

  12. Run commit

    The configuration is committed.

Configuring the UCMP Function of the SR-MPLS TE Tunnel

When multiple SR-MPLS TE tunnels lead to the same downstream device, load balancing weights can be configured to implement unequal cost multipath (UCMP) load balancing among the tunnels.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run load-balance unequal-cost enable

    UCMP is enabled globally.

  3. Run interface tunnel tunnel-number

    The SR-MPLS TE tunnel interface view is displayed.

  4. Run load-balance unequal-cost weight weight

    A weight is set for an SR-MPLS TE tunnel before UCMP is carried out among tunnels.

  5. Run commit

    The configuration is committed.

Verifying the OSPF SR-MPLS TE Tunnel Configuration

After configuring an SR-MPLS TE tunnel, verify information about the tunnel, including its status and statistics.

Prerequisites

The SR-MPLS TE tunnel functions have been configured.

Procedure

  1. Run the display ospf [ process-id ] segment-routing routing [ ip-address [ mask | mask-length ] ] command to check OSPF SR-MPLS routing table information.
  2. Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-name ] [ { incoming-interface | interface | outgoing-interface } interface-type interface-number ] [ verbose ] command to check tunnel information.
  3. Run the display mpls te tunnel statistics or display mpls sr-te-lsp command to check LSP statistics.
  4. Run the display mpls te tunnel-interface [ tunnel tunnel-number ] command to check information about a tunnel interface on the ingress.

Configuring BGP for SR-MPLS

A controller orchestrates IGP SIDs and the BGP peer SIDs allocated through BGP EPE to implement inter-AS forwarding over the optimal path.

Context

BGP is a dynamic routing protocol used between ASs. BGP SR is a BGP extension for SR and is used for inter-AS source routing.

BGP SR uses BGP egress peer engineering (EPE) to allocate Peer-Node and Peer-Adj SIDs to peers or Peer-Set SIDs to peer groups. These SIDs can be reported to the controller through BGP-LS for orchestrating E2E SR-MPLS TE tunnels.

Procedure

  • Configure BGP EPE.
    1. Run system-view

      The system view is displayed.

    2. Run segment-routing

      SR is enabled.

    3. Run quit

      Return to the system view.

    4. Run bgp { as-number-plain | as-number-dot }

      BGP is enabled, and the BGP view is displayed.

    5. Run peer ipv4-address as-number { as-number-plain | as-number-dot }

      A BGP peer is created.

    6. Run peer ipv4-address egress-engineering [ label static-label ]

      BGP EPE is enabled.

    7. Run peer ipv4-address egress-engineering link-down { relate-bfd-state | label-pop }

      BGP EPE link fault association is enabled.

      The relate-bfd-state and label-pop parameters are mutually exclusive. Therefore, you can configure only one of them.

      • With the relate-bfd-state parameter configured, if the inter-AS link corresponding to a BGP EPE label fails, BGP EPE triggers SBFD for SR-MPLS TE tunnel in the local AS to go down. This enables the ingress of the involved SR-MPLS TE tunnel to rapidly detect the failure and switch traffic to another normal tunnel.
      • With the label-pop parameter configured, if the inter-AS link corresponding to a BGP EPE label fails, the ASBR in the local AS pops the BGP EPE label from the received data packet and searches the IP routing table based on the destination address of the packet for forwarding.

    8. (Optional) Configure a peer set.

      1. Run egress-engineering peer-set peer-set-name [ label static-label ]

        A BGP peer set is created.

      2. Run peer ipv4-address peer-set name peer-set-name

        A specified peer is added to the BGP peer set.

      3. Run egress-engineering peer-set peer-set-name link-down { relate-bfd-state | label-pop }

        Link fault association is configured for the BGP peer set.

    9. (Optional) Run egress-engineering confederation-id compatible

      The device is configured to modify the BGP-LS packet format and fill the member AS number in the AS field of the BGP-LS route prefix in a BGP EPE confederation scenario.

    10. (Optional) Run egress-engineering traffic-statistics enable

      BGP EPE traffic statistics collection is enabled.

      To clear traffic statistics for re-collection, run the reset bgp egress-engineering traffic-statistics command.

    11. Run commit

      The configuration is committed.

    12. Run quit

      Return to the system view.

  • Configure BGP-LS.

    BGP-LS provides a simple and efficient method of collecting topology information. You must configure a BGP-LS peer relationship between the controller and forwarder, so that topology information can be properly reported from the forwarder to the controller. This example provides the procedure for configuring a BGP-LS peer relationship on the forwarder. The procedure on the controller is similar to that on the forwarder.

    1. Run bgp { as-number-plain | as-number-dot }

      BGP is enabled, and the BGP view is displayed.

    2. Run peer ipv4-address as-number as-number-plain

      A BGP peer is created.

    3. Run link-state-family unicast

      BGP-LS is enabled, and the BGP-LS address family view is displayed.

    4. Run peer { group-name | ipv4-address } enable

      The device is enabled to exchange BGP-LS routing information with a specified peer or peer group.

    5. Run commit

      The configuration is committed.
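Combined, the BGP EPE and BGP-LS procedures above might look like the following sketch on an ASBR. The AS numbers and peer addresses are illustrative assumptions: 10.2.1.2 stands for the EBGP peer in the neighboring AS, and 10.3.1.3 stands for the BGP-LS peer (the controller) in the local AS.

```
segment-routing
 quit
bgp 100
 peer 10.2.1.2 as-number 200
 peer 10.2.1.2 egress-engineering
 peer 10.2.1.2 egress-engineering link-down label-pop
 peer 10.3.1.3 as-number 100
 link-state-family unicast
  peer 10.3.1.3 enable
  quit
 quit
commit
```

With label-pop configured, a failure of the inter-AS link makes the ASBR pop the BGP EPE label and forward the packet based on an IP routing table lookup, as described above.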

Verifying the Configuration

After configuring BGP SR, run the display bgp egress-engineering command to check BGP EPE information and determine whether the configuration succeeds.

After BGP EPE traffic statistics collection is enabled, you can run the display bgp egress-engineering traffic-statistics outbound command to check the outgoing traffic and the display bgp egress-engineering traffic-statistics inbound command to check the incoming traffic.

Configuring an Inter-AS E2E SR-MPLS TE Tunnel (Path Computation on the Controller)

An inter-AS E2E SR-MPLS TE tunnel can connect SR-MPLS TE tunnels in multiple AS domains to build a large-scale TE network.

Usage Scenario

SR-MPLS TE, a new MPLS TE tunneling technology, has unique advantages in label distribution, protocol simplification, large-scale expansion, and fast path adjustment. SR-MPLS TE can better cooperate with SDN.

SR extensions to an IGP can implement SR-MPLS TE only within a single AS. To establish inter-AS E2E SR-MPLS TE tunnels, BGP EPE is used to allocate peer SIDs to the adjacencies and nodes between AS domains. These peer SIDs can be advertised to the network controller through BGP-LS. The controller then orchestrates IGP SIDs and BGP peer SIDs along explicit paths to implement inter-AS forwarding over the optimal path on the network shown in Figure 1-2650.

In addition, the label depth supported by an ordinary forwarder is limited, whereas the depth of the label stack of an inter-AS SR-MPLS TE tunnel may exceed the maximum depth supported by a forwarder. To reduce the number of label stack layers encapsulated by the forwarder, use binding SIDs. When configuring an intra-AS SR-MPLS TE tunnel, set a binding SID for the tunnel. The binding SID identifies an SR-MPLS TE tunnel and replaces the label stack of an SR-MPLS TE tunnel.

Figure 1-2650 Inter-AS E2E SR-MPLS TE tunnel networking

Pre-configuration Tasks

Before configuring an inter-AS E2E SR-MPLS TE tunnel, complete the following tasks:

Setting a Binding SID

Using binding SIDs reduces the number of labels in a label stack on an NE, which helps build a large-scale network.

Context

To reduce the number of label stack layers encapsulated by an NE, a binding SID needs to be used. A binding SID can represent the label stack of an intra-AS SR-MPLS TE tunnel. After binding SIDs and BGP peer SIDs are orchestrated properly, E2E SR-MPLS TE LSPs can be established.

An SR-MPLS TE tunnel is unidirectional. In the following operations, the binding SID is set for the unidirectional SR-MPLS TE tunnel only within an AS domain.
  • To set a binding SID of a reverse SR-MPLS TE tunnel, perform the configuration on the ingress of the reverse tunnel.
  • To set a binding SID of an SR-MPLS TE tunnel in another AS domain, perform the configuration on the ingress of the specific AS domain.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface tunnel tunnel-number

    The intra-AS SR-MPLS TE tunnel interface view is displayed.

  3. Run mpls te binding-sid label label-value

    A binding SID is specified for the intra-AS SR-MPLS TE tunnel.

  4. Run commit

    The configuration is committed.
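For example, a binding SID might be attached to an existing intra-AS tunnel interface as follows. The tunnel number and label value are illustrative assumptions; the label must fall within the range the device permits for binding SIDs.

```
interface Tunnel10
 mpls te binding-sid label 1000
 quit
commit
```

The controller can then place label 1000 in an E2E label stack in place of the full label stack of Tunnel10, reducing the stack depth the forwarder must impose.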

Configuring an E2E SR-MPLS TE Tunnel Interface

A tunnel interface must be configured on an ingress so that the interface is used to establish and manage an E2E SR-MPLS TE tunnel.

Context

An SR-MPLS TE tunnel is unidirectional. To configure a reverse tunnel, perform the configuration on the ingress of the reverse tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface tunnel tunnel-number

    An inter-AS E2E tunnel interface is created, and the tunnel interface view is displayed.

  3. Run either of the following commands to assign an IP address to the tunnel interface:

    • To configure an IP address for the tunnel interface, run the ip address ip-address { mask | mask-length } [ sub ] command.

      The secondary IP address of the tunnel interface can be configured only after the primary IP address is configured.

    • To configure the tunnel interface to borrow the IP address of another interface, run the ip address unnumbered interface interface-type interface-number command.

    The SR-MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP address for the tunnel interface is not recommended. Use the LSR ID of the ingress as the tunnel interface's IP address.

  4. Run tunnel-protocol mpls te

    MPLS TE is configured as a tunneling protocol.

  5. Run destination ip-address

    A tunnel destination address, which is usually the LSR ID of the egress, is configured.

    Various types of tunnels require specific destination addresses. If a tunnel protocol is changed from another protocol to MPLS TE, a configured destination address is deleted automatically and a new destination address needs to be configured.

  6. Run mpls te tunnel-id tunnel-id

    A tunnel ID is set.

  7. Run mpls te signal-protocol segment-routing

    Segment Routing is configured as the signaling protocol of the TE tunnel.

  8. Run mpls te pce delegate

    PCE server delegation is enabled so that the controller can compute paths.

  9. (Optional) Run the match dscp { ipv4 | ipv6 } { default | { dscp-value [ to dscp-value ] } &<1-32> } command to configure DSCP values for IPv4/IPv6 packets to enter the SR-MPLS TE tunnel.

    The DSCP configuration and mpls te service-class command configuration of an SR-MPLS TE tunnel are mutually exclusive.

  10. Run commit

    The configuration is committed.
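The E2E tunnel interface procedure differs from the intra-AS one mainly in the mpls te pce delegate step, which hands path computation to the controller. A sketch, with the tunnel number, borrowed loopback, destination, and tunnel ID as illustrative assumptions:

```
interface Tunnel20
 ip address unnumbered interface LoopBack0
 tunnel-protocol mpls te
 destination 4.4.4.4
 mpls te tunnel-id 20
 mpls te signal-protocol segment-routing
 mpls te pce delegate
 quit
commit
```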

(Optional) Configuring SR on a PCC

The SR capability is configured on a PCC. After a controller calculates a path and delivers path information to a forwarder (PCC), the SR-enabled PCC can establish an SR-MPLS TE tunnel.

Context

SR can be configured on a PCC (forwarder). Path computation is then delegated to the controller. After the controller calculates a path, the controller sends a PCEP message to deliver path information to the PCC (forwarder).

The path information can also be delivered by a third-party adapter to the forwarder. In this situation, SR does not need to be configured on the PCC, and the following operation can be skipped.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run pce-client

    A PCE client is configured, and the PCE client view is displayed.

  3. Run capability segment-routing

    SR is enabled for the PCE client.

  4. Run connect-server ip-address

    A candidate PCE server is configured for the PCE client.

  5. Run commit

    The configuration is committed.
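The PCC-side configuration above can be sketched as follows (the PCE server address 10.0.0.100 is an assumed example value):

```text
system-view
pce-client
 capability segment-routing
 connect-server 10.0.0.100
commit
```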

Verifying the Configuration

After configuring an inter-AS E2E SR-MPLS TE tunnel, verify SR-MPLS TE tunnel information and status statistics.

Prerequisites

The inter-AS E2E SR-MPLS TE tunnel has been configured.

Procedure

  1. Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-name ] [ { incoming-interface | interface | outgoing-interface } interface-type interface-number ] [ verbose ] command to check tunnel information.
  2. Run the display mpls te tunnel-interface [ tunnel tunnel-number ] command to check information about a tunnel interface on the ingress.
  3. Run the display mpls te binding-sid [ label label-value ] command to check the mapping between binding SIDs and tunnels.

Configuring an Inter-AS E2E SR-MPLS TE Tunnel (Explicit Path Used)

An inter-AS E2E SR-MPLS TE tunnel can connect SR-MPLS TE tunnels in multiple AS domains to build a large-scale TE network.

Usage Scenario

SR-MPLS TE, a new MPLS TE tunneling technology, has unique advantages in label distribution, protocol simplification, large-scale expansion, and fast path adjustment. SR-MPLS TE can better cooperate with SDN.

SR extended through an IGP can implement SR-MPLS TE only within a single AS. To implement inter-AS E2E SR-MPLS TE tunnels, BGP EPE needs to be used to allocate peer SIDs to the adjacencies and nodes between AS domains. The controller then orchestrates IGP SIDs and BGP peer SIDs along explicit paths to implement optimal inter-AS path forwarding on the network shown in Figure 1-2651.

In addition, the label depth supported by an ordinary forwarder is limited, and the label stack of an inter-AS SR-MPLS TE tunnel may exceed the maximum depth that a forwarder supports. To reduce the number of label stack layers encapsulated by the forwarder, use binding SIDs. When configuring an intra-AS SR-MPLS TE tunnel, set a binding SID for the tunnel. The binding SID identifies the SR-MPLS TE tunnel and replaces its label stack.

Figure 1-2651 Inter-AS E2E SR-MPLS TE tunnel networking

Pre-configuration Tasks

Before configuring an inter-AS E2E SR-MPLS TE tunnel, complete the following tasks:

  • Configure an intra-AS SR-MPLS TE tunnel.

  • Configure BGP EPE between ASBRs. For details, see Configuring BGP SR.

Setting a Binding SID

Using binding SIDs reduces the number of labels in a label stack on an NE, which helps build a large-scale network.

Context

To reduce the number of label stack layers encapsulated by an NE, a binding SID needs to be used. A binding SID can represent the label stack of an intra-AS SR-MPLS TE tunnel. After binding SIDs and BGP peer SIDs are orchestrated properly, E2E SR-MPLS TE LSPs can be established.

An SR-MPLS TE tunnel is unidirectional. In the following operations, the binding SID is set for the unidirectional SR-MPLS TE tunnel only within an AS domain.
  • To set a binding SID of a reverse SR-MPLS TE tunnel, perform the configuration on the ingress of the reverse tunnel.
  • To set a binding SID of an SR-MPLS TE tunnel in another AS domain, perform the configuration on the ingress of the specific AS domain.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface tunnel tunnel-number

    The intra-AS SR-MPLS TE tunnel interface view is displayed.

  3. Run mpls te binding-sid label label-value

    A binding SID is specified for the intra-AS SR-MPLS TE tunnel.

  4. Run commit

    The configuration is committed.
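For example, assuming an existing intra-AS SR-MPLS TE tunnel interface Tunnel20 and an assumed label value of 48100, the binding SID configuration is:

```text
system-view
interface Tunnel20
 mpls te binding-sid label 48100
commit
```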

Configuring an SR-MPLS TE Explicit Path

An explicit path over which an SR-MPLS TE tunnel is to be established is configured on the ingress. You can specify node or link labels for the explicit path.

Context

An explicit path is a vector path composed of a series of nodes arranged in the configuration sequence. The path through which an SR-MPLS TE LSP passes can be planned by specifying next-hop labels or next-hop IP addresses on the explicit path. Generally, an IP address specified on an explicit path is an interface IP address. An explicit path in use can be dynamically updated.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run explicit-path path-name

    An explicit path is created and the explicit path view is displayed.

  3. Configure an explicit path.

    In the following example, two ASs are connected. If there are multiple ASs, add configurations based on the network topology.

    1. Run next sid label label-value type binding-sid [ index index-value ]

      A binding SID is specified for the first AS of the explicit path.

      When the first hop of an explicit path is assigned a binding SID, the explicit path supports a maximum of three hops.

    2. Run next sid label label-value type adjacency [ index index-value ]

      An inter-AS adjacency label is specified.

    3. Run next sid label label-value type binding-sid [ index index-value ]

      A binding SID is specified for the second AS of the explicit path.

      In the case of multiple ASs, this binding SID can be the binding SID of a new E2E SR-MPLS TE tunnel.

  4. Run commit

    The configuration is committed.
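For the two-AS example above, the explicit path might look as follows. All values are assumed: e2e-path1 is the path name, 48100 and 48200 are the binding SIDs set on the ingress of each AS's intra-AS tunnel, and 330001 is the BGP EPE adjacency label between the ASBRs:

```text
system-view
explicit-path e2e-path1
 next sid label 48100 type binding-sid index 1
 next sid label 330001 type adjacency index 2
 next sid label 48200 type binding-sid index 3
commit
```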

Configuring an E2E SR-MPLS TE Tunnel Interface

A tunnel interface must be configured on an ingress so that the interface is used to establish and manage an E2E SR-MPLS TE tunnel.

Context

An SR-MPLS TE tunnel is unidirectional. To configure a reverse tunnel, perform the configuration on the ingress of the reverse tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface tunnel tunnel-number

    An inter-AS E2E tunnel interface is created, and the tunnel interface view is displayed.

  3. Run either of the following commands to assign an IP address to the tunnel interface:

    • To configure an IP address for the tunnel interface, run the ip address ip-address { mask | mask-length } [ sub ] command.

      The secondary IP address of the tunnel interface can be configured only after the primary IP address is configured.

    • To configure the tunnel interface to borrow the IP address of another interface, run the ip address unnumbered interface interface-type interface-number command.

    Because an SR-MPLS TE tunnel is unidirectional, no peer IP address is required, and configuring a separate IP address for the tunnel interface is not recommended. Instead, use the LSR ID of the ingress as the tunnel interface's IP address.

  4. Run tunnel-protocol mpls te

    MPLS TE is configured as a tunneling protocol.

  5. Run destination ip-address

    A tunnel destination address, which is usually the LSR ID of the egress, is configured.

    Different types of tunnels require different destination addresses. If the tunnel protocol is changed from another protocol to MPLS TE, any configured destination address is automatically deleted and a new one needs to be configured.

  6. Run mpls te tunnel-id tunnel-id

    A tunnel ID is set.

  7. Run mpls te signal-protocol segment-routing

    Segment Routing is configured as the signaling protocol of the TE tunnel.

  8. Run mpls te path explicit-path path-name [ secondary ]

    An explicit path is configured for the tunnel.

    The path-name value must be the same as that specified in the explicit-path path-name command.

  9. (Optional) Run the match dscp { ipv4 | ipv6 } { default | { dscp-value [ to dscp-value ] } &<1-32> } command to configure DSCP values for IPv4/IPv6 packets to enter the SR-MPLS TE tunnel.

    The DSCP configuration and mpls te service-class command configuration of an SR-MPLS TE tunnel are mutually exclusive.

  10. Run commit

    The configuration is committed.
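Pulling the steps together, an E2E tunnel interface referencing an explicit path might be configured as follows (Tunnel100, tunnel ID 100, egress LSR ID 4.4.4.4, and the path name e2e-path1 are assumed example values):

```text
system-view
interface Tunnel100
 ip address unnumbered interface LoopBack0
 tunnel-protocol mpls te
 destination 4.4.4.4
 mpls te tunnel-id 100
 mpls te signal-protocol segment-routing
 mpls te path explicit-path e2e-path1
commit
```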

Verifying the Configuration

After configuring an inter-AS E2E SR-MPLS TE tunnel, verify SR-MPLS TE tunnel information and status statistics.

Prerequisites

The inter-AS E2E SR-MPLS TE tunnel has been configured.

Procedure

  1. Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-name ] [ { incoming-interface | interface | outgoing-interface } interface-type interface-number ] [ verbose ] command to check tunnel information.
  2. Run the display mpls te tunnel-interface [ tunnel tunnel-number ] command to check information about a tunnel interface on the ingress.
  3. Run the display mpls te binding-sid [ label label-value ] command to check the mapping between binding SIDs and tunnels.

Configuring Traffic Steering into SR-MPLS TE Tunnels

Traffic steering configuration enables a device to recurse routes to SR-MPLS TE tunnels and forward data using these tunnels.

Configuring Traffic Steering into SR-MPLS TE Tunnels Based on the Next Hops of Routes

This section describes how to configure traffic steering into SR-MPLS TE tunnels based on the next hops of routes.

Usage Scenario

After an SR-MPLS TE tunnel is configured, traffic needs to be steered into the tunnel for forwarding. This process is called traffic steering. Currently, SR-MPLS TE tunnels can be used for various routes and services, such as BGP routes, static routes, and BGP4+ 6PE, BGP L3VPN, and EVPN services. The main traffic steering modes supported by SR-MPLS TE tunnels are as follows:
  • Static route: When configuring a static route, set the outbound interface of the route to an SR-MPLS TE tunnel interface so that traffic transmitted over the route is steered into the SR-MPLS TE tunnel. For configuration details, see Creating IPv4 Static Routes.
  • Auto route: An IGP treats an SR-MPLS TE tunnel as a logical link when computing paths, and the tunnel interface is used as the outbound interface of the auto route. For configuration details, see Configuring IGP Shortcut.
  • Policy-based routing (PBR): SR-MPLS TE PBR is defined in the same way as IP unicast PBR and is implemented through a series of matching rules and behaviors. The outbound interface in an apply clause is set to an SR-MPLS TE tunnel interface. Packets that do not match PBR rules are forwarded using IP; packets that match PBR rules are forwarded over the specified tunnels. For configuration details, see Policy-based Routing Configuration.
  • Tunnel policy: The tunnel policy mode is implemented through tunnel selector or tunnel binding configuration. This mode allows both VPN services and non-labeled public routes to recurse to SR-MPLS TE tunnels. The configuration varies according to the service type.

Pre-configuration Tasks

Before configuring traffic steering into SR-MPLS TE tunnels, complete the following tasks:

  • Configure BGP routes, static routes, BGP4+ 6PE services, BGP L3VPN services, BGP L3VPNv6 services, and EVPN services correctly.

  • Configure a filter, such as an IP prefix list, if you want to restrict the route recursive to a specified SR-MPLS TE tunnel.

Procedure

  1. Configure a tunnel policy.

    Select either of the following modes based on the traffic steering mode you select.

    Unlike the tunnel selection sequence mode, the tunnel binding mode allows you to specify the exact SR-MPLS TE tunnel to be used, which facilitates QoS deployment. The tunnel selector mode applies to inter-AS VPN Option B and inter-AS VPN Option C scenarios.

    • Tunnel selection sequence

      1. Run system-view

        The system view is displayed.

      2. Run tunnel-policy policy-name

        A tunnel policy is created and the tunnel policy view is displayed.

      3. (Optional) Run description description-information

        Description information is configured for the tunnel policy.

      4. Run tunnel select-seq sr-te load-balance-number load-balance-number [ unmix ]

        The tunnel selection sequence and number of tunnels for load balancing are configured.

      5. Run commit

        The configuration is committed.

      6. Run quit

        Return to the system view.

    • Tunnel binding

      1. Run system-view

        The system view is displayed.

      2. Run tunnel-policy policy-name

        A tunnel policy is created and the tunnel policy view is displayed.

      3. (Optional) Run description description-information

        Description information is configured for the tunnel policy.

      4. Run tunnel binding destination dest-ip-address te { tunnel-name } &<1-32> [ ignore-destination-check ] [ down-switch ]

        A tunnel binding policy is configured to bind the specified destination IP address and SR-MPLS TE tunnel.

      5. Run commit

        The configuration is committed.

      6. Run quit

        Return to the system view.

    • Tunnel selector

      1. Run system-view

        The system view is displayed.

      2. Run tunnel-selector tunnel-selector-name { permit | deny } node node

        A tunnel selector is created and the view of the tunnel selector is displayed.

      3. (Optional) Configure if-match clauses.

      4. Run apply tunnel-policy tunnel-policy-name

        A specified tunnel policy is applied to the tunnel selector.

      5. Run commit

        The configuration is committed.

      6. Run quit

        Return to the system view.

  2. Configure routes and services to recurse to SR-MPLS TE tunnels.
    • Configure non-labeled public BGP routes and static routes to recurse to SR-MPLS TE tunnels.

      1. Run route recursive-lookup tunnel [ ip-prefix ip-prefix-name ] [ tunnel-policy policy-name ]

        The function to recurse non-labeled public BGP routes and static routes to SR-MPLS TE tunnels is enabled.

      2. Run commit

        The configuration is committed.

    • Configure non-labeled public BGP routes to recurse to SR-MPLS TE tunnels.

      For details about how to configure a non-labeled public BGP route, see Configuring Basic BGP Functions.

      1. Run bgp { as-number-plain | as-number-dot }

        The BGP view is displayed.

      2. Run unicast-route recursive-lookup tunnel [ tunnel-selector tunnel-selector-name ]

        The function to recurse non-labeled public BGP routes to SR-MPLS TE tunnels is enabled.

        The unicast-route recursive-lookup tunnel command and route recursive-lookup tunnel command are mutually exclusive. You can select either of them for configuration.

      3. Run commit

        The configuration is committed.

    • Configure static routes to recurse to SR-MPLS TE tunnels.

      For details about how to configure a static route, see Creating IPv4 Static Routes.

      1. Run ip route-static recursive-lookup tunnel [ ip-prefix ip-prefix-name ] [ tunnel-policy policy-name ]

        The function to recurse static routes to SR-MPLS TE tunnels for MPLS forwarding is enabled.

        The ip route-static recursive-lookup tunnel command and route recursive-lookup tunnel command are mutually exclusive. You can select either of them for configuration.

      2. Run commit

        The configuration is committed.

    • Configure BGP L3VPN services to recurse to SR-MPLS TE tunnels.

      For details about how to configure a BGP L3VPN service, see BGP/MPLS IP VPN Configuration.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family

        The VPN instance-specific IPv4 address family view is displayed.

      3. Run tnl-policy policy-name

        A specified tunnel policy is applied to the VPN instance-specific IPv4 address family.

      4. Run commit

        The configuration is committed.

    • Configure BGP L3VPNv6 services to recurse to SR-MPLS TE tunnels.

      For details about how to configure a BGP L3VPNv6 service, see BGP/MPLS IPv6 VPN Configuration.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv6-family

        The VPN instance-specific IPv6 address family view is displayed.

      3. Run tnl-policy policy-name

        A specified tunnel policy is applied to the VPN instance-specific IPv6 address family.

      4. Run commit

        The configuration is committed.

    • Configure BGP4+ 6PE services to recurse to SR-MPLS TE tunnels.

      For details about how to configure a BGP4+ 6PE service, see Configuring BGP4+ 6PE.

      Method 1: Apply a tunnel policy to a specified BGP4+ peer.

      1. Run bgp { as-number-plain | as-number-dot }

        The BGP view is displayed.

      2. Run ipv6-family unicast

        The BGP IPv6 unicast address family view is displayed.

      3. Run peer ipv4-address enable

        A specified 6PE peer is enabled.

      4. Run peer ipv4-address tnl-policy tnl-policy-name

        A specified tunnel policy is applied to the 6PE peer.

      5. Run commit

        The configuration is committed.

      Method 2: Apply a tunnel selector to all the routes of a specified BGP IPv6 unicast address family.

      1. Run bgp { as-number-plain | as-number-dot }

        The BGP view is displayed.

      2. Run ipv6-family unicast

        The BGP IPv6 unicast address family view is displayed.

      3. Run unicast-route recursive-lookup tunnel-v4 [ tunnel-selector tunnel-selector-name ]

        The function to recurse non-labeled public BGP routes to SR-MPLS TE tunnels is enabled.

        The unicast-route recursive-lookup tunnel-v4 command and route recursive-lookup tunnel command are mutually exclusive. You can select either of them for configuration.

      4. Run commit

        The configuration is committed.

    • Configure EVPN services to recurse to SR-MPLS TE tunnels.

      For details on how to configure EVPN, see EVPN Configuration. The configuration varies according to the service type.

      To apply a tunnel policy to an EVPN L3VPN instance, perform the following steps:
      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family or ipv6-family

        The VPN instance IPv4/IPv6 address family view is displayed.

      3. Run tnl-policy policy-name evpn

        A specified tunnel policy is applied to the EVPN L3VPN instance.

      4. Run commit

        The configuration is committed.

      To apply a tunnel policy to a BD EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name bd-mode

        The BD EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A specified tunnel policy is applied to the BD EVPN instance.

      3. Run commit

        The configuration is committed.

      To apply a tunnel policy to an EVPN instance that works in EVPN VPWS mode, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name vpws

        The view of a specified EVPN instance that works in EVPN VPWS mode is displayed.

      2. Run tnl-policy policy-name

        A specified tunnel policy is applied to the EVPN instance that works in EVPN VPWS mode.

      3. Run commit

        The configuration is committed.

      To apply a tunnel policy to a basic EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name

        The EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A specified tunnel policy is applied to the basic EVPN instance.

      3. Run commit

        The configuration is committed.
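As an illustration of the tunnel selection sequence mode combined with BGP L3VPN recursion, the following sketch creates a tunnel policy and applies it to a VPN instance (the policy name p1, VPN instance name vpna, and load-balancing number 1 are assumed example values):

```text
system-view
tunnel-policy p1
 tunnel select-seq sr-te load-balance-number 1
 quit
ip vpn-instance vpna
 ipv4-family
  tnl-policy p1
commit
```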

Configuring Traffic Steering into SR-MPLS TE Tunnels Based on the Next Hops of Routes and Color Values

This section describes how to configure traffic steering into SR-MPLS TE tunnels based on the next hops of routes and color values.

Context

Currently, BGP L3VPNv4, BGP L3VPNv6, EVPN L3VPNv4, and EVPN L3VPNv6 services support SR-MPLS TE tunnels with the color attribute. If you want services to be transmitted over SR-MPLS TE tunnels with the color attribute, specify the colored-sr-te parameter in the tunnel select-seq command when configuring a tunnel policy. If the route of the services carries the color attribute, the involved device selects the SR-MPLS TE tunnel whose destination address and color attribute match the next hop and color attribute of the route, respectively. However, if the route does not carry the color attribute, the device cannot select any tunnel carrying this attribute. Instead, it can only select an SR-MPLS TE tunnel that does not carry the color attribute and whose destination address matches the next hop of the route.

Pre-configuration Tasks

Before configuring traffic steering into SR-MPLS TE tunnels, complete the following tasks:

  • Configure an SR-MPLS TE tunnel.

  • Configure BGP L3VPNv4, BGP L3VPNv6, and EVPN L3VPNv4/L3VPNv6 services.

  • Configure a filter, such as an IP prefix list, if you want to restrict the routes that can recurse to SR-MPLS TE tunnels.

Procedure

  1. Configure the color attribute for the specified SR-MPLS TE tunnel.
    1. Run system-view

      The system view is displayed.

    2. Run interface tunnel tunnel-number

      The SR-MPLS TE tunnel interface view is displayed.

    3. Run color color-value

      The color attribute is configured for the SR-MPLS TE tunnel.

    4. Run commit

      The configuration is committed.

    5. Run quit

      Exit the SR-MPLS TE tunnel interface view.

  2. Configure a route-policy.
    1. Run system-view

      The system view is displayed.

    2. Run route-policy route-policy-name { deny | permit } node node

      A route-policy with a specified node is created, and the route-policy view is displayed.

    3. (Optional) Configure an if-match clause for the route-policy. The community attributes of routes can be added or modified only if the routes match specified if-match clauses.

      For configuration details, see (Optional) Configuring an if-match Clause.

    4. Run apply extcommunity color color

      The BGP color extended community is configured.

    5. Run commit

      The configuration is committed.

    6. Run the quit command to exit the route-policy view.
  3. Apply the route-policy to add the color attribute to routes.
    • Apply the route-policy to a BGP VPNv4 peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv4-family vpnv4

        The BGP VPNv4 address family view is displayed.

      5. Run peer { ipv4-address | group-name } enable

        The BGP VPNv4 peer relationship is enabled.

      6. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        The BGP import or export route-policy is applied.

      7. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP VPNv6 peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv6-family vpnv6

        The BGP VPNv6 address family view is displayed.

      5. Run peer { ipv4-address | group-name } enable

        The BGP VPNv6 peer relationship is enabled.

      6. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        The BGP import or export route-policy is applied.

      7. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP EVPN peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run l2vpn-family evpn

        The BGP EVPN address family view is displayed.

      4. Run peer { ipv4-address | group-name } enable

        The BGP EVPN peer relationship is enabled.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        The BGP EVPN import or export route-policy is applied.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a VPN instance IPv4 address family.
      1. Run system-view

        The system view is displayed.

      2. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      3. Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      4. Run import route-policy policy-name

        The import route-policy is applied to the VPN instance IPv4 address family.

      5. Run export route-policy policy-name

        The export route-policy is applied to the VPN instance IPv4 address family.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a VPN instance IPv6 address family.
      1. Run system-view

        The system view is displayed.

      2. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      3. Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

      4. Run import route-policy policy-name

        The import route-policy is applied to the VPN instance IPv6 address family.

      5. Run export route-policy policy-name

        The export route-policy is applied to the VPN instance IPv6 address family.

      6. Run commit

        The configuration is committed.

  4. Configure a tunnel policy.

    1. Run system-view

      The system view is displayed.

    2. Run tunnel-policy policy-name

      A tunnel policy is created, and the tunnel policy view is displayed.

    3. (Optional) Run description description-information

      A description is configured for the tunnel policy.

    4. Run tunnel select-seq colored-sr-te load-balance-number load-balance-number unmix

      The tunnel selection sequence and number of tunnels for load balancing are configured.

    5. Run commit

      The configuration is committed.

    6. Run quit

      Exit the tunnel policy view.

  5. Configure service recursion to SR-MPLS TE tunnels.
    • Configure BGP L3VPN service recursion to SR-MPLS TE tunnels.

      For configuration details about BGP L3VPN services, see BGP/MPLS IP VPN Configuration. For configuration details about BGP L3VPNv6 services, see BGP/MPLS IPv6 VPN Configuration.
      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family or ipv6-family

        The VPN instance IPv4/IPv6 address family view is displayed.

      3. Run tnl-policy policy-name

        The specified tunnel policy is applied to the VPN instance IPv4/IPv6 address family.

      4. Run commit

        The configuration is committed.

    • Configure EVPN L3VPN service recursion to SR-MPLS TE tunnels.
      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family or ipv6-family

        The VPN instance IPv4/IPv6 address family view is displayed.

      3. Run tnl-policy policy-name evpn

        The specified tunnel policy is applied to the EVPN L3VPN instance.

      4. Run commit

        The configuration is committed.
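End to end, the color-based steering procedure might be sketched as follows for a BGP L3VPN service. The color value 100, route-policy name color100, AS number 100, VPNv4 peer 2.2.2.2, tunnel policy name p2, and VPN instance name vpna are all assumed example values, and the color extended community format 0:100 is also an assumption:

```text
system-view
interface Tunnel30
 color 100
 quit
route-policy color100 permit node 10
 apply extcommunity color 0:100
 quit
bgp 100
 peer 2.2.2.2 as-number 100
 ipv4-family vpnv4
  peer 2.2.2.2 enable
  peer 2.2.2.2 route-policy color100 import
  quit
 quit
tunnel-policy p2
 tunnel select-seq colored-sr-te load-balance-number 1 unmix
 quit
ip vpn-instance vpna
 ipv4-family
  tnl-policy p2
commit
```

With this configuration, VPNv4 routes received from peer 2.2.2.2 carry color 100 and recurse to Tunnel30, whose destination and color match the routes' next hop and color attribute.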

Configuring SBFD for SR-MPLS TE LSP

This section describes how to configure SBFD to detect SR-MPLS TE LSP faults.

Usage Scenario

If SBFD for SR-MPLS TE LSP detects a fault on the primary LSP, traffic is rapidly switched to the backup LSP, which minimizes the impact on traffic.

Pre-configuration Tasks

Before configuring SBFD for SR-MPLS TE LSP, complete the following tasks:

  • Configure an SR-MPLS TE tunnel.

  • Run the mpls lsr-id lsr-id command to configure an LSR ID and ensure that the route from the peer to the local address specified using lsr-id is reachable.

Procedure

  • Configure an SBFD initiator.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is enabled globally.

      You can set BFD parameters only after running the bfd command to enable global BFD.

    3. Run quit

      Return to the system view.

    4. Run sbfd

      SBFD is enabled globally, and the SBFD view is displayed.

    5. (Optional) Run destination ipv4 ip-address remote-discriminator discriminator-value

      The mapping between the SBFD reflector IP address and discriminator is configured.

      On the device functioning as an SBFD initiator, if the mapping between the SBFD reflector IP address and discriminator is configured using the destination ipv4 remote-discriminator command, the initiator uses the configured discriminator to negotiate with the reflector in order to establish an SBFD session. If such a mapping is not configured, the SBFD initiator uses the reflector IP address as a discriminator by default to complete the negotiation.

      This step is optional. If it is performed, the value of discriminator-value must be the same as that of unsigned-integer-value in the reflector discriminator command configured on the reflector.

    6. Run quit

      Return to the system view.

    7. Run interface tunnel tunnel-number

      The SR-MPLS TE tunnel interface view is displayed.

    8. Run mpls te bfd enable seamless

      SBFD for SR-MPLS TE LSP is enabled.

      After the configuration is complete, the SBFD initiator automatically establishes an SBFD session destined for the destination address of an SR-MPLS TE tunnel.

    9. (Optional) Run mpls te reverse-lsp binding-sid label label-value

      A label is configured for the SR-MPLS TE tunnel used for BFD return packets. This label must be the binding SID of the tunnel.

      Before running this command, ensure that the mpls te binding-sid label label-value command has been run on the ingress of the reverse SR-MPLS TE tunnel.

      In SBFD for SR-MPLS TE LSP scenarios, SBFD packets from the initiator to the reflector are forwarded along an SR-MPLS TE LSP, whereas the return packets from the reflector to the initiator are forwarded along a multi-hop IP path. To allow the return packets to also be forwarded along an LSP, run the mpls te reverse-lsp binding-sid command.

      Both the forward and reverse SR-MPLS TE tunnels may have a primary LSP and a backup LSP. After the mpls te reverse-lsp binding-sid command is run, the SBFD packet sent by the initiator carries the primary/backup LSP flag. If the SBFD packet sent by the initiator is forwarded through the primary LSP of the forward tunnel, the SBFD return packet sent by the reflector is forwarded through the primary LSP of the reverse tunnel. If the SBFD packet sent by the initiator is forwarded through the backup LSP of the forward tunnel, the SBFD return packet sent by the reflector is forwarded through the backup LSP of the reverse tunnel.

    10. Run commit

      The configuration is committed.

  • Configure the SBFD reflector.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is enabled globally.

      You can set BFD parameters only after running the bfd command to enable global BFD.

    3. Run quit

      Return to the system view.

    4. Run sbfd

      SBFD is enabled globally, and the SBFD view is displayed.

    5. Run reflector discriminator { unsigned-integer-value | ip-address-value }

      The discriminator of an SBFD reflector is configured.

    6. Run commit

      The configuration is committed.
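Putting the two roles together, the following is a minimal sketch of an SBFD for SR-MPLS TE LSP configuration. The tunnel number, reflector address 10.2.2.2, discriminator 100, and binding SID 331 are hypothetical examples, and the global SBFD steps on the initiator are assumed to follow the same pattern as on the reflector.

```
# SBFD initiator (tunnel ingress); all values are examples only
system-view
 bfd                                                    # enable BFD globally
 quit
 sbfd                                                   # enable SBFD globally
  destination ipv4 10.2.2.2 remote-discriminator 100    # optional reflector mapping
  quit
 interface Tunnel1
  mpls te bfd enable seamless                           # SBFD for SR-MPLS TE LSP
  mpls te reverse-lsp binding-sid label 331             # optional: LSP-based return path
  quit
 commit

# SBFD reflector
system-view
 bfd
 quit
 sbfd
  reflector discriminator 100                           # must match the initiator's mapping
  quit
 commit
```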

Verifying the Configuration

After you configure SBFD for SR-MPLS TE LSP, run the display bfd session { all | discriminator discr-value } [ verbose ] command to check information about the SBFD session that monitors an SR-MPLS TE tunnel.

Configuring SBFD for SR-MPLS TE Tunnel

This section describes how to configure SBFD to detect SR-MPLS TE tunnel faults.

Usage Scenario

If SBFD for SR-MPLS TE tunnel detects a fault on the primary tunnel, traffic is rapidly switched to the backup tunnel, which minimizes the impact on traffic.

Pre-configuration Tasks

Before configuring SBFD for SR-MPLS TE tunnel, complete the following task:

  • Configure an SR-MPLS TE tunnel.

  • Run the mpls lsr-id lsr-id command to configure an LSR ID and ensure that the route from the peer to the local address specified using lsr-id is reachable.

Procedure

  • Configure an SBFD initiator.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is enabled globally, and the BFD view is displayed.

      You can set BFD parameters only after running the bfd command to enable global BFD.

    3. Run quit

      Return to the system view.

    4. Run sbfd

      SBFD is enabled globally, and the SBFD view is displayed.

    5. (Optional) Run destination ipv4 ip-address remote-discriminator discriminator-value

      The mapping between the SBFD reflector IP address and discriminator is configured.

      On the device functioning as an SBFD initiator, if the mapping between the SBFD reflector IP address and discriminator is configured using the destination ipv4 remote-discriminator command, the initiator uses the configured discriminator to negotiate with the reflector in order to establish an SBFD session. If such a mapping is not configured, the SBFD initiator uses the reflector IP address as a discriminator by default to complete the negotiation.

      This step is optional. If it is performed, the value of discriminator-value must be the same as that of unsigned-integer-value in the reflector discriminator command configured on the reflector.

    6. Run quit

      Return to the system view.

    7. Run interface tunnel tunnel-number

      The SR-MPLS TE tunnel interface view is displayed.

    8. Run mpls te bfd tunnel enable seamless

      SBFD for SR-MPLS TE tunnel is enabled.

      After the configuration is complete, the SBFD initiator automatically establishes an SBFD session destined for the destination address of an SR-MPLS TE tunnel.

    9. (Optional) Configure an IS-IS SBFD source address.

      In IS-IS multi-process scenarios, you can configure source addresses for SBFD sessions in different IS-IS processes.

      By default, MPLS LSR IDs are used to create SBFD sessions. During SBFD deployment, only an LSR ID can be used as the source of an SBFD session, but the source belongs to only one IS-IS process. As a result, in the multi-process scenarios, LSR ID-based host routes must be imported in route import mode. Otherwise, SBFD cannot take effect. If IS-IS process isolation prevents route import, the device must support SBFD session establishment using different sources in different IS-IS processes.

      1. Run quit

        Return to the system view.

      2. Run isis [ process-id ]

        The IS-IS view is displayed.

      3. Run cost-style { wide | compatible | wide-compatible }

        The IS-IS wide metric is configured.

      4. Run segment-routing mpls

        IS-IS SR-MPLS is enabled.

      5. Run segment-routing sbfd source-address ip-address

        An SBFD source address is configured.

    10. Run commit

      The configuration is committed.

  • Configure the SBFD reflector.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is enabled globally, and the BFD view is displayed.

      You can set BFD parameters only after running the bfd command to enable global BFD.

    3. Run quit

      Return to the system view.

    4. Run sbfd

      SBFD is enabled globally, and the SBFD view is displayed.

    5. Run reflector discriminator { unsigned-integer-value | ip-address-value }

      The discriminator of an SBFD reflector is configured.

    6. Run commit

      The configuration is committed.
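As a consolidated example, the initiator and reflector configurations above can be sketched as follows. The tunnel number, addresses, discriminator, and IS-IS process ID are hypothetical, and the IS-IS block applies only to multi-process scenarios.

```
# SBFD initiator (tunnel ingress); all values are examples only
system-view
 bfd
 quit
 sbfd
  destination ipv4 10.2.2.2 remote-discriminator 100    # optional reflector mapping
  quit
 interface Tunnel1
  mpls te bfd tunnel enable seamless                    # SBFD for SR-MPLS TE tunnel
  quit
 isis 1                                                 # optional: per-process SBFD source
  cost-style wide
  segment-routing mpls
  segment-routing sbfd source-address 10.1.1.1
  quit
 commit

# SBFD reflector
system-view
 bfd
 quit
 sbfd
  reflector discriminator 100                           # must match the initiator's mapping
  quit
 commit
```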

Verifying the Configuration

After you configure SBFD for SR-MPLS TE tunnel, run the display bfd session { all | discriminator discr-value } [ verbose ] command to check information about the SBFD session that monitors an SR-MPLS TE tunnel.

Configuring Static BFD for SR-MPLS TE LSP

Static BFD for SR-MPLS TE LSP can be configured to detect faults on SR-MPLS TE LSPs.

Usage Scenario

BFD detects the connectivity of SR-MPLS TE LSPs. If a BFD session fails to go up through negotiation, an SR-MPLS TE LSP cannot go up. Static BFD for SR-MPLS TE LSP is configured to rapidly switch traffic from a primary LSP to a backup LSP if the primary LSP fails.

Pre-configuration Tasks

Before configuring static BFD for SR-MPLS TE LSP, configure an SR-MPLS TE tunnel.

Procedure

  1. Enable BFD globally.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is enabled globally, and the BFD view is displayed.

      You can set BFD parameters only after running the bfd command to enable BFD globally.

    3. Run commit

      The configuration is committed.

  2. Configure BFD parameters on the ingress.
    1. Run system-view

      The system view is displayed.

    2. Run bfd sessname-value bind mpls-te interface tunnel-name te-lsp [ backup ] [ one-arm-echo ]

      BFD is configured to monitor the primary or backup LSP that is bound to an SR-MPLS TE tunnel.

      If one-arm-echo is configured, a U-BFD session is established to monitor an LSP bound to the SR-MPLS TE tunnel.

      If the egress does not support BFD for SR-MPLS TE, BFD sessions cannot be created. To address this issue, configure U-BFD.

    3. Run discriminator local discr-value

      A local discriminator of a BFD session is set.

    4. Run discriminator remote discr-value

      A remote discriminator of a BFD session is set.

      This command does not need to be run if a U-BFD session is established.

    5. (Optional) Run min-tx-interval tx-interval

      The minimum interval at which the local device sends BFD packets is changed.

      U-BFD sessions do not support this command.

      Actual local interval at which BFD packets are sent = MAX { Locally configured interval at which BFD packets are sent, Remotely configured interval at which BFD packets are received }

      Actual local interval at which BFD packets are received = MAX { Remotely configured interval at which BFD packets are sent, Locally configured interval at which BFD packets are received }

      Local BFD detection period = Actual local interval at which BFD packets are received × Remotely configured BFD detection multiplier

      For example, suppose that the local device sends BFD packets at an interval of 200 ms, receives them at an interval of 300 ms, and uses a detection multiplier of 4, and that the remote device sends BFD packets at an interval of 100 ms, receives them at an interval of 600 ms, and uses a detection multiplier of 5. In this case:
      • On the local device, the actual interval for sending BFD packets is 600 ms calculated using the formula MAX {200 ms, 600 ms}, the actual interval for receiving BFD packets is 300 ms calculated using the formula MAX {100 ms, 300 ms}, and the actual detection period is 1500 ms (300 ms × 5).

      • On the remote device, the actual interval between sending BFD packets is 300 ms calculated using the formula MAX {100 ms, 300 ms}, the actual interval between receiving BFD packets is 600 ms calculated using the formula MAX {200 ms, 600 ms}, and the actual detection period is 2400 ms (600 ms × 4).

    6. (Optional) Run min-rx-interval rx-interval

      The minimum interval at which BFD packets are received locally is set.

      For a U-BFD session, run the min-echo-rx-interval command to set the minimum interval at which BFD packets are received locally.

    7. (Optional) Run detect-multiplier multiplier

      A local BFD detection multiplier is set.

    8. (Optional) Run bit-error-detection

      Bit-error-triggered SR-MPLS TE LSP switching is enabled.

      SR-MPLS TE LSP establishment does not depend on a signaling protocol; an SR-MPLS TE LSP can be established as long as a label stack is delivered. If an SR-MPLS TE LSP encounters bit errors, upper-layer services may be affected. After the bit-error-detection command is run, if BFD detects bit errors on the primary LSP bound to the SR-MPLS TE tunnel, it instructs the SR-MPLS TE tunnel to switch traffic to the backup LSP, minimizing the impact on services.

    9. Run commit

      The configuration is committed.

  3. Configure BFD parameters on the egress.
    1. Run system-view

      The system view is displayed.

    2. Run bfd sessname-value bind mpls-te interface tunnel-name te-lsp [ backup ] [ one-arm-echo ]

      BFD is configured to monitor the primary or backup LSP that is bound to an SR-MPLS TE tunnel.

      If one-arm-echo is configured, a U-BFD session is established to monitor an LSP bound to the SR-MPLS TE tunnel. A Huawei device at the ingress cannot use BFD for SR-MPLS TE LSP to communicate with a non-Huawei device at the egress. In this situation, no BFD session can be established. To establish a BFD session to monitor an LSP bound to the SR-MPLS TE tunnel, configure U-BFD.

    3. Run discriminator local discr-value

      A local discriminator of a BFD session is set.

    4. Run discriminator remote discr-value

      A remote discriminator of a BFD session is set.

      This command does not need to be run if a U-BFD session is established.

    5. (Optional) Run min-tx-interval tx-interval

      The minimum interval at which the local device sends BFD packets is changed.

      U-BFD sessions do not support this command.

      Actual local interval at which BFD packets are sent = MAX { Locally configured interval at which BFD packets are sent, Remotely configured interval at which BFD packets are received }

      Actual local interval at which BFD packets are received = MAX { Remotely configured interval at which BFD packets are sent, Locally configured interval at which BFD packets are received }

      Local BFD detection period = Actual local interval at which BFD packets are received × Remotely configured BFD detection multiplier

      For example, suppose that the local device sends BFD packets at an interval of 200 ms, receives them at an interval of 300 ms, and uses a detection multiplier of 4, and that the remote device sends BFD packets at an interval of 100 ms, receives them at an interval of 600 ms, and uses a detection multiplier of 5. In this case:
      • On the local device, the actual interval for sending BFD packets is 600 ms calculated using the formula MAX {200 ms, 600 ms}, the actual interval for receiving BFD packets is 300 ms calculated using the formula MAX {100 ms, 300 ms}, and the actual detection period is 1500 ms (300 ms × 5).

      • On the remote device, the actual interval between sending BFD packets is 300 ms calculated using the formula MAX {100 ms, 300 ms}, the actual interval between receiving BFD packets is 600 ms calculated using the formula MAX {200 ms, 600 ms}, and the actual detection period is 2400 ms (600 ms × 4).

    6. (Optional) Run min-rx-interval rx-interval

      The minimum interval at which BFD packets are received locally is set.

      For a U-BFD session, run the min-echo-rx-interval command to set the minimum interval at which BFD packets are received locally.

    7. (Optional) Run detect-multiplier multiplier

      A BFD detection multiplier is set locally.

    8. (Optional) Run bit-error-detection

      Bit-error-triggered SR-MPLS TE LSP switching is enabled.

      SR-MPLS TE LSP establishment does not depend on a signaling protocol; an SR-MPLS TE LSP can be established as long as a label stack is delivered. If an SR-MPLS TE LSP encounters bit errors, upper-layer services may be affected. After the bit-error-detection command is run, if BFD detects bit errors on the primary LSP bound to the SR-MPLS TE tunnel, it instructs the SR-MPLS TE tunnel to switch traffic to the backup LSP, minimizing the impact on services.

    9. Run commit

      The configuration is committed.
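The ingress and egress configurations above can be sketched as follows. The session name, tunnel name, discriminators, and timer values are hypothetical; note that the local and remote discriminators must be mirrored between the two ends.

```
# Ingress: monitor the primary LSP bound to Tunnel1
system-view
 bfd
 commit
 quit
 bfd lsp2te bind mpls-te interface Tunnel1 te-lsp
  discriminator local 1
  discriminator remote 2
  min-tx-interval 100        # optional; not supported by U-BFD sessions
  min-rx-interval 100        # optional
  detect-multiplier 3        # optional
  bit-error-detection        # optional: bit-error-triggered LSP switching
  commit

# Egress: same binding, discriminators mirrored
system-view
 bfd lsp2te bind mpls-te interface Tunnel1 te-lsp
  discriminator local 2
  discriminator remote 1
  commit
```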

Verifying the Configuration

After successfully configuring static BFD for SR-MPLS TE LSP, verify the configuration. For example, check whether the BFD session is up.
  • Run the display bfd session mpls-te interface interface-type interface-number te-lsp [ verbose ] command to check information about BFD sessions on the ingress.
  • Run the following commands to check BFD session information about the egress:
    • Run the display bfd session all [ for-lsp ] [ verbose ] command to check the configurations of all BFD sessions.
    • Run the display bfd session static [ for-lsp ] [ verbose ] command to check the configurations of static BFD sessions.
  • Run the following commands to check BFD statistics:
    • Run the display bfd statistics session all [ for-lsp ] command to check statistics about all BFD sessions.
    • Run the display bfd statistics session static [ for-lsp ] command to check statistics about static BFD sessions.
    • Run the display bfd statistics session mpls-te interface interface-type interface-number te-lsp command to check statistics about BFD sessions that monitor LSPs.
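The BFD timer negotiation rules used throughout these procedures can be sanity-checked with a short Python sketch (the function name is illustrative, not a device API):

```python
def effective_bfd(local_tx, local_rx, remote_tx, remote_rx, remote_mult):
    """Apply the MAX-based BFD timer negotiation rules (values in milliseconds)."""
    actual_tx = max(local_tx, remote_rx)   # actual local interval for sending packets
    actual_rx = max(remote_tx, local_rx)   # actual local interval for receiving packets
    detection = actual_rx * remote_mult    # local detection period
    return actual_tx, actual_rx, detection

# Example from the text: local 200/300 ms (multiplier 4), remote 100/600 ms (multiplier 5)
print(effective_bfd(200, 300, 100, 600, 5))  # local end:  (600, 300, 1500)
print(effective_bfd(100, 600, 200, 300, 4))  # remote end: (300, 600, 2400)
```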

Configuring Dynamic BFD for SR-MPLS TE LSP

Dynamic BFD for SR-MPLS TE LSP rapidly detects faults of SR-MPLS TE LSPs, which protects traffic transmitted on SR-MPLS TE LSPs.

Usage Scenario

BFD detects the connectivity of SR-MPLS TE LSPs. Dynamic BFD for SR-MPLS TE LSP is configured to rapidly switch traffic from a primary LSP to a backup LSP if the primary LSP fails. Unlike static BFD for SR-MPLS TE LSP, dynamic BFD for SR-MPLS TE LSP simplifies the configuration and minimizes manual configuration errors.

Note that dynamic BFD monitors only the LSPs of an SR-MPLS TE tunnel, not the tunnel as a whole.

Pre-configuration Tasks

Before configuring dynamic BFD for SR-MPLS TE LSP, complete the following task:

  • Configure an SR-MPLS TE tunnel.

Procedure

  1. Enable BFD globally.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is enabled globally, and the BFD view is displayed.

      You can set BFD parameters only after running the bfd command to enable BFD globally.

    3. Run commit

      The configuration is committed.

  2. Enable the ingress to dynamically create a BFD session to monitor SR-MPLS TE LSPs.

    Perform either of the following operations to enable the ingress to dynamically create a BFD session to monitor SR-MPLS TE LSPs:

    • Globally enable the capability if BFD sessions need to be automatically created for most SR-MPLS TE tunnels on the ingress.

    • Enable the capability on a specific tunnel interface if a BFD session needs to be automatically created for a specific or some SR-MPLS TE tunnels on the ingress.

    Run the following commands as needed.

    • Globally enable the capability.
      1. Run system-view

        The system view is displayed.

      2. Run mpls

        The MPLS view is displayed.

      3. Run mpls te bfd enable [ one-arm-echo ]

        The ingress is configured to automatically create a BFD session for each SR-MPLS TE tunnel.

        After this command is run in the MPLS view, BFD for SR-MPLS TE LSP is enabled on all tunnel interfaces, except the tunnel interfaces on which BFD for SR-MPLS TE LSP is blocked.

        If one-arm-echo is configured, a one-arm BFD echo session is established to monitor an SR-MPLS TE LSP.

        If the egress does not support BFD for SR-MPLS TE, BFD sessions cannot be created. To address this issue, configure one-arm BFD.

      4. (Optional) If some SR-MPLS TE tunnels do not need to be monitored using BFD for SR-MPLS TE LSP, block BFD for SR-MPLS TE LSP on each tunnel interface:
        1. Run quit

          Return to the system view.

        2. Run interface tunnel interface-number

          The SR-MPLS TE tunnel interface view is displayed.

        3. Run mpls te bfd block

          The tunnel interface is disabled from automatically creating a BFD session to monitor an SR-MPLS TE tunnel.

      5. Run commit

        The configuration is committed.

    • Enable the capability on a tunnel interface.
      • Run system-view

        The system view is displayed.

      • Run interface tunnel interface-number

        The view of the SR-MPLS TE tunnel interface is displayed.

      • Run mpls te bfd enable [ one-arm-echo ]

        The ingress is configured to automatically create a BFD session for the tunnel.

        This command run in the tunnel interface view takes effect only on the tunnel interface.

        If one-arm-echo is configured, a one-arm BFD echo session is established to monitor an SR-MPLS TE LSP. A Huawei device at the ingress cannot use BFD for SR-MPLS TE LSP to communicate with a non-Huawei device at the egress. In this situation, no BFD session can be established. To address this issue, configure one-arm BFD.

      • Run commit

        The configuration is committed.

  3. Enable the egress to passively create a BFD session.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      The BFD view is displayed.

    3. Run mpls-passive

      The egress is enabled to create a BFD session passively.

      The egress creates a BFD session only after receiving an LSP ping request carrying a BFD TLV.

    4. Run commit

      The configuration is committed.

  4. (Optional) Adjust BFD parameters on the ingress.

    Adjust BFD parameters on the ingress in either of the following modes:

    • Adjust BFD parameters globally. This method is used when BFD parameters for most SR-MPLS TE tunnels need to be adjusted on the ingress.

    • Adjust BFD parameters on a specific tunnel interface. If an SR-MPLS TE tunnel interface needs BFD parameters different from the globally configured ones, adjust BFD parameters on the specific tunnel interface.

    • Effective local interval at which BFD packets are sent = MAX { Locally configured interval at which BFD packets are sent, Remotely configured interval at which BFD packets are received }
    • Effective local interval at which BFD packets are received = MAX { Remotely configured interval at which BFD packets are sent, Locally configured interval at which BFD packets are received }
    • Effective local BFD detection period = Effective local interval at which BFD packets are received × Remotely configured BFD detection multiplier

    On the egress that passively creates a BFD session, the BFD parameters cannot be adjusted, because the default values are the smallest values that can be set on the ingress. Therefore, if BFD for TE is used, the effective BFD detection period on both ends of an SR-MPLS TE tunnel is as follows:

    • Effective detection period on the ingress = Configured interval at which BFD packets are received on the ingress x 3

    • Effective detection period on the egress = Configured interval at which BFD packets are sent on the ingress x Configured detection multiplier on the ingress

    Run the following commands as needed.

    • Adjust BFD parameters globally.
      • Run system-view

        The system view is displayed.

      • Run mpls

        The MPLS view is displayed.

      • Run mpls te bfd { min-tx-interval tx-interval | min-rx-interval rx-interval | detect-multiplier multiplier }*

        The BFD parameters are set.

      • Run commit

        The configuration is committed.

    • Adjust BFD parameters on a specific tunnel interface.
      • Run system-view

        The system view is displayed.

      • Run interface tunnel interface-number

        The tunnel interface view is displayed.

      • Run mpls te bfd { min-tx-interval tx-interval | min-rx-interval rx-interval | detect-multiplier multiplier }*

        The BFD parameters are set.

      • Run commit

        The configuration is committed.
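A minimal sketch combining the steps above (global enablement on the ingress, passive session creation on the egress); the tunnel number and timer values are hypothetical.

```
# Ingress: dynamically create BFD sessions for all SR-MPLS TE tunnels
system-view
 bfd
 commit
 quit
 mpls
  mpls te bfd enable                                                  # or: mpls te bfd enable one-arm-echo
  mpls te bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3   # optional
  quit
 interface Tunnel2
  mpls te bfd block                                                   # optional: exclude this tunnel
  quit
 commit

# Egress: passively create BFD sessions
system-view
 bfd
  mpls-passive
  quit
 commit
```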

Verifying the Configuration

After the configuration of dynamic BFD for SR-MPLS TE LSP is complete, verify the configurations.
  • Run the display bfd session dynamic [ verbose ] command to check information about BFD sessions on the ingress.
  • Run the display bfd session passive-dynamic [ peer-ip peer-ip remote-discriminator discriminator ] [ verbose ] command to check information about BFD sessions that are passively created on the egress.
  • Run the following commands to check BFD statistics:
    • Run the display bfd statistics command to check all BFD statistics.
    • Run the display bfd statistics session dynamic command to check statistics about dynamic BFD sessions.
  • Run the display mpls bfd session { protocol rsvp-te | outgoing-interface interface-type interface-number } [ verbose ] command to check information about BFD sessions for MPLS.

Configuring Static BFD for SR-MPLS TE Tunnel

This section describes how to configure static BFD for SR-MPLS TE to detect SR-MPLS TE tunnel faults.

Usage Scenario

BFD can be used to monitor SR-MPLS TE tunnels. If the primary tunnel fails, BFD instructs applications such as VPN FRR to quickly switch traffic, minimizing the impact on services.

Pre-configuration Tasks

Before configuring static BFD for SR-MPLS TE, configure SR-MPLS TE tunnels.

Procedure

  1. Enable BFD globally.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is enabled globally, and the BFD view is displayed.

      You can set BFD parameters only after running the bfd command to enable BFD globally.

    3. Run commit

      The configuration is committed.

  2. Set ingress BFD parameters.
    1. Run system-view

      The system view is displayed.

    2. Run bfd sessname-value bind mpls-te interface tunnel-name

      BFD is configured to monitor an SR-MPLS TE tunnel.

    3. Run discriminator local discr-value

      A local discriminator is configured for the BFD session.

    4. Run discriminator remote discr-value

      A remote discriminator is configured for the BFD session.

      A U-BFD session does not require any remote discriminator.

    5. (Optional) Run min-tx-interval tx-interval

      The minimum interval at which the local device sends BFD packets is changed.

      This command cannot be run for a one-arm BFD session.

      Actual local interval at which BFD packets are sent = MAX { Locally configured interval at which BFD packets are sent, Remotely configured interval at which BFD packets are received }

      Actual local interval at which BFD packets are received = MAX { Remotely configured interval at which BFD packets are sent, Locally configured interval at which BFD packets are received }

      Local BFD detection period = Actual local interval at which BFD packets are received × Remotely configured BFD detection multiplier

      For example, suppose that the local device sends BFD packets at an interval of 200 ms, receives them at an interval of 300 ms, and uses a detection multiplier of 4, and that the remote device sends BFD packets at an interval of 100 ms, receives them at an interval of 600 ms, and uses a detection multiplier of 5. In this case:
      • On the local device, the actual interval for sending BFD packets is 600 ms calculated using the formula MAX {200 ms, 600 ms}, the actual interval for receiving BFD packets is 300 ms calculated using the formula MAX {100 ms, 300 ms}, and the actual detection period is 1500 ms (300 ms × 5).

      • On the remote device, the actual interval between sending BFD packets is 300 ms calculated using the formula MAX {100 ms, 300 ms}, the actual interval between receiving BFD packets is 600 ms calculated using the formula MAX {200 ms, 600 ms}, and the actual detection period is 2400 ms (600 ms × 4).

    6. (Optional) Run min-rx-interval rx-interval

      The minimum interval at which the local device receives BFD packets is changed.

      For a U-BFD session, run the min-echo-rx-interval command to set the minimum interval at which the local device receives BFD packets.

    7. (Optional) Run detect-multiplier multiplier

      The local BFD detection multiplier is changed.

    8. Run commit

      The configuration is committed.

  3. Set egress BFD parameters.
    1. Run system-view

      The system view is displayed.

    2. Run bfd sessname-value bind mpls-te interface tunnel-name

      BFD is configured to monitor an SR-MPLS TE tunnel.

    3. Run discriminator local discr-value

      A local discriminator is configured for the BFD session.

    4. Run discriminator remote discr-value

      A remote discriminator is configured for the BFD session.

      A U-BFD session does not require any remote discriminator.

    5. (Optional) Run min-tx-interval tx-interval

      The minimum interval at which the local device sends BFD packets is changed.

      U-BFD sessions do not support this command.

      If the reverse link is an IP link, this command cannot be run.

      Actual local interval at which BFD packets are sent = MAX { Locally configured interval at which BFD packets are sent, Remotely configured interval at which BFD packets are received }

      Actual local interval at which BFD packets are received = MAX { Remotely configured interval at which BFD packets are sent, Locally configured interval at which BFD packets are received }

      Local BFD detection period = Actual local interval at which BFD packets are received × Remotely configured BFD detection multiplier

      For example, suppose that the local device sends BFD packets at an interval of 200 ms, receives them at an interval of 300 ms, and uses a detection multiplier of 4, and that the remote device sends BFD packets at an interval of 100 ms, receives them at an interval of 600 ms, and uses a detection multiplier of 5. In this case:
      • On the local device, the actual interval for sending BFD packets is 600 ms calculated using the formula MAX {200 ms, 600 ms}, the actual interval for receiving BFD packets is 300 ms calculated using the formula MAX {100 ms, 300 ms}, and the actual detection period is 1500 ms (300 ms × 5).

      • On the remote device, the actual interval between sending BFD packets is 300 ms calculated using the formula MAX {100 ms, 300 ms}, the actual interval between receiving BFD packets is 600 ms calculated using the formula MAX {200 ms, 600 ms}, and the actual detection period is 2400 ms (600 ms × 4).

    6. (Optional) Run min-rx-interval rx-interval

      The minimum interval at which the local device receives BFD packets is changed.

      For a U-BFD session, run the min-echo-rx-interval command to set the minimum interval at which the local device receives BFD packets.

    7. (Optional) Run detect-multiplier multiplier

      The local BFD detection multiplier is changed.

    8. Run commit

      The configuration is committed.
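The ingress and egress steps above can be sketched as follows; the session name, tunnel name, and discriminators are hypothetical and the discriminators must be mirrored between the two ends.

```
# Ingress
system-view
 bfd
 commit
 quit
 bfd te01 bind mpls-te interface Tunnel1
  discriminator local 10
  discriminator remote 20       # not required for a U-BFD session
  commit

# Egress: discriminators mirrored
system-view
 bfd te01 bind mpls-te interface Tunnel1
  discriminator local 20
  discriminator remote 10
  commit
```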

Verifying the Configuration

After configuring static BFD for SR-MPLS TE, check the configurations.
  • Run the display bfd session mpls-te interface interface-type interface-number [ verbose ] command to check BFD session information on the tunnel ingress.
  • Check BFD session information on the tunnel egress.
    • To check all BFD sessions' information, run the display bfd session all [ for-te ] [ verbose ] command.
    • To check static BFD sessions' information, run the display bfd session static [ for-te ] [ verbose ] command.
  • Check BFD statistics.
    • To check statistics about all BFD sessions, run the display bfd statistics session all [ for-te ] command.
    • To check statistics about static BFD sessions, run the display bfd statistics session static [ for-te ] command.
    • To check statistics about BFD sessions, run the display bfd statistics session mpls-te interface interface-type interface-number command.

Configuring One-Arm BFD for E2E SR-MPLS TE Tunnel

One-arm BFD for E2E SR-MPLS TE tunnel quickly detects faults on inter-AS E2E SR-MPLS TE tunnels and protects traffic on the E2E SR-MPLS TE tunnels.

Usage Scenario

If one-arm BFD for inter-AS E2E SR-MPLS TE tunnel detects a fault on the primary tunnel, a protection application, for example, VPN FRR, rapidly switches traffic, which minimizes the impact on traffic.

With one-arm BFD for E2E SR-MPLS TE tunnel enabled, if the reflector can successfully recurse packets to the E2E SR-MPLS TE tunnel using the IP address of the initiator, the reflector forwards the packets through the E2E SR-MPLS TE tunnel. Otherwise, the reflector forwards the packets over IP routes.

Pre-configuration Tasks

Before configuring one-arm BFD for E2E SR-MPLS TE tunnel, configure an inter-AS E2E SR-MPLS TE tunnel.

Procedure

  1. Enable BFD globally.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is enabled globally, and the BFD view is displayed.

    3. Run commit

      The configuration is committed.

  2. Enable the ingress to dynamically create a BFD session to monitor E2E SR-MPLS TE tunnels.

    Perform either of the following operations:

    • Globally enable the capability if BFD sessions need to be automatically created for most E2E SR-MPLS TE tunnels on the ingress.

    • Enable the capability on a specific tunnel interface if a BFD session needs to be automatically created for some E2E SR-MPLS TE tunnels on the ingress.

    Run the following commands as needed.

    • Enable the capability globally.
      1. Run system-view

        The system view is displayed.

      2. Run mpls

        The MPLS view is displayed.

      3. Run mpls te bfd tunnel enable one-arm-echo

        One-arm BFD for E2E SR-MPLS TE tunnel is enabled.

      4. Run quit

        Return to the system view.

      5. Run interface tunnel interface-number

        The view of the E2E SR-MPLS TE tunnel interface is displayed.

      6. Run mpls te reverse-lsp binding-sid label label-value

        A binding SID is set for a reverse LSP in the E2E SR-MPLS TE tunnel.

        Ensure that the mpls te binding-sid label label-value command has been run on the ingress of the reverse LSP.

      7. (Optional) Run mpls te bfd block

        The capability of automatically creating BFD sessions for the E2E SR-MPLS TE tunnel is blocked.

        If some SR-MPLS TE tunnels do not need to be monitored using BFD for E2E SR-MPLS TE tunnel, block this capability on each such tunnel interface.

      8. Run commit

        The configuration is committed.

    • Enable the capability on a tunnel interface.
      • Run system-view

        The system view is displayed.

      • Run interface tunnel interface-number

        The view of the E2E SR-MPLS TE tunnel interface is displayed.

      • Run mpls te bfd tunnel enable one-arm-echo

        One-arm BFD for E2E SR-MPLS TE tunnel is enabled.

        This command run in the tunnel interface view takes effect only on the tunnel interface.

      • Run mpls te reverse-lsp binding-sid label label-value

        A binding SID is set for a reverse LSP in the E2E SR-MPLS TE tunnel.

        Ensure that the mpls te binding-sid label label-value command has been run on the ingress of the reverse LSP.

      • Run commit

        The configuration is committed.

  3. (Optional) Adjust BFD parameters on the ingress.

    Adjust BFD parameters on the ingress in either of the following modes:

    • Adjust BFD parameters globally. This method is used when BFD parameters for most E2E SR-MPLS TE tunnels need to be adjusted on the ingress.

    • Adjust BFD parameters on a specific tunnel interface. If an E2E SR-MPLS TE tunnel interface needs BFD parameters different from the globally configured ones, adjust BFD parameters on the specific tunnel interface.

    In one-arm BFD for E2E SR-MPLS TE mode, BFD does not need to be enabled on the peer, and the min-tx-interval tx-interval parameter configured on the local end does not take effect. Therefore, the actual detection period on the ingress equals the interval at which BFD packets are received on the ingress (min-rx-interval) multiplied by the detection multiplier configured on the ingress.

    Run the following commands as needed.

    • Adjust BFD parameters globally.
      • Run system-view

        The system view is displayed.

      • Run mpls

        The MPLS view is displayed.

      • Run mpls te bfd tunnel { min-rx-interval rx-interval | detect-multiplier multiplier } *

        BFD parameters are set.

      • Run commit

        The configuration is committed.

    • Adjust BFD parameters on a specific tunnel interface.
      • Run system-view

        The system view is displayed.

      • Run interface tunnel interface-number

        The tunnel interface view is displayed.

      • Run mpls te bfd tunnel { min-rx-interval rx-interval | detect-multiplier multiplier } *

        BFD parameters are set.

      • Run commit

        The configuration is committed.
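    As a worked example of the detection period described above: with a receive interval of 100 ms and a detection multiplier of 3 on a tunnel interface, the ingress declares a fault after 100 ms x 3 = 300 ms without received BFD packets. The tunnel number and parameter values are illustrative:

    ```
    <HUAWEI> system-view
    [~HUAWEI] interface Tunnel10
    [~HUAWEI-Tunnel10] mpls te bfd tunnel min-rx-interval 100 detect-multiplier 3
    [*HUAWEI-Tunnel10] commit
    ```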

Verifying the Configuration

After successfully configuring one-arm BFD for E2E SR-MPLS TE tunnel, verify the configurations.
  • Run the display bfd session dynamic [ verbose ] command to check information about BFD sessions on the ingress.
  • Run the following commands to check BFD statistics:
    • Run the display bfd statistics command to check all BFD statistics.
    • Run the display bfd statistics session dynamic command to check statistics about dynamic BFD sessions.

Configuring One-Arm BFD for E2E SR-MPLS TE LSP

One-arm BFD for E2E SR-MPLS TE LSP quickly detects faults on inter-AS E2E SR-MPLS TE LSPs and protects traffic on E2E SR-MPLS TE LSPs.

Usage Scenario

One-arm BFD monitors inter-AS E2E SR-MPLS TE LSPs. If a specified transit node on a tunnel fails, traffic is switched to the hot-standby LSP, which reduces the impact on services.

With one-arm BFD for E2E SR-MPLS TE LSP enabled, if the reflector can successfully recurse packets to the E2E SR-MPLS TE LSP using the IP address of the initiator, the reflector forwards the packets through the E2E SR-MPLS TE LSP. Otherwise, the reflector forwards the packets over IP routes.

Pre-configuration Tasks

Before configuring one-arm BFD for E2E SR-MPLS TE LSP, configure an inter-AS E2E SR-MPLS TE tunnel.

Procedure

  1. Enable global BFD.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is enabled.

    3. Run commit

      The configuration is committed.

  2. Enable the ingress to dynamically create a one-arm BFD session to monitor E2E SR-MPLS TE LSPs.
    1. Run system-view

      The system view is displayed.

    2. Run interface tunnel interface-number

      The view of the E2E SR-MPLS TE tunnel interface is displayed.

    3. Run mpls te bfd enable one-arm-echo [ primary ]

      Automatic creation of a one-arm BFD session to monitor E2E SR-MPLS TE LSPs is triggered.

    4. Run mpls te reverse-lsp binding-sid label label-value

      A binding SID is specified for a reverse LSP in the E2E SR-MPLS TE tunnel.

      Ensure that the mpls te binding-sid label label-value command has been run on the ingress of the reverse LSP.

    5. Run commit

      The configuration is committed.
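    Steps 1 and 2 can be sketched as follows. The tunnel number, binding SID label, and view prompts are illustrative:

    ```
    <HUAWEI> system-view
    [~HUAWEI] bfd
    [*HUAWEI-bfd] quit
    [*HUAWEI] interface Tunnel20
    [*HUAWEI-Tunnel20] mpls te bfd enable one-arm-echo
    [*HUAWEI-Tunnel20] mpls te reverse-lsp binding-sid label 48101
    [*HUAWEI-Tunnel20] commit
    ```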

  3. (Optional) Adjust BFD parameters on the ingress.

    Adjust BFD parameters on the ingress in either of the following modes:

    • Adjust BFD parameters globally. This method is used when BFD parameters for most E2E SR-MPLS TE tunnels need to be adjusted on the ingress.

    • Adjust BFD parameters on a specific tunnel interface. If an E2E SR-MPLS TE tunnel interface needs BFD parameters different from the globally configured ones, adjust BFD parameters on the specific tunnel interface.

    In one-arm BFD for E2E SR-MPLS TE mode, BFD does not need to be enabled on the peer, and the min-tx-interval tx-interval parameter configured on the local end does not take effect. Therefore, the actual detection period on the ingress equals the interval at which BFD packets are received on the ingress (min-rx-interval) multiplied by the detection multiplier configured on the ingress.

    Run the following commands as needed.

    • Adjust BFD parameters globally.
      • Run system-view

        The system view is displayed.

      • Run mpls

        The MPLS view is displayed.

      • Run mpls te bfd { min-rx-interval rx-interval | detect-multiplier multiplier } *

        BFD parameters are set.

      • Run commit

        The configuration is committed.

    • Adjust BFD parameters on a specific tunnel interface.
      • Run system-view

        The system view is displayed.

      • Run interface tunnel interface-number

        The tunnel interface view is displayed.

      • Run mpls te bfd { min-rx-interval rx-interval | detect-multiplier multiplier } *

        BFD parameters are set.

      • Run commit

        The configuration is committed.

Verifying the Configuration

After successfully configuring one-arm BFD for E2E SR-MPLS TE LSP, verify the configurations.
  • Run the display bfd session dynamic [ verbose ] command to check information about BFD sessions on the ingress.
  • Run the following commands to check BFD statistics:
    • Run the display bfd statistics command to check all BFD statistics.
    • Run the display bfd statistics session dynamic command to check statistics about dynamic BFD sessions.

Configuring an SR-MPLS TE Policy (Manual Configuration)

SR-MPLS TE Policy is a tunneling technology developed based on SR.

Usage Scenario

An SR-MPLS TE Policy is a set of candidate paths consisting of one or more segment lists, that is, segment ID (SID) lists. Each SID list identifies an end-to-end path from the source to the destination, instructing a device to forward traffic through the path rather than the shortest path computed by an IGP. The header of a packet steered into an SR-MPLS TE Policy is augmented with an ordered list of segments associated with that SR-MPLS TE Policy, so that other devices on the network can execute the instructions encapsulated into the list.

You can configure an SR-MPLS TE Policy through the CLI or NETCONF. For a manually configured SR-MPLS TE Policy, information such as the endpoint and color attributes, the preference values of candidate paths, and segment lists must be configured, and the preference values must be unique. The first-hop label of a segment list can be a node, adjacency, BGP EPE, parallel, or anycast SID, but cannot be a binding SID. Ensure that the first-hop label is a local incoming label, so that the forwarding plane can use this label to search the local forwarding table for the corresponding forwarding entry.

Pre-configuration Tasks

Before configuring an SR-MPLS TE Policy, configure an IGP to ensure that all nodes can communicate with each other at the network layer.

Configuring IGP SR

This section describes how to configure IGP SR.

Context

An SR-MPLS TE Policy may contain multiple candidate paths that are defined using segment lists. Because adjacency and node SIDs are required for SR-MPLS TE Policy configuration, you need to configure the IGP SR function. Node SIDs need to be configured manually, whereas adjacency SIDs can either be generated dynamically by an IGP or be configured manually.
  • In scenarios where SR-MPLS TE Policies are configured manually, if dynamic adjacency SIDs are used, the adjacency SIDs may change after an IGP restart. To keep the SR-MPLS TE Policies up, you must adjust them manually. The adjustment workload is heavy if a large number of SR-MPLS TE Policies are configured manually. Therefore, you are advised to configure adjacency SIDs manually and not to use adjacency SIDs generated dynamically.
  • In scenarios where SR-MPLS TE Policies are delivered by a controller dynamically, you are also advised to configure adjacency SIDs manually. Although the controller can use BGP-LS to detect adjacency SID changes, the adjacency SIDs dynamically generated by an IGP change randomly, causing inconvenience in routine maintenance and troubleshooting.

Procedure

  1. Enable MPLS.
    1. Run system-view

      The system view is displayed.

    2. Run mpls lsr-id lsr-id

      An LSR ID is configured for the device.

      Note the following during LSR ID configuration:
      • Configuring LSR IDs is the prerequisite for all MPLS configurations.

      • LSRs do not have default IDs. LSR IDs must be manually configured.

      • Using a loopback interface address as the LSR ID is recommended for an LSR.

    3. (Optional) Run mpls

      MPLS is enabled, and the MPLS view is displayed.

      SR-MPLS uses the MPLS forwarding plane, and therefore requires MPLS to be enabled. However, the MPLS capability is automatically enabled on an interface when any of the following conditions is met, without the need to perform this step:

      • MPLS is enabled on the interface.
      • Segment Routing is enabled for IGP, and IGP is enabled on the interface.
      • A static adjacency label is configured for the interface in the Segment Routing view.

    4. Run commit

      The configuration is committed.

    5. Run quit

      Return to the system view.

  2. Enable SR globally.
    1. Run segment-routing

      Segment Routing is enabled globally, and the Segment Routing view is displayed.

    2. (Optional) Run local-block begin-value end-value [ ignore-conflict ]

      An SR-MPLS-specific SRLB range is configured.

      If the system displays a message indicating that the SRLB is used, run the display segment-routing local-block command to view the SRLB ranges that can be set. Alternatively, delete unwanted configurations related to the used label to release the label space.

      When the ignore-conflict parameter is specified in the local-block command, if a label conflict occurs, the configuration is forcibly delivered but does not occupy label resources. This function is mainly used for pre-deployment, and the configuration takes effect after the device is restarted.

    3. Run commit

      The configuration is committed.

    4. Run quit

      Return to the system view.
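    Steps 1 and 2 can be sketched as follows. The LSR ID and SRLB range are illustrative:

    ```
    <HUAWEI> system-view
    [~HUAWEI] mpls lsr-id 1.1.1.9
    [*HUAWEI] commit
    [~HUAWEI] segment-routing
    [*HUAWEI-segment-routing] local-block 16000 16999
    [*HUAWEI-segment-routing] commit
    [~HUAWEI-segment-routing] quit
    ```

    Using a loopback interface address (here 1.1.1.9) as the LSR ID follows the recommendation above.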

  3. Configure IGP SR.
    • If the IGP is IS-IS, configure IGP SR by referring to Configuring Basic SR-MPLS TE Functions.

      In scenarios where SR-MPLS TE Policies need to be manually configured, you are advised to use manually configured adjacency SIDs. In scenarios where a controller dynamically delivers SR-MPLS TE Policies, you are advised to preferentially use manually configured adjacency SIDs. However, you can also use IS-IS to dynamically generate adjacency SIDs.

    • If the IGP is OSPF, configure IGP SR by referring to Configuring Basic SR-MPLS TE Functions.

      Because OSPF cannot advertise manually configured adjacency SIDs, only dynamically generated adjacency SIDs can be used.

Configuring an SR-MPLS TE Policy

SR-MPLS TE Policy is a tunneling technology developed based on SR.

Context

SR-MPLS TE Policies are used to direct traffic to traverse an SR-MPLS TE network. Each SR-MPLS TE Policy can have multiple candidate paths with different preferences. From the valid candidate paths, the one with the highest preference is selected as the primary path, and the one with the second highest preference is selected as the backup path.

Procedure

  • Configure a segment list.
    1. Run system-view

      The system view is displayed.

    2. Run segment-routing

      SR is enabled globally and the Segment Routing view is displayed.

    3. Run segment-list (Segment Routing view) list-name

      A segment list is created for an SR-MPLS TE candidate path and the segment list view is displayed.

    4. Run index index sid label label

      A next-hop SID is configured for the segment list.

      You can run the command multiple times to configure multiple SIDs. The system generates a segment list with a label stack containing SIDs that are placed by index in ascending order. If a candidate path in the SR-MPLS TE Policy is preferentially selected, traffic is forwarded using the segment list of the candidate path. A maximum of 10 SIDs can be configured for each segment list.

    5. Run commit

      The configuration is committed.
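  A minimal segment list sketch is shown below, assuming SIDs 16100 and 16200 are valid labels on the downstream nodes. The list name, indexes, and labels are illustrative:

  ```
  <HUAWEI> system-view
  [~HUAWEI] segment-routing
  [~HUAWEI-segment-routing] segment-list path1
  [*HUAWEI-segment-routing-segment-list-path1] index 10 sid label 16100
  [*HUAWEI-segment-routing-segment-list-path1] index 20 sid label 16200
  [*HUAWEI-segment-routing-segment-list-path1] commit
  ```

  The resulting label stack places the SIDs by index in ascending order, so 16100 becomes the first-hop label.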

  • Configure an SR-MPLS TE Policy.
    1. Run system-view

      The system view is displayed.

    2. Run segment-routing

      SR is enabled globally and the Segment Routing view is displayed.

    3. (Optional) Run sr-te-policy bgp-ls enable

      The SR-MPLS TE Policy is enabled to report information to BGP-LS.

      After the sr-te-policy bgp-ls enable command is run, the system reports path information to BGP-LS at the granularity of the candidate paths of the SR-MPLS TE Policy.

    4. Run sr-te policy policy-name [ endpoint ipv4-address color color-value ]

      An SR-MPLS TE Policy is created, the endpoint and color value are configured for the SR-MPLS TE Policy, and the SR-MPLS TE Policy view is displayed.

    5. (Optional) Run binding-sid label-value

      A binding SID is configured for the SR-MPLS TE Policy.

      The value of label-value needs to be within the scope defined by the local-block begin-value end-value command.

    6. (Optional) Run mtu mtu

      An MTU is configured for the SR-MPLS TE Policy.

    7. (Optional) Run diffserv-mode { pipe service-class service-color | uniform }

      A DiffServ mode is configured for the SR-MPLS TE Policy to implement end-to-end QoS guarantee.

    8. Run candidate-path preference preference

      A candidate path and its preference are configured for the SR-MPLS TE Policy.

      Each SR-MPLS TE Policy supports multiple candidate paths. A larger preference value indicates a higher preference. The candidate path with the highest preference takes effect.

    9. Run segment-list (candidate path view) list-name [ weight weight-value ]

      A segment list is configured to reference the SR-MPLS TE candidate path.

      The segment list must have been created using the segment-list (Segment Routing view) command.

      You can use the weight weight-value parameter to configure a weight for the segment list. If the weight configured for a segment list of a candidate path is less than the average weight, the segment list does not forward traffic. The average weight is calculated using the following formula:

      Average weight = Sum of the weights of all segment lists of a candidate path/Maximum number of channels supported for load balancing

      For example, if the maximum number of channels supported for load balancing is M and a candidate path has a total of N segment lists, the average weight X is calculated as follows: X = (Weight 1 + Weight 2 + ... + Weight N)/M. Segment lists whose weights are less than X do not forward traffic.

    10. Run commit

      The configuration is committed.
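  Putting the steps together, a minimal SR-MPLS TE Policy sketch that references a previously created segment list named path1 is shown below. The endpoint, color, binding SID (assumed to fall within the configured SRLB), preference, and view prompts are illustrative:

  ```
  <HUAWEI> system-view
  [~HUAWEI] segment-routing
  [~HUAWEI-segment-routing] sr-te policy policy1 endpoint 10.1.1.2 color 100
  [*HUAWEI-segment-routing-te-policy-policy1] binding-sid 16500
  [*HUAWEI-segment-routing-te-policy-policy1] candidate-path preference 200
  [*HUAWEI-segment-routing-te-policy-policy1-path] segment-list path1
  [*HUAWEI-segment-routing-te-policy-policy1-path] commit
  ```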

(Optional) Configuring Cost Values for SR-MPLS TE Policies

Configure cost values for SR-MPLS TE Policies so that the ingress can select the optimal SR-MPLS TE Policy based on the values.

Context

By default, the cost (or metric) values of SR-MPLS TE Policies are independent of IGP cost values, that is, the cost values of the IGP routes on which the SR-MPLS TE Policies depend. The default cost value of an SR-MPLS TE Policy is 0. As a result, the ingress cannot perform cost-based SR-MPLS TE Policy selection. To address this problem, you can either configure SR-MPLS TE Policies to inherit IGP cost values or directly configure absolute cost values for the SR-MPLS TE Policies.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run segment-routing

    SR is enabled globally, and the SR view is displayed.

  3. Run sr-te policy policy-name [ endpoint ipv4-address color color-value ]

    An SR-MPLS TE Policy with the specified endpoint and color is created, and the SR-MPLS TE Policy view is displayed.

  4. Run metric inherit-igp

    The SR-MPLS TE Policy is configured to inherit the IGP cost.

  5. Run igp metric absolute absoluteValue

    An absolute cost is configured for the SR-MPLS TE Policy.

    When an SR-MPLS TE Policy is configured to inherit the IGP cost, the cost of the SR-MPLS TE Policy is reported to the tunnel management module according to the following rules:

    • If the igp metric absolute command is run to configure an absolute IGP cost, the configured cost is reported as the cost of the SR-MPLS TE Policy.
    • If the igp metric absolute command is not run, the cost of the route that has a 32-bit mask and is destined for the endpoint of the SR-MPLS TE Policy is subscribed to and reported as the cost of the SR-MPLS TE Policy. In this case, the route cost may be different from the actual path cost of the SR-MPLS TE Policy.
    • If the metric inherit-igp command is not run, the cost of the SR-MPLS TE Policy reported to the tunnel management module is 0.

  6. Run commit

    The configuration is committed.
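  The cost configuration above might be sketched as follows for an existing policy (the policy name, endpoint, color, and cost value are illustrative). With both commands present, the absolute value 20 is reported as the policy cost according to the rules listed in step 5:

  ```
  <HUAWEI> system-view
  [~HUAWEI] segment-routing
  [~HUAWEI-segment-routing] sr-te policy policy1 endpoint 10.1.1.2 color 100
  [*HUAWEI-segment-routing-te-policy-policy1] metric inherit-igp
  [*HUAWEI-segment-routing-te-policy-policy1] igp metric absolute 20
  [*HUAWEI-segment-routing-te-policy-policy1] commit
  ```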

Configuring a BGP Extended Community

This section describes how to add a BGP extended community, that is, the Color Extended Community, to a route through a route-policy, enabling the route to be recursed to an SR-MPLS TE Policy based on the color value and next-hop address in the route.

Context

The route coloring process is as follows:

  1. Configure a route-policy and set a specific color value for the desired route.

  2. Apply the route-policy to a BGP peer or a VPN instance as an import or export policy.

Procedure

  1. Configure a route-policy.
    1. Run system-view

      The system view is displayed.

    2. Run route-policy route-policy-name { deny | permit } node node

      A route-policy with a specified node is created, and the route-policy view is displayed.

    3. (Optional) Configure an if-match clause as a route-policy filter criterion. The Color Extended Community can be added or modified only for routes that meet the filter criterion.

      For details about the configuration, see (Optional) Configuring an if-match Clause.

    4. Run apply extcommunity color color

      The Color Extended Community is configured.

    5. Run commit

      The configuration is committed.
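    A minimal route-policy sketch that colors matching routes is shown below. The policy name, node number, and color value are illustrative, and the color is assumed to be expressed in the 0:color-value format:

    ```
    <HUAWEI> system-view
    [~HUAWEI] route-policy color100 permit node 10
    [*HUAWEI-route-policy] apply extcommunity color 0:100
    [*HUAWEI-route-policy] commit
    ```

    To recurse routes to a specific SR-MPLS TE Policy, the applied color value must match the color configured for that policy.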

  2. Apply the route-policy.
    • Apply the route-policy to a BGP IPv4 unicast peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv4-family unicast

        The IPv4 unicast address family view is displayed.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP4+ 6PE peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP4+ 6PE peer is created.

      4. Run ipv6-family unicast

        The IPv6 unicast address family view is displayed.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP4+ 6PE import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP VPNv4 peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv4-family vpnv4

        The BGP VPNv4 address family view is displayed.

      5. Run peer { ipv4-address | group-name } enable

        The BGP VPNv4 peer relationship is enabled.

      6. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      7. Run commit

        The configuration is committed.
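    The BGP VPNv4 case above can be sketched as follows, applying a hypothetical route-policy named color100 as an import policy. The AS number and peer address are illustrative:

    ```
    <HUAWEI> system-view
    [~HUAWEI] bgp 100
    [*HUAWEI-bgp] peer 10.2.2.2 as-number 100
    [*HUAWEI-bgp] ipv4-family vpnv4
    [*HUAWEI-bgp-af-vpnv4] peer 10.2.2.2 enable
    [*HUAWEI-bgp-af-vpnv4] peer 10.2.2.2 route-policy color100 import
    [*HUAWEI-bgp-af-vpnv4] commit
    ```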

    • Apply the route-policy to a BGP VPNv6 peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv6-family vpnv6

        The BGP VPNv6 address family view is displayed.

      5. Run peer { ipv4-address | group-name } enable

        The BGP VPNv6 peer relationship is enabled.

      6. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      7. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP EVPN peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run l2vpn-family evpn

        The BGP EVPN address family view is displayed.

      4. Run peer { ipv4-address | group-name } enable

        The BGP EVPN peer relationship is enabled.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP EVPN import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a VPN instance IPv4 address family.
      1. Run system-view

        The system view is displayed.

      2. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      3. Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      4. Run import route-policy policy-name

        An import route-policy is configured for the VPN instance IPv4 address family.

      5. Run export route-policy policy-name

        An export route-policy is configured for the VPN instance IPv4 address family.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a VPN instance IPv6 address family.
      1. Run system-view

        The system view is displayed.

      2. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      3. Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

      4. Run import route-policy policy-name

        An import route-policy is configured for the VPN instance IPv6 address family.

      5. Run export route-policy policy-name

        An export route-policy is configured for the VPN instance IPv6 address family.

      6. Run commit

        The configuration is committed.

Configuring Traffic Steering

This section describes how to configure traffic steering to recurse a route to an SR-MPLS TE Policy so that traffic can be forwarded through the path specified by the SR-MPLS TE Policy.

Usage Scenario

After an SR-MPLS TE Policy is configured, traffic needs to be steered into the policy for forwarding. This process is called traffic steering. Currently, SR-MPLS TE Policies can be used for various routes and services, such as BGP and static routes as well as BGP4+ 6PE, BGP L3VPN, and EVPN services. This section describes how to use tunnel policies to recurse services to SR-MPLS TE Policies.

EVPN VPWS and EVPN VPLS packets do not support DSCP-based traffic steering because they do not carry DSCP values.

Pre-configuration Tasks

Before configuring traffic steering, complete the following tasks:

  • Configure BGP routes, static routes, BGP4+ 6PE services, BGP L3VPN services, BGP L3VPNv6 services, or EVPN services correctly.

  • Configure an IP prefix list and a tunnel policy if you want to restrict the route to be recursed to the specified SR-MPLS TE Policy.

Procedure

  1. Configure a tunnel policy.

    Use either of the following procedures based on the traffic steering mode you select:

    • Color-based traffic steering

      1. Run system-view

        The system view is displayed.

      2. Run tunnel-policy policy-name

        A tunnel policy is created and the tunnel policy view is displayed.

      3. (Optional) Run description description-information

        A description is configured for the tunnel policy.

      4. Run tunnel select-seq sr-te-policy load-balance-number load-balance-number unmix

        The tunnel selection sequence and number of tunnels for load balancing are configured.

      5. Run commit

        The configuration is committed.

      6. Run quit

        Exit the tunnel policy view.

    • DSCP-based traffic steering

      1. Run system-view

        The system view is displayed.

      2. Run segment-routing

        The Segment Routing view is displayed.

      3. Run sr-te-policy group group-value

        An SR-MPLS TE Policy group is created and the SR-MPLS TE Policy group view is displayed.

      4. Run endpoint ipv4-address

        An endpoint is configured for the SR-MPLS TE Policy group.

      5. Run color color-value match dscp { ipv4 | ipv6 } { { dscp-value1 [ to dscp-value2 ] } &<1-64> | default }

        The mapping between the color values of SR-MPLS TE Policies in an SR-MPLS TE Policy group and the DSCP values of packets is configured.

        Each SR-MPLS TE Policy in an SR-MPLS TE Policy group has its own color attribute. You can run the color match dscp command to configure the mapping between color and DSCP values, thereby associating DSCP values, color values, and SR-MPLS TE Policies in an SR-MPLS TE Policy group. IP packets can then be steered into the specified SR-MPLS TE Policy based on their DSCP values.

        When using the color match dscp command, pay attention to the following points:
        1. You can configure a separate color-DSCP mapping for both the IPv4 address family and IPv6 address family. In the same address family (IPv4/IPv6), each DSCP value can be associated with only one SR-MPLS TE Policy. Furthermore, the association can be performed for an SR-MPLS TE Policy only if this policy is up.

        2. The color color-value match dscp { ipv4 | ipv6 } default command can be used to specify a default SR-MPLS TE Policy in an address family (IPv4/IPv6). If a DSCP value is not associated with any SR-MPLS TE Policy in an SR-MPLS TE Policy group, the packets carrying this DSCP value are forwarded over the default SR-MPLS TE Policy. Each address family (IPv4/IPv6) in an SR-MPLS TE Policy group can have only one default SR-MPLS TE Policy.

        3. In scenarios where no default SR-MPLS TE Policy is specified for an address family (IPv4/IPv6) in an SR-MPLS TE Policy group:
          1. If the mapping between color and DSCP values is configured for the group but only some of the DSCP values are associated with SR-MPLS TE Policies, the packets carrying DSCP values that are not associated with SR-MPLS TE Policies are forwarded over the SR-MPLS TE Policy associated with the smallest DSCP value in the address family.
          2. If no DSCP value is associated with an SR-MPLS TE Policy in the group (for example, the mapping between color and DSCP values is not configured, or DSCP values are not successfully associated with SR-MPLS TE Policies after the mapping is configured), the default SR-MPLS TE Policy in the other address family (IPv4/IPv6) is used to forward packets. If no default SR-MPLS TE Policy is specified for the other address family, packets are forwarded over the SR-MPLS TE Policy associated with the smallest DSCP value in the local address family.
      6. Run quit

        Return to the SR view.

      7. Run quit

        Return to the system view.

      8. Run tunnel-policy policy-name

        A tunnel policy is created and the tunnel policy view is displayed.

      9. (Optional) Run description description-information

        A description is configured for the tunnel policy.

      10. Run tunnel binding destination dest-ip-address sr-te-policy group sr-te-policy-group-id [ ignore-destination-check ] [ down-switch ]

        A tunnel binding policy is configured to bind the destination IP address and SR-MPLS TE Policy group.

        The ignore-destination-check keyword disables the check on whether the destination IP address specified using destination dest-ip-address is consistent with the endpoint of the corresponding SR-MPLS TE Policy.

      11. Run commit

        The configuration is committed.

      12. Run quit

        Exit the tunnel policy view.
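    The DSCP-based steering steps above can be sketched as follows, assuming SR-MPLS TE Policies with colors 100 and 200 and endpoint 10.1.1.2 already exist and are up. The group ID, endpoint, color values, DSCP ranges, and policy name are illustrative:

    ```
    <HUAWEI> system-view
    [~HUAWEI] segment-routing
    [~HUAWEI-segment-routing] sr-te-policy group 1
    [*HUAWEI-segment-routing-te-policy-group-1] endpoint 10.1.1.2
    [*HUAWEI-segment-routing-te-policy-group-1] color 100 match dscp ipv4 10 to 20
    [*HUAWEI-segment-routing-te-policy-group-1] color 200 match dscp ipv4 default
    [*HUAWEI-segment-routing-te-policy-group-1] quit
    [*HUAWEI-segment-routing] quit
    [*HUAWEI] tunnel-policy p1
    [*HUAWEI-tunnel-policy-p1] tunnel binding destination 10.1.1.2 sr-te-policy group 1
    [*HUAWEI-tunnel-policy-p1] commit
    ```

    With this configuration, IPv4 packets with DSCP values 10 to 20 are steered into the policy with color 100, and all other IPv4 packets into the default policy with color 200.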

  2. Configure a route or service to be recursed to an SR-MPLS TE Policy.
    • Configure a non-labeled public BGP route to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a non-labeled public BGP route, see Configuring Basic BGP Functions.

      1. Run route recursive-lookup tunnel [ ip-prefix ip-prefix-name ] [ tunnel-policy policy-name ]

        The function to recurse a non-labeled public network route to an SR-MPLS TE Policy is enabled.

      2. Run commit

        The configuration is committed.

    • Configure a static route to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a static route, see Configuring IPv4 Static Routes.

      The color attribute cannot be added to static routes. Therefore, static routes support only DSCP-based traffic steering to SR-MPLS TE Policies, not color-based traffic steering.

      1. Run ip route-static recursive-lookup tunnel [ ip-prefix ip-prefix-name ] [ tunnel-policy policy-name ]

        The function to recurse a static route to an SR-MPLS TE Policy for MPLS forwarding is enabled.

      2. Run commit

        The configuration is committed.

    • Configure a BGP L3VPN service to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a BGP L3VPN service, see Configuring a Basic BGP/MPLS IP VPN.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      3. Run tnl-policy policy-name

        A tunnel policy is applied to the VPN instance IPv4 address family.

      4. (Optional) Run default-color color-value

        A default color value is specified for recursing the L3VPN service to an SR-MPLS TE Policy. If a remote VPN route that does not carry the Color Extended Community is leaked to a local VPN instance, the default color value is used for recursion to an SR-MPLS TE Policy.

      5. Run commit

        The configuration is committed.
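    The L3VPN case can be sketched as follows, combining a color-based tunnel policy with its application to a VPN instance. The policy name, VPN instance name, and default color value are illustrative:

    ```
    <HUAWEI> system-view
    [~HUAWEI] tunnel-policy p2
    [*HUAWEI-tunnel-policy-p2] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*HUAWEI-tunnel-policy-p2] quit
    [*HUAWEI] ip vpn-instance vpna
    [*HUAWEI-vpn-instance-vpna] ipv4-family
    [*HUAWEI-vpn-instance-vpna-af-ipv4] tnl-policy p2
    [*HUAWEI-vpn-instance-vpna-af-ipv4] default-color 100
    [*HUAWEI-vpn-instance-vpna-af-ipv4] commit
    ```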

    • Configure a BGP L3VPNv6 service to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a BGP L3VPNv6 service, see Configuring a Basic BGP/MPLS IPv6 VPN.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

      3. Run tnl-policy policy-name

        A tunnel policy is applied to the VPN instance IPv6 address family.

      4. (Optional) Run default-color color-value

        A default color value is specified for recursing the L3VPNv6 service to an SR-MPLS TE Policy. If a remote VPN route that does not carry the Color Extended Community is leaked to a local VPN instance, the default color value is used for recursion to an SR-MPLS TE Policy.

      5. Run commit

        The configuration is committed.

    • Configure a BGP4+ 6PE service to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a BGP4+ 6PE service, see Configuring BGP4+ 6PE.

      1. Run bgp { as-number-plain | as-number-dot }

        The BGP view is displayed.

      2. Run ipv6-family unicast

        The BGP IPv6 unicast address family view is displayed.

      3. Run peer ipv4-address enable

        A 6PE peer is enabled.

      4. Run peer ipv4-address tnl-policy tnl-policy-name

        A tunnel policy is applied to the 6PE peer.

      5. Run commit

        The configuration is committed.
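As a reference, the 6PE steps above map to a sketch like the following (the AS number 100, peer address 10.1.1.2, and tunnel policy name p1 are illustrative assumptions):

```
bgp 100
 ipv6-family unicast
  peer 10.1.1.2 enable
  peer 10.1.1.2 tnl-policy p1
commit
```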

    • Configure an EVPN service to be recursed to an SR-MPLS TE Policy.

      For details about how to configure an EVPN service, see Configuring EVPN VPLS over MPLS (BD EVPN Instance).

      To apply a tunnel policy to an EVPN L3VPN instance, perform the following steps:
      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family or ipv6-family

        The VPN instance IPv4/IPv6 address family view is displayed.

      3. Run tnl-policy policy-name evpn

        A tunnel policy is applied to the EVPN L3VPN instance.

      4. (Optional) Run default-color color-value evpn

        The default color value is specified for the EVPN L3VPN service to recurse to an SR-MPLS TE Policy.

If a remote EVPN route that does not carry the color extended community is leaked to a local VPN instance, the default color value is used for recursion to an SR-MPLS TE Policy.

      5. Run commit

        The configuration is committed.

      To apply a tunnel policy to a BD EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name bd-mode

        The BD EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the BD EVPN instance.

      3. (Optional) Run default-color color-value

The default color value is specified for the EVPN service to recurse to an SR-MPLS TE Policy. If a remote EVPN route that does not carry the color extended community is leaked to a local EVPN instance, the default color value is used for recursion to an SR-MPLS TE Policy.

      4. Run commit

        The configuration is committed.
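The BD EVPN steps above can be sketched as follows (the instance name evpna and color value 100 are illustrative assumptions):

```
evpn vpn-instance evpna bd-mode
 tnl-policy p1
 default-color 100
commit
```

The same pattern applies to an EVPN instance in VPWS mode, with evpn vpn-instance evpna vpws as the first command.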

      To apply a tunnel policy to an EVPN instance that works in EVPN VPWS mode, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name vpws

        The view of the EVPN instance that works in EVPN VPWS mode is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the EVPN instance that works in EVPN VPWS mode.

      3. (Optional) Run default-color color-value

The default color value is specified for the EVPN service to recurse to an SR-MPLS TE Policy. If a remote EVPN route that does not carry the color extended community is leaked to a local EVPN instance, the default color value is used for recursion to an SR-MPLS TE Policy.

      4. Run commit

        The configuration is committed.

      To apply a tunnel policy to a basic EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name

        The EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the basic EVPN instance.

      3. Run commit

        The configuration is committed.

Verifying the SR-MPLS TE Policy Configuration

After configuring SR-MPLS TE Policies, verify the configuration.

Prerequisites

SR-MPLS TE Policies have been configured.

Procedure

  1. Run the display sr-te policy [ endpoint ipv4-address color color-value | policy-name name-value ] command to check SR-MPLS TE Policy details.
  2. Run the display sr-te policy statistics command to check SR-MPLS TE Policy statistics.
  3. Run the display sr-te policy status { endpoint ipv4-address color color-value | policy-name name-value } command to check the status of a specified SR-MPLS TE Policy and determine why it cannot go up.
  4. Run the display sr-te policy last-down-reason [ endpoint ipv4-address color color-value | policy-name name-value ] command to check records about events where SR-MPLS TE Policies or segment lists in SR-MPLS TE Policies go down.
  5. Run the display ip vpn-instance vpn-instance-name tunnel-info nexthop nexthopIpv4Addr command to check information about the recursion of routes that match the specified next hop in each address family of the current VPN instance.
  6. Run the display evpn vpn-instance [ name vpn-instance-name ] tunnel-info command to check information about the tunnels associated with EVPN instances.
  7. Run the display evpn vpn-instance name vpn-instance-name tunnel-info nexthop nexthopIpv4Addr command to check information about the tunnel that is associated with a specified EVPN instance and matches a specified next-hop address.

Configuring an SR-MPLS TE Policy (Dynamic Delivery by a Controller)

SR-MPLS TE Policy is a tunneling technology developed based on SR.

Usage Scenario

An SR-MPLS TE Policy can either be manually configured on a forwarder through CLI or NETCONF, or be delivered to a forwarder after being dynamically generated by a protocol, such as BGP, on a controller. The dynamic mode facilitates network deployment.

The candidate paths in an SR-MPLS TE Policy are identified using <Protocol-Origin, originator, discriminator>. If SR-MPLS TE Policies generated in both modes exist, the forwarder selects an SR-MPLS TE Policy based on the following rules in descending order:

  • Protocol-Origin: The default value of Protocol-Origin is 20 for a BGP-delivered SR-MPLS TE Policy and is 30 for a manually configured SR-MPLS TE Policy. A larger value indicates a higher preference.

  • <ASN, node-address> tuple: originator of an SR-MPLS TE Policy's primary path. ASN indicates an AS number, and node-address indicates the address of the node where the SR-MPLS TE Policy is generated.
    • The ASN and node-address values of a manually configured SR-MPLS TE Policy are fixed at 0 and 0.0.0.0, respectively.
    • For an SR-MPLS TE Policy delivered by a controller through BGP, the ASN and node-address values are as follows:
      1. If a BGP SR-MPLS TE Policy peer relationship is established between the headend and controller, the ASN and node-address values are the AS number of the controller and the BGP router ID, respectively.
      2. If the headend receives an SR-MPLS TE Policy through a multi-hop EBGP peer relationship, the ASN and node-address values are the AS number and BGP router ID of the original EBGP peer, respectively.
      3. If the controller sends an SR-MPLS TE Policy to a reflector and the headend receives the SR-MPLS TE Policy from the reflector through the IBGP peer relationship, the ASN and node-address values are the AS number and BGP originator ID of the controller, respectively.
    For both ASN and node-address, a smaller value indicates a higher preference.
  • Discriminator: A larger value indicates a higher preference. For a manually configured SR-MPLS TE Policy, the value of Discriminator is the same as the preference value.

The process for a controller to dynamically generate and deliver an SR-MPLS TE Policy to a forwarder is as follows:

  1. The controller collects information, such as network topology and label information, through BGP-LS.

  2. The controller and headend forwarder establish an IPv4 SR-MPLS TE Policy address family-specific BGP peer relationship.

  3. The controller computes an SR-MPLS TE Policy and delivers it to the headend forwarder through the BGP peer relationship. The headend forwarder then generates SR-MPLS TE Policy entries.

Pre-configuration Tasks

Before configuring an SR-MPLS TE Policy, complete the following tasks:

  • Configure an IGP to ensure that all nodes can communicate with each other at the network layer.

  • Configure a routing protocol between the controller and forwarder to ensure that they can communicate with each other.

Configuring TE Attributes

Configure TE attributes for links so that SR-MPLS TE paths can be adjusted based on the TE attributes during path computation.

Context

TE link attributes are as follows:

  • Link bandwidth

    This attribute must be configured if you want to limit the bandwidth of an SR-MPLS TE tunnel link.

  • Dynamic link bandwidth

    Dynamic bandwidth can be configured if you want TE to be aware of physical bandwidth changes on interfaces.

  • TE metric of a link

    Either the IGP metric or the TE metric of a link can be used during SR-MPLS TE path computation. If the TE metric is used, path computation is decoupled from the IGP metric, enabling more flexible control over tunnel paths.

  • Administrative group and affinity attribute

    The affinity attribute of an SR-MPLS TE tunnel, together with the link administrative group, determines which links the tunnel can use.

  • SRLG

    A shared risk link group (SRLG) is a group of links that share a public physical resource, such as an optical fiber. Links in an SRLG are at the same risk of faults. If one of the links fails, other links in the SRLG also fail.

    An SRLG enhances SR-MPLS TE reliability on a network with CR-LSP hot standby or TE FRR enabled. Links that share the same physical resource carry the same risk. For example, the links on an interface and its sub-interfaces belong to an SRLG; if the interface goes down, its sub-interfaces also go down. Similarly, if the link of the primary path of an SR-MPLS TE tunnel and the links of its backup paths are in the same SRLG, the backup paths will most likely go down when the primary path goes down.

Procedure

  • (Optional) Configure link bandwidth.

    Link bandwidth needs to be configured only on outbound interfaces of SR-MPLS TE tunnel links that have bandwidth requirements.

    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te bandwidth max-reservable-bandwidth max-bw-value

      The maximum reservable link bandwidth is configured.

    4. Run mpls te bandwidth bc0 bc0-bw-value

      The BC0 bandwidth is configured.

      • The maximum reservable link bandwidth cannot be higher than the physical link bandwidth. You are advised to set the maximum reservable link bandwidth to be less than or equal to 80% of the physical link bandwidth.

      • The BC0 bandwidth cannot be higher than the maximum reservable link bandwidth.

    5. Run commit

      The configuration is committed.
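The bandwidth steps above can be sketched as follows (the interface name and the bandwidth values, in kbit/s, are illustrative assumptions; they must respect the constraints stated above, i.e., the maximum reservable bandwidth should not exceed 80% of the physical bandwidth, and BC0 cannot exceed the maximum reservable bandwidth):

```
system-view
interface GigabitEthernet0/1/0
 mpls te bandwidth max-reservable-bandwidth 80000
 mpls te bandwidth bc0 50000
commit
```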

  • (Optional) Configure dynamic link bandwidth.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te bandwidth max-reservable-bandwidth dynamic max-dynamic-bw-value

      The maximum reservable dynamic link bandwidth is configured.

      If this command is run in the same interface view as the mpls te bandwidth max-reservable-bandwidth command, the later configuration overrides the previous one.

    4. (Optional) Run mpls te bandwidth max-reservable-bandwidth dynamic baseline remain-bandwidth

      The device is configured to use the remaining bandwidth of the interface when calculating the maximum dynamic reservable bandwidth for TE.

      In scenarios such as channelized sub-interface and bandwidth lease, the remaining bandwidth of an interface changes, but the physical bandwidth does not. In this case, the actual forwarding capability of the interface decreases; however, the dynamic maximum reservable bandwidth of the TE tunnel is still calculated based on the physical bandwidth. As a result, the calculated TE bandwidth is greater than the actual bandwidth, and the actual forwarding capability of the interface does not meet the bandwidth requirement of the tunnel.

    5. Run mpls te bandwidth dynamic bc0 bc0-bw-percentage

      The BC0 dynamic bandwidth is configured for the link.

      If this command is run in the same interface view as the mpls te bandwidth bc0 command, the later configuration overrides the previous one.

    6. Run commit

      The configuration is committed.
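A minimal sketch of the dynamic bandwidth steps above follows (the interface name and the values are illustrative assumptions; check the command reference for the exact value ranges and units of max-dynamic-bw-value and bc0-bw-percentage):

```
system-view
interface GigabitEthernet0/1/0
 mpls te bandwidth max-reservable-bandwidth dynamic 80
 mpls te bandwidth dynamic bc0 60
commit
```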

  • (Optional) Configure a TE metric for a link.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te metric metric-value

      A TE metric is configured for the link.

    4. Run commit

      The configuration is committed.
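For example, a TE metric can be configured as follows (the interface name and metric value 20 are illustrative assumptions):

```
system-view
interface GigabitEthernet0/1/0
 mpls te metric 20
commit
```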

  • (Optional) Configure the administrative group and affinity attribute in hexadecimal format.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te link administrative group group-value

      A TE link administrative group is configured.

    4. Run commit

      The configuration is committed.

  • (Optional) Configure the administrative group and affinity attribute based on the affinity and administrative group names.
    1. Run system-view

      The system view is displayed.

    2. Run path-constraint affinity-mapping

      An affinity name template is configured, and the template view is displayed.

      This template must be configured on each node involved in SR-MPLS TE path computation, and the global mappings between the names and values of affinity bits must be the same on all the involved nodes.

    3. Run attribute bit-name bit-sequence bit-number

      Mappings between affinity bit values and names are configured.

      This step configures only one bit of an affinity attribute, which has a total of 32 bits. Repeat this step as needed to configure some or all of the bits.

    4. Run quit

      Return to the system view.

    5. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    6. Run mpls te link administrative group name { name-string } &<1-32>

      A link administrative group is configured.

      The name-string value must be in the range specified for the affinity attribute in the template.

    7. Run commit

      The configuration is committed.
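The name-based affinity steps above can be sketched as follows (the attribute names green and red, the bit positions, and the interface name are illustrative assumptions; the same template must be configured consistently on every node involved in path computation):

```
system-view
path-constraint affinity-mapping
 attribute green bit-sequence 1
 attribute red bit-sequence 5
quit
interface GigabitEthernet0/1/0
 mpls te link administrative group name green red
commit
```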

  • (Optional) Configure an SRLG.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te srlg srlg-number

      The interface is added to an SRLG.

      In a hot-standby or TE FRR scenario, you need to configure SRLG attributes for the SR-MPLS TE outbound interface of the ingress and other member links in the SRLG to which the interface belongs. A link joins an SRLG only after SRLG attributes are configured for any outbound interface of the link.

      To delete the SRLG attribute configurations of all interfaces on the local node, run the undo mpls te srlg all-config command.

    4. Run commit

      The configuration is committed.
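For example, an interface can be added to an SRLG as follows (the interface name and SRLG number 10 are illustrative assumptions; the same SRLG number must be configured on all member links that share the physical resource):

```
system-view
interface GigabitEthernet0/1/0
 mpls te srlg 10
commit
```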

Configuring IGP SR

This section describes how to configure IGP SR.

Context

An SR-MPLS TE Policy may contain multiple candidate paths that are defined using segment lists. Because adjacency and node SIDs are required for SR-MPLS TE Policy configuration, you need to configure the IGP SR function. Node SIDs need to be configured manually, whereas adjacency SIDs can either be generated dynamically by an IGP or be configured manually.
  • In scenarios where SR-MPLS TE Policies are configured manually, if dynamic adjacency SIDs are used, the adjacency SIDs may change after an IGP restart. To keep the SR-MPLS TE Policies up, you must adjust them manually. The adjustment workload is heavy if a large number of SR-MPLS TE Policies are configured manually. Therefore, you are advised to configure adjacency SIDs manually and not to use adjacency SIDs generated dynamically.
  • In scenarios where SR-MPLS TE Policies are delivered by a controller dynamically, you are also advised to configure adjacency SIDs manually. Although the controller can use BGP-LS to detect adjacency SID changes, the adjacency SIDs dynamically generated by an IGP change randomly, causing inconvenience in routine maintenance and troubleshooting.

Procedure

  1. Enable MPLS.
    1. Run system-view

      The system view is displayed.

    2. Run mpls lsr-id lsr-id

      An LSR ID is configured for the device.

      Note the following during LSR ID configuration:
      • Configuring LSR IDs is the prerequisite for all MPLS configurations.

      • LSRs do not have default IDs. LSR IDs must be manually configured.

      • Using a loopback interface address as the LSR ID is recommended for an LSR.

    3. (Optional) Run mpls

      MPLS is enabled, and the MPLS view is displayed.

      SR-MPLS uses the MPLS forwarding plane, and therefore requires MPLS to be enabled. However, the MPLS capability is automatically enabled on an interface when any of the following conditions is met, without the need to perform this step:

      • MPLS is enabled on the interface.
      • Segment Routing is enabled for an IGP, and the IGP is enabled on the interface.
      • A static adjacency label is configured for the interface in the Segment Routing view.

    4. Run commit

      The configuration is committed.

    5. Run quit

      Return to the system view.

  2. Enable SR globally.
    1. Run segment-routing

      Segment Routing is enabled globally, and the Segment Routing view is displayed.

    2. (Optional) Run local-block begin-value end-value [ ignore-conflict ]

      An SR-MPLS-specific SRLB range is configured.

      If the system displays a message indicating that the SRLB is used, run the display segment-routing local-block command to view the SRLB ranges that can be set. Alternatively, delete unwanted configurations related to the used label to release the label space.

      When the ignore-conflict parameter is specified in the local-block command, if a label conflict occurs, the configuration is forcibly delivered but does not occupy label resources. This function is mainly used for pre-deployment, and the configuration takes effect after the device is restarted.

    3. Run commit

      The configuration is committed.

    4. Run quit

      Return to the system view.
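Steps 1 and 2 above can be combined into a sketch like the following (the LSR ID 1.1.1.1 is an illustrative assumption and should be a loopback interface address on the device):

```
system-view
mpls lsr-id 1.1.1.1
mpls
quit
segment-routing
quit
commit
```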

  3. Configure IGP SR.
    • If the IGP is IS-IS, configure IGP SR by referring to Configuring Basic SR-MPLS TE Functions.

      In scenarios where SR-MPLS TE Policies need to be manually configured, you are advised to use manually configured adjacency SIDs. In scenarios where a controller dynamically delivers SR-MPLS TE Policies, you are advised to preferentially use manually configured adjacency SIDs. However, you can also use IS-IS to dynamically generate adjacency SIDs.

    • If the IGP is OSPF, configure IGP SR by referring to Configuring Basic SR-MPLS TE Functions.

      Because OSPF cannot advertise manually configured adjacency SIDs, only dynamically generated adjacency SIDs can be used.

Configuring IGP SR-MPLS TE and Topology Reporting

Before an SR-MPLS TE tunnel is established, you need to enable IGP SR-MPLS TE and topology reporting through BGP-LS.

Context

An IGP collects network topology information including the link cost, latency, and packet loss rate and advertises the information to BGP-LS, which then reports the information to a controller. The controller can compute an SR-MPLS TE tunnel based on link cost, latency, packet loss rate, and other factors to meet various service requirements.

This section mainly involves the following operations:

  1. Configure IGP SR-MPLS TE.
  2. Configure IGP topology information to be sent to BGP-LS.
  3. Configure a BGP-LS peer relationship between the forwarder and controller so that the forwarder can report topology information to the controller through BGP-LS.

(Optional) Configuring ORF for the BGP IPv4 SR-Policy Address Family

When a device advertises BGP IPv4 SR-Policy address family routes to peers, you can configure the outbound route filtering (ORF) function to enable the device to advertise only routes carrying the BGP router IDs of peers. This function helps reduce the network load.

Usage Scenario

If ORF is configured for two devices between which a BGP-VPN-Target peer relationship has been established, the devices can advertise ORF routes carrying local BGP router IDs to each other in order to notify each other of their required routes. For example, in a scenario with one controller, one route reflector (RR), and multiple forwarders deployed, you can configure ORF on the RR and forwarders. This way, each of the forwarders can notify the RR of the required BGP SR-Policy routes. When the controller advertises the BGP SR-Policy routes required by all the forwarders to the RR, the RR can apply the ORF function to advertise only the target routes to the peer forwarder. This means that the peer forwarder receives only the required routes, reducing the route receiving pressure and network load.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run bgp { as-number-plain | as-number-dot }

    The BGP view is displayed.

  3. Run peer ipv4-address as-number { as-number-plain | as-number-dot }

    A BGP peer is created.

  4. Run ipv4-family vpn-target

    The BGP-VPN-Target address family view is displayed.

  5. Run peer ipv4-address enable

    The BGP peer relationship in the local address family is enabled.

  6. (Optional) Run peer ipv4-address reflect-client

    The device is enabled to function as an RR, and the specified peer is configured as an RR client.

    On a network where an RR is deployed, you also need to perform this step in the BGP-VPN-Target address family view of the RR to enable its reflector function.

  7. Run quit

    Return to the BGP view.

  8. Run ipv4-family sr-policy

    The BGP IPv4 SR-Policy address family view is displayed.

  9. Run peer ipv4-address enable

    The BGP peer relationship in the local address family is enabled.

  10. Run route-target-orf enable

    ORF is enabled globally.

    This step globally enables ORF for all BGP IPv4 SR-Policy peers. If some of the peers do not need ORF, run the peer ipv4-address route-target-orf disable command to disable this function.

  11. Run commit

    The configuration is committed.
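On a forwarder, the ORF steps above map to a sketch like the following (the AS number 100 and the RR address 10.2.2.2 are illustrative assumptions):

```
bgp 100
 peer 10.2.2.2 as-number 100
 ipv4-family vpn-target
  peer 10.2.2.2 enable
 quit
 ipv4-family sr-policy
  peer 10.2.2.2 enable
  route-target-orf enable
commit
```

On the RR itself, peer 10.2.2.2 reflect-client would additionally be run in the BGP-VPN-Target address family view, as noted in step 6.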

Configuring a BGP IPv4 SR-MPLS TE Policy Peer Relationship Between a Controller and a Forwarder

This section describes how to configure a BGP IPv4 SR-MPLS TE Policy peer relationship between a controller and a forwarder, so that the controller can deliver SR-MPLS TE Policies to the forwarder. This improves SR-MPLS TE Policy deployment efficiency.

Context

The process for a controller to dynamically generate and deliver an SR-MPLS TE Policy to a forwarder is as follows:

  1. The controller collects information, such as network topology and label information, through BGP-LS.

  2. The controller and headend forwarder establish a BGP peer relationship of the IPv4 SR-MPLS TE Policy address family.

  3. The controller computes an SR-MPLS TE Policy and delivers it to the headend forwarder through the BGP peer relationship. The headend forwarder then generates SR-MPLS TE Policy entries.

To implement the preceding operations, you need to establish a BGP-LS peer relationship and a BGP IPv4 SR-MPLS TE Policy peer relationship between the controller and the specified forwarder.

Procedure

  • Configure a BGP IPv4 SR-MPLS TE Policy peer relationship.

    This example provides the procedure for configuring a BGP IPv4 SR-MPLS TE Policy peer relationship on the forwarder. The procedure on the controller is similar to that on the forwarder.

    1. Run system-view

      The system view is displayed.

    2. Run bgp { as-number-plain | as-number-dot }

      BGP is enabled, and the BGP view is displayed.

    3. Run peer ipv4-address as-number { as-number-plain | as-number-dot }

      A BGP peer is created.

    4. Run ipv4-family sr-policy

      The BGP IPv4 SR-MPLS TE Policy address family view is displayed.

    5. Run peer ipv4-address enable

      The device is enabled to exchange routes with the specified BGP IPv4 SR-MPLS TE Policy peer.

    6. Run commit

      The configuration is committed.
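On the forwarder, the peer-relationship steps above can be sketched as follows (the AS number 100 and the controller address 10.3.3.3 are illustrative assumptions):

```
bgp 100
 peer 10.3.3.3 as-number 100
 ipv4-family sr-policy
  peer 10.3.3.3 enable
commit
```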

  • (Optional) Enable SR-MPLS TE Policy information reporting to BGP-LS.
    1. Run system-view

      The system view is displayed.

    2. Run segment-routing

      SR is enabled globally, and the SR view is displayed.

    3. Run sr-te-policy bgp-ls enable

      The device is enabled to report SR-MPLS TE Policy information to BGP-LS.

      After the sr-te-policy bgp-ls enable command is run, the system reports path information to BGP-LS at the granularity of individual candidate paths of SR-MPLS TE Policies.

(Optional) Configuring Cost Values for SR-MPLS TE Policies

Configure cost values for SR-MPLS TE Policies so that the ingress can select the optimal SR-MPLS TE Policy based on the values.

Context

By default, the cost (or metric) value of an SR-MPLS TE Policy is independent of the cost values of the IGP routes on which the policy relies and is fixed at 0. As a result, the ingress cannot perform cost-based SR-MPLS TE Policy selection. To address this, you can either configure SR-MPLS TE Policies to inherit IGP cost values or directly configure absolute cost values for them.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run segment-routing

    SR is enabled globally, and the SR view is displayed.

  3. Run sr-te policy policy-name [ endpoint ipv4-address color color-value ]

    An SR-MPLS TE Policy with the specified endpoint and color is created, and the SR-MPLS TE Policy view is displayed.

  4. Run metric inherit-igp

    The SR-MPLS TE Policy is configured to inherit the IGP cost.

  5. Run igp metric absolute absoluteValue

    An absolute cost is configured for the SR-MPLS TE Policy.

    When an SR-MPLS TE Policy is configured to inherit the IGP cost, the cost of the SR-MPLS TE Policy is reported to the tunnel management module according to the following rules:

    • If the igp metric absolute command is run to configure an absolute IGP cost, the configured cost is reported as the cost of the SR-MPLS TE Policy.
    • If the igp metric absolute command is not run, the cost of the route that has a 32-bit mask and is destined for the endpoint of the SR-MPLS TE Policy is subscribed to and reported as the cost of the SR-MPLS TE Policy. In this case, the route cost may be different from the actual path cost of the SR-MPLS TE Policy.
    • If the metric inherit-igp command is not run, the cost of the SR-MPLS TE Policy reported to the tunnel management module is 0.

  6. Run commit

    The configuration is committed.
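The cost-configuration steps above can be sketched as follows (the policy name policy1, endpoint 3.3.3.3, color 200, and absolute cost 50 are illustrative assumptions; with both commands configured, the absolute value 50 is reported, per the rules above):

```
system-view
segment-routing
 sr-te policy policy1 endpoint 3.3.3.3 color 200
  metric inherit-igp
  igp metric absolute 50
commit
```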

Configuring a BGP Extended Community

This section describes how to add a BGP extended community, that is, the Color Extended Community, to a route through a route-policy, enabling the route to be recursed to an SR-MPLS TE Policy based on the color value and next-hop address in the route.

Context

The route coloring process is as follows:

  1. Configure a route-policy and set a specific color value for the desired route.

  2. Apply the route-policy to a BGP peer or a VPN instance as an import or export policy.

Procedure

  1. Configure a route-policy.
    1. Run system-view

      The system view is displayed.

    2. Run route-policy route-policy-name { deny | permit } node node

      A route-policy with a specified node is created, and the route-policy view is displayed.

    3. (Optional) Configure an if-match clause as a route-policy filter criterion. The Color Extended Community can be added or modified only for routes that match the filter criterion.

      For details about the configuration, see (Optional) Configuring an if-match Clause.

    4. Run apply extcommunity color color

      The Color Extended Community is configured.

    5. Run commit

      The configuration is committed.
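The route-policy steps above can be sketched as follows (the policy name color100, node number 10, and color value 0:100 are illustrative assumptions; the color value format follows the apply extcommunity color command syntax on this platform):

```
system-view
route-policy color100 permit node 10
 apply extcommunity color 0:100
commit
```

With no if-match clause on the node, the Color Extended Community is applied to all routes that pass through the policy.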

  2. Apply the route-policy.
    • Apply the route-policy to a BGP IPv4 unicast peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv4-family unicast

        The IPv4 unicast address family view is displayed.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP4+ 6PE peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP4+ 6PE peer is created.

      4. Run ipv6-family unicast

        The IPv6 unicast address family view is displayed.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP4+ 6PE import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP VPNv4 peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv4-family vpnv4

        The BGP VPNv4 address family view is displayed.

      5. Run peer { ipv4-address | group-name } enable

        The BGP VPNv4 peer relationship is enabled.

      6. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      7. Run commit

        The configuration is committed.
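For instance, applying such a route-policy to a BGP VPNv4 peer as an export policy might look like the following sketch (the AS number 100, peer address 10.1.1.1, and policy name color100 are illustrative assumptions):

```
bgp 100
 peer 10.1.1.1 as-number 100
 ipv4-family vpnv4
  peer 10.1.1.1 enable
  peer 10.1.1.1 route-policy color100 export
commit
```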

    • Apply the route-policy to a BGP VPNv6 peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv6-family vpnv6

        The BGP VPNv6 address family view is displayed.

      5. Run peer { ipv4-address | group-name } enable

        The BGP VPNv6 peer relationship is enabled.

      6. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      7. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP EVPN peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run l2vpn-family evpn

        The BGP EVPN address family view is displayed.

      4. Run peer { ipv4-address | group-name } enable

        The BGP EVPN peer relationship is enabled.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP EVPN import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a VPN instance IPv4 address family.
      1. Run system-view

        The system view is displayed.

      2. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      3. Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      4. Run import route-policy policy-name

        An import route-policy is configured for the VPN instance IPv4 address family.

      5. Run export route-policy policy-name

        An export route-policy is configured for the VPN instance IPv4 address family.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a VPN instance IPv6 address family.
      1. Run system-view

        The system view is displayed.

      2. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      3. Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

      4. Run import route-policy policy-name

        An import route-policy is configured for the VPN instance IPv6 address family.

      5. Run export route-policy policy-name

        An export route-policy is configured for the VPN instance IPv6 address family.

      6. Run commit

        The configuration is committed.

Configuring Traffic Steering

This section describes how to configure traffic steering to recurse a route to an SR-MPLS TE Policy so that traffic can be forwarded through the path specified by the SR-MPLS TE Policy.

Usage Scenario

After an SR-MPLS TE Policy is configured, traffic needs to be steered into the policy for forwarding. This process is called traffic steering. Currently, SR-MPLS TE Policies can be used to carry various routes and services, such as BGP routes, static routes, and BGP4+ 6PE, BGP L3VPN, and EVPN services. This section describes how to use tunnel policies to recurse services to SR-MPLS TE Policies.

EVPN VPWS and EVPN VPLS packets do not support DSCP-based traffic steering because they do not carry DSCP values.

Pre-configuration Tasks

Before configuring traffic steering, complete the following tasks:

  • Configure BGP routes, static routes, BGP4+ 6PE services, BGP L3VPN services, BGP L3VPNv6 services, or EVPN services correctly.

  • Configure an IP prefix list and a tunnel policy if you want to restrict the route to be recursed to the specified SR-MPLS TE Policy.

Procedure

  1. Configure a tunnel policy.

    Use either of the following procedures based on the traffic steering mode you select:

    • Color-based traffic steering

      1. Run system-view

        The system view is displayed.

      2. Run tunnel-policy policy-name

        A tunnel policy is created and the tunnel policy view is displayed.

      3. (Optional) Run description description-information

        A description is configured for the tunnel policy.

      4. Run tunnel select-seq sr-te-policy load-balance-number load-balance-number unmix

        The tunnel selection sequence and number of tunnels for load balancing are configured.

      5. Run commit

        The configuration is committed.

      6. Run quit

        Exit the tunnel policy view.
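
      Putting the color-based steps above together, the configuration might look like the following sketch. The policy name and load-balancing number are examples, not mandated values:

      ```
      system-view
      tunnel-policy p1
       description color-based steering to SR-MPLS TE Policies
       tunnel select-seq sr-te-policy load-balance-number 2 unmix
       commit
       quit
      ```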

    • DSCP-based traffic steering

      1. Run system-view

        The system view is displayed.

      2. Run segment-routing

        The Segment Routing view is displayed.

      3. Run sr-te-policy group group-value

        An SR-MPLS TE Policy group is created and the SR-MPLS TE Policy group view is displayed.

      4. Run endpoint ipv4-address

        An endpoint is configured for the SR-MPLS TE Policy group.

      5. Run color color-value match dscp { ipv4 | ipv6 } { { dscp-value1 [ to dscp-value2 ] } &<1-64> | default }

        The mapping between the color values of SR-MPLS TE Policies in an SR-MPLS TE Policy group and the DSCP values of packets is configured.

        Each SR-MPLS TE Policy in an SR-MPLS TE Policy group has its own color attribute. You can run the color match dscp command to configure the mapping between color and DSCP values, thereby associating DSCP values, color values, and SR-MPLS TE Policies in an SR-MPLS TE Policy group. IP packets can then be steered into the specified SR-MPLS TE Policy based on their DSCP values.

        When using the color match dscp command, pay attention to the following points:
        1. You can configure a separate color-DSCP mapping for both the IPv4 address family and IPv6 address family. In the same address family (IPv4/IPv6), each DSCP value can be associated with only one SR-MPLS TE Policy. Furthermore, the association can be performed for an SR-MPLS TE Policy only if this policy is up.

        2. The color color-value match dscp { ipv4 | ipv6 } default command can be used to specify a default SR-MPLS TE Policy in an address family (IPv4/IPv6). If a DSCP value is not associated with any SR-MPLS TE Policy in an SR-MPLS TE Policy group, the packets carrying this DSCP value are forwarded over the default SR-MPLS TE Policy. Each address family (IPv4/IPv6) in an SR-MPLS TE Policy group can have only one default SR-MPLS TE Policy.

        3. In scenarios where no default SR-MPLS TE Policy is specified for an address family (IPv4/IPv6) in an SR-MPLS TE Policy group:
          1. If the mapping between color and DSCP values is configured for the group but only some of the DSCP values are associated with SR-MPLS TE Policies, the packets carrying DSCP values that are not associated with SR-MPLS TE Policies are forwarded over the SR-MPLS TE Policy associated with the smallest DSCP value in the address family.
          2. If no DSCP value is associated with an SR-MPLS TE Policy in the group (for example, the mapping between color and DSCP values is not configured, or DSCP values are not successfully associated with SR-MPLS TE Policies after the mapping is configured), the default SR-MPLS TE Policy in the other address family (IPv4/IPv6) is used to forward packets. If no default SR-MPLS TE Policy is specified for the other address family, packets are forwarded over the SR-MPLS TE Policy associated with the smallest DSCP value in the local address family.
      6. Run quit

        Return to the SR view.

      7. Run quit

        Return to the system view.

      8. Run tunnel-policy policy-name

        A tunnel policy is created and the tunnel policy view is displayed.

      9. (Optional) Run description description-information

        A description is configured for the tunnel policy.

      10. Run tunnel binding destination dest-ip-address sr-te-policy group sr-te-policy-group-id [ ignore-destination-check ] [ down-switch ]

        A tunnel binding policy is configured to bind the destination IP address and SR-MPLS TE Policy group.

        The ignore-destination-check keyword disables the check on consistency between the destination IP address specified using destination dest-ip-address and the endpoint of the corresponding SR-MPLS TE Policy.

      11. Run commit

        The configuration is committed.

      12. Run quit

        Exit the tunnel policy view.
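
      Putting the DSCP-based steps above together, the configuration might look like the following sketch. The group ID, endpoint address, color values, and DSCP values are examples:

      ```
      system-view
      segment-routing
       sr-te-policy group 1
        endpoint 10.1.1.9
        # Packets with DSCP 10-20 map to the SR-MPLS TE Policy with color 100.
        color 100 match dscp ipv4 10 to 20
        # All other DSCP values map to the default policy with color 200.
        color 200 match dscp ipv4 default
        quit
       quit
      tunnel-policy p2
       tunnel binding destination 10.1.1.9 sr-te-policy group 1
       commit
       quit
      ```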

  2. Configure a route or service to be recursed to an SR-MPLS TE Policy.
    • Configure a non-labeled public BGP route to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a non-labeled public BGP route, see Configuring Basic BGP Functions.

      1. Run route recursive-lookup tunnel [ ip-prefix ip-prefix-name ] [ tunnel-policy policy-name ]

        The function to recurse a non-labeled public network route to an SR-MPLS TE Policy is enabled.

      2. Run commit

        The configuration is committed.

    • Configure a static route to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a static route, see Configuring IPv4 Static Routes.

      The color attribute cannot be added to static routes. Therefore, static routes support only DSCP-based traffic steering to SR-MPLS TE Policies, not color-based traffic steering to SR-MPLS TE Policies.

      1. Run ip route-static recursive-lookup tunnel [ ip-prefix ip-prefix-name ] [ tunnel-policy policy-name ]

        The function to recurse a static route to an SR-MPLS TE Policy for MPLS forwarding is enabled.

      2. Run commit

        The configuration is committed.
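
      For example, assuming an IP prefix list named prefix1 and a tunnel policy named p2 have already been configured, the static route recursion sketch is:

      ```
      system-view
      ip route-static recursive-lookup tunnel ip-prefix prefix1 tunnel-policy p2
      commit
      ```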

    • Configure a BGP L3VPN service to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a BGP L3VPN service, see Configuring a Basic BGP/MPLS IP VPN.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      3. Run tnl-policy policy-name

        A tunnel policy is applied to the VPN instance IPv4 address family.

      4. (Optional) Run default-color color-value

        The default color value is specified for the L3VPN service to recurse to an SR-MPLS TE Policy. If a remote VPN route that does not carry the Color Extended Community is leaked to a local VPN instance, the default color value is used for the recursion to an SR-MPLS TE Policy.

      5. Run commit

        The configuration is committed.
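
      The steps above can be sketched as follows. The VPN instance name, tunnel policy name, and color value are examples:

      ```
      system-view
      ip vpn-instance vpna
       ipv4-family
        tnl-policy p1
        default-color 100
        commit
      ```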

    • Configure a BGP L3VPNv6 service to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a BGP L3VPNv6 service, see Configuring a Basic BGP/MPLS IPv6 VPN.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

      3. Run tnl-policy policy-name

        A tunnel policy is applied to the VPN instance IPv6 address family.

      4. (Optional) Run default-color color-value

        The default color value is specified for the L3VPNv6 service to recurse to an SR-MPLS TE Policy. If a remote VPN route that does not carry the Color Extended Community is leaked to a local VPN instance, the default color value is used for the recursion to an SR-MPLS TE Policy.

      5. Run commit

        The configuration is committed.

    • Configure a BGP4+ 6PE service to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a BGP4+ 6PE service, see Configuring BGP4+ 6PE.

      1. Run bgp { as-number-plain | as-number-dot }

        The BGP view is displayed.

      2. Run ipv6-family unicast

        The BGP IPv6 unicast address family view is displayed.

      3. Run peer ipv4-address enable

        A 6PE peer is enabled.

      4. Run peer ipv4-address tnl-policy tnl-policy-name

        A tunnel policy is applied to the 6PE peer.

      5. Run commit

        The configuration is committed.
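
      The 6PE steps above can be sketched as follows, assuming the peer has already been created in the BGP view. The AS numbers, peer address, and tunnel policy name are examples:

      ```
      system-view
      bgp 100
       peer 10.2.2.2 as-number 200
       ipv6-family unicast
        peer 10.2.2.2 enable
        peer 10.2.2.2 tnl-policy p1
        commit
      ```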

    • Configure an EVPN service to be recursed to an SR-MPLS TE Policy.

      For details about how to configure an EVPN service, see Configuring EVPN VPLS over MPLS (BD EVPN Instance).

      To apply a tunnel policy to an EVPN L3VPN instance, perform the following steps:
      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family or ipv6-family

        The VPN instance IPv4/IPv6 address family view is displayed.

      3. Run tnl-policy policy-name evpn

        A tunnel policy is applied to the EVPN L3VPN instance.

      4. (Optional) Run default-color color-value evpn

        The default color value is specified for the EVPN L3VPN service to recurse to an SR-MPLS TE Policy.

        If a remote EVPN route that does not carry the Color Extended Community is leaked to a local VPN instance, the default color value is used for the recursion to an SR-MPLS TE Policy.

      5. Run commit

        The configuration is committed.

      To apply a tunnel policy to a BD EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name bd-mode

        The BD EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the BD EVPN instance.

      3. (Optional) Run default-color color-value

        The default color value is specified for the EVPN service to recurse to an SR-MPLS TE Policy. If a remote EVPN route that does not carry the Color Extended Community is leaked to a local EVPN instance, the default color value is used for the recursion to an SR-MPLS TE Policy.

      4. Run commit

        The configuration is committed.

      To apply a tunnel policy to an EVPN instance that works in EVPN VPWS mode, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name vpws

        The view of the EVPN instance that works in EVPN VPWS mode is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the EVPN instance that works in EVPN VPWS mode.

      3. (Optional) Run default-color color-value

        The default color value is specified for the EVPN service to recurse to an SR-MPLS TE Policy. If a remote EVPN route that does not carry the Color Extended Community is leaked to a local EVPN instance, the default color value is used for the recursion to an SR-MPLS TE Policy.

      4. Run commit

        The configuration is committed.

      To apply a tunnel policy to a basic EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name

        The EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the basic EVPN instance.

      3. Run commit

        The configuration is committed.
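
      For example, applying a tunnel policy to an EVPN L3VPN instance and to a BD EVPN instance can be sketched as follows. The instance names, tunnel policy name, and color value are examples:

      ```
      system-view
      ip vpn-instance vpnb
       ipv4-family
        tnl-policy p1 evpn
        default-color 100 evpn
        quit
       quit
      evpn vpn-instance evpna bd-mode
       tnl-policy p1
       default-color 100
       commit
      ```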

Verifying the SR-MPLS TE Policy Configuration

After configuring SR-MPLS TE Policies, verify the configuration.

Prerequisites

SR-MPLS TE Policies have been configured.

Procedure

  1. Run the display sr-te policy [ endpoint ipv4-address color color-value | policy-name name-value ] command to check SR-MPLS TE Policy details.
  2. Run the display sr-te policy statistics command to check SR-MPLS TE Policy statistics.
  3. Run the display sr-te policy status { endpoint ipv4-address color color-value | policy-name name-value } command to check the status of a specified SR-MPLS TE Policy and determine why it cannot go up.
  4. Run the display sr-te policy last-down-reason [ endpoint ipv4-address color color-value | policy-name name-value ] command to check records about events where SR-MPLS TE Policies or segment lists in SR-MPLS TE Policies go down.
  5. Run the display ip vpn-instance vpn-instance-name tunnel-info nexthop nexthopIpv4Addr command to display recursion information about the routes that match the specified next hop in each address family of the current VPN instance.
  6. Run the display evpn vpn-instance [ name vpn-instance-name ] tunnel-info command to display information about the tunnels associated with the EVPN instance.
  7. Run the display evpn vpn-instance name vpn-instance-name tunnel-info nexthop nexthopIpv4Addr command to display information about a tunnel associated with an EVPN instance and a specified next-hop address.

Configuring an SR-MPLS TE Policy (in ODN Mode)

SR-MPLS TE Policy is a tunneling technology developed based on SR.

Usage Scenario

SR-MPLS TE Policies can either be manually configured on forwarders or be dynamically generated by a controller and then delivered from the controller to forwarders through a protocol such as BGP. In addition, they can be generated in on-demand next hop (ODN) mode.

The process of generating an SR-MPLS TE Policy in ODN mode is as follows:

  1. The controller collects information, such as network topology and label information, through BGP-LS.

  2. Color extended community attribute information is added to routes to specify the SLA requirements of each route, such as requiring a low-latency or high-bandwidth path.
  3. An ODN template is configured for the headend forwarder to match the color extended community attributes of BGP routes against the color value in the ODN template. After the matching succeeds, the headend forwarder (PCC) sends a path computation request to the controller (PCE).
  4. The controller computes an SR-MPLS TE Policy path and delivers it to the headend forwarder through PCEP. The headend forwarder then generates an SR-MPLS TE Policy entry.

Pre-configuration Tasks

Before configuring an SR-MPLS TE Policy, complete the following tasks:

  • Configure an IGP to implement network layer connectivity.

  • Configure a routing protocol between the controller and forwarders to ensure that they can communicate.

Configuring TE Attributes

Configure TE attributes for links so that SR-MPLS TE paths can be adjusted based on the TE attributes during path computation.

Context

TE link attributes are as follows:

  • Link bandwidth

    This attribute must be configured if you want to limit the bandwidth of an SR-MPLS TE tunnel link.

  • Dynamic link bandwidth

    Dynamic bandwidth can be configured if you want TE to be aware of physical bandwidth changes on interfaces.

  • TE metric of a link

    Either the IGP metric or TE metric of a link can be used during SR-MPLS TE path computation. If the TE metric is used, SR-MPLS TE path computation is independent of the IGP metric, enabling more flexible tunnel path control.

  • Administrative group and affinity attribute

    An SR-MPLS TE tunnel's affinity attribute determines its link attribute. The affinity attribute and link administrative group are used together to determine the links that can be used by the SR-MPLS TE tunnel.

  • SRLG

    A shared risk link group (SRLG) is a group of links that share a public physical resource, such as an optical fiber. Links in an SRLG are at the same risk of faults. If one of the links fails, other links in the SRLG also fail.

    An SRLG enhances SR-MPLS TE reliability on a network with CR-LSP hot standby or TE FRR enabled. Links that share the same physical resource have the same risk. For example, links on an interface and its sub-interfaces are in an SRLG. The interface and its sub-interfaces have the same risk. If the interface goes down, its sub-interfaces will also go down. Similarly, if the link of the primary path of an SR-MPLS TE tunnel and the links of the backup paths of the SR-MPLS TE tunnel are in an SRLG, the backup paths will most likely go down when the primary path goes down.

Procedure

  • (Optional) Configure link bandwidth.

    Link bandwidth needs to be configured only on outbound interfaces of SR-MPLS TE tunnel links that have bandwidth requirements.

    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te bandwidth max-reservable-bandwidth max-bw-value

      The maximum reservable link bandwidth is configured.

    4. Run mpls te bandwidth bc0 bc0-bw-value

      The BC0 bandwidth is configured.

      • The maximum reservable link bandwidth cannot be higher than the physical link bandwidth. You are advised to set the maximum reservable link bandwidth to be less than or equal to 80% of the physical link bandwidth.

      • The BC0 bandwidth cannot be higher than the maximum reservable link bandwidth.

    5. Run commit

      The configuration is committed.
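
    For example, the bandwidth steps above can be sketched as follows. The interface name is an example; the values assume a 1 Gbit/s interface with bandwidth expressed in kbit/s, so 800000 corresponds to the recommended 80% of the physical bandwidth:

    ```
    system-view
    interface GigabitEthernet0/1/0
     mpls te bandwidth max-reservable-bandwidth 800000
     mpls te bandwidth bc0 800000
     commit
    ```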

  • (Optional) Configure dynamic link bandwidth.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te bandwidth max-reservable-bandwidth dynamic max-dynamic-bw-value

      The maximum reservable dynamic link bandwidth is configured.

      If this command is run in the same interface view as the mpls te bandwidth max-reservable-bandwidth command, the later configuration overrides the previous one.

    4. (Optional) Run mpls te bandwidth max-reservable-bandwidth dynamic baseline remain-bandwidth

      The device is configured to use the remaining bandwidth of the interface when calculating the maximum dynamic reservable bandwidth for TE.

      In scenarios such as channelized sub-interface and bandwidth lease, the remaining bandwidth of an interface changes, but the physical bandwidth does not. In this case, the actual forwarding capability of the interface decreases; however, the dynamic maximum reservable bandwidth of the TE tunnel is still calculated based on the physical bandwidth. As a result, the calculated TE bandwidth is greater than the actual bandwidth, and the actual forwarding capability of the interface does not meet the bandwidth requirement of the tunnel.

    5. Run mpls te bandwidth dynamic bc0 bc0-bw-percentage

      The BC0 dynamic bandwidth is configured for the link.

      If this command is run in the same interface view as the mpls te bandwidth bc0 command, the later configuration overrides the previous one.

    6. Run commit

      The configuration is committed.

  • (Optional) Configure a TE metric for a link.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te metric metric-value

      A TE metric is configured for the link.

    4. Run commit

      The configuration is committed.

  • (Optional) Configure the administrative group and affinity attribute in hexadecimal format.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te link administrative group group-value

      A TE link administrative group is configured.

    4. Run commit

      The configuration is committed.

  • (Optional) Configure the administrative group and affinity attribute based on the affinity and administrative group names.
    1. Run system-view

      The system view is displayed.

    2. Run path-constraint affinity-mapping

      An affinity name template is configured, and the template view is displayed.

      This template must be configured on each node involved in SR-MPLS TE path computation, and the global mappings between the names and values of affinity bits must be the same on all the involved nodes.

    3. Run attribute bit-name bit-sequence bit-number

      Mappings between affinity bit values and names are configured.

      This step configures only one bit of an affinity attribute, which has a total of 32 bits. Repeat this step as needed to configure some or all of the bits.

    4. Run quit

      Return to the system view.

    5. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    6. Run mpls te link administrative group name { name-string } &<1-32>

      A link administrative group is configured.

      The name-string value must be in the range specified for the affinity attribute in the template.

    7. Run commit

      The configuration is committed.
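
    The name-based steps above can be sketched as follows. The affinity bit names, bit positions, and interface name are examples; the same affinity-mapping template must be configured on every node involved in path computation:

    ```
    system-view
    path-constraint affinity-mapping
     attribute green bit-sequence 1
     attribute red bit-sequence 2
     quit
    interface GigabitEthernet0/1/0
     mpls te link administrative group name green red
     commit
    ```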

  • (Optional) Configure an SRLG.
    1. Run system-view

      The system view is displayed.

    2. Run interface interface-type interface-number

      The interface view of a TE link is displayed.

    3. Run mpls te srlg srlg-number

      The interface is added to an SRLG.

      In a hot-standby or TE FRR scenario, you need to configure SRLG attributes for the SR-MPLS TE outbound interface of the ingress and other member links in the SRLG to which the interface belongs. A link joins an SRLG only after SRLG attributes are configured for any outbound interface of the link.

      To delete the SRLG attribute configurations of all interfaces on the local node, run the undo mpls te srlg all-config command.

    4. Run commit

      The configuration is committed.
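
    For example, adding an interface and its sub-interface (which share the same physical resource and therefore the same risk) to one SRLG can be sketched as follows. The interface names and SRLG number are examples:

    ```
    system-view
    interface GigabitEthernet0/1/0
     mpls te srlg 100
     quit
    interface GigabitEthernet0/1/0.1
     mpls te srlg 100
     commit
    ```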

Configuring IGP SR

This section describes how to configure IGP SR.

Context

An SR-MPLS TE Policy may contain multiple candidate paths that are defined using segment lists. Because adjacency and node SIDs are required for SR-MPLS TE Policy configuration, you need to configure the IGP SR function. Node SIDs need to be configured manually, whereas adjacency SIDs can either be generated dynamically by an IGP or be configured manually.
  • In scenarios where SR-MPLS TE Policies are configured manually, if dynamic adjacency SIDs are used, the adjacency SIDs may change after an IGP restart. To keep the SR-MPLS TE Policies up, you must adjust them manually. The adjustment workload is heavy if a large number of SR-MPLS TE Policies are configured manually. Therefore, you are advised to configure adjacency SIDs manually and not to use adjacency SIDs generated dynamically.
  • In scenarios where SR-MPLS TE Policies are delivered by a controller dynamically, you are also advised to configure adjacency SIDs manually. Although the controller can use BGP-LS to detect adjacency SID changes, the adjacency SIDs dynamically generated by an IGP change randomly, causing inconvenience in routine maintenance and troubleshooting.

Procedure

  1. Enable MPLS.
    1. Run system-view

      The system view is displayed.

    2. Run mpls lsr-id lsr-id

      An LSR ID is configured for the device.

      Note the following during LSR ID configuration:
      • Configuring LSR IDs is the prerequisite for all MPLS configurations.

      • LSRs do not have default IDs. LSR IDs must be manually configured.

      • Using a loopback interface address as the LSR ID is recommended for an LSR.

    3. (Optional) Run mpls

      MPLS is enabled, and the MPLS view is displayed.

      SR-MPLS uses the MPLS forwarding plane, and therefore requires MPLS to be enabled. However, the MPLS capability is automatically enabled on an interface when any of the following conditions is met, without the need to perform this step:

      • MPLS is enabled on the interface.
      • Segment Routing is enabled for IGP, and IGP is enabled on the interface.
      • A static adjacency label is configured for the interface in the Segment Routing view.

    4. Run commit

      The configuration is committed.

    5. Run quit

      Return to the system view.

  2. Enable SR globally.
    1. Run segment-routing

      Segment Routing is enabled globally, and the Segment Routing view is displayed.

    2. (Optional) Run local-block begin-value end-value [ ignore-conflict ]

      An SR-MPLS-specific SRLB range is configured.

      If the system displays a message indicating that the SRLB is used, run the display segment-routing local-block command to view the SRLB ranges that can be set. Alternatively, delete unwanted configurations related to the used label to release the label space.

      When the ignore-conflict parameter is specified in the local-block command, if a label conflict occurs, the configuration is forcibly delivered but does not occupy label resources. This function is mainly used for pre-deployment, and the configuration takes effect after the device is restarted.

    3. Run commit

      The configuration is committed.

    4. Run quit

      Return to the system view.
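
    Steps 1 and 2 above can be sketched as follows. The LSR ID (typically a loopback interface address) and the SRLB range are examples:

    ```
    system-view
    mpls lsr-id 1.1.1.1
    mpls
     quit
    segment-routing
     local-block 160000 161000
     quit
    commit
    ```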

  3. Configure IGP SR.
    • If the IGP is IS-IS, configure IGP SR by referring to Configuring Basic SR-MPLS TE Functions.

      In scenarios where SR-MPLS TE Policies need to be manually configured, you are advised to use manually configured adjacency SIDs. In scenarios where a controller dynamically delivers SR-MPLS TE Policies, you are advised to preferentially use manually configured adjacency SIDs. However, you can also use IS-IS to dynamically generate adjacency SIDs.

    • If the IGP is OSPF, configure IGP SR by referring to Configuring Basic SR-MPLS TE Functions.

      Because OSPF cannot advertise manually configured adjacency SIDs, only dynamically generated adjacency SIDs can be used.

Configuring IGP SR-MPLS TE and Topology Reporting

Before an SR-MPLS TE tunnel is established, you need to enable IGP SR-MPLS TE and topology reporting through BGP-LS.

Context

An IGP collects network topology information including the link cost, latency, and packet loss rate and advertises the information to BGP-LS, which then reports the information to a controller. The controller can compute an SR-MPLS TE tunnel based on link cost, latency, packet loss rate, and other factors to meet various service requirements.

This section mainly involves the following operations:

  1. Configure IGP SR-MPLS TE.
  2. Configure IGP topology information to be sent to BGP-LS.
  3. Configure a BGP-LS peer relationship between the forwarder and controller so that the forwarder can report topology information to the controller through BGP-LS.

Configuring a BGP Extended Community

This section describes how to add a BGP extended community, that is, the Color Extended Community, to a route through a route-policy, enabling the route to be recursed to an SR-MPLS TE Policy based on the color value and next-hop address in the route.

Context

The route coloring process is as follows:

  1. Configure a route-policy and set a specific color value for the desired route.

  2. Apply the route-policy to a BGP peer or a VPN instance as an import or export policy.

Procedure

  1. Configure a route-policy.
    1. Run system-view

      The system view is displayed.

    2. Run route-policy route-policy-name { deny | permit } node node

      A route-policy with a specified node is created, and the route-policy view is displayed.

    3. (Optional) Configure an if-match clause as a route-policy filter criterion. The Color Extended Community can be added or modified only for routes that match the filter criterion.

      For details about the configuration, see (Optional) Configuring an if-match Clause.

    4. Run apply extcommunity color color

      The Color extended community is configured.

    5. Run commit

      The configuration is committed.
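
      The route-policy steps above can be sketched as follows. The prefix list, policy name, node number, and color value are examples, and the 0:100 format for the color value is an assumption about how the value is typically written:

      ```
      system-view
      ip ip-prefix prefix1 index 10 permit 10.10.10.0 24
      route-policy color100 permit node 10
       if-match ip-prefix prefix1
       apply extcommunity color 0:100
       commit
      ```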

  2. Apply the route-policy.
    • Apply the route-policy to a BGP IPv4 unicast peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv4-family unicast

        The IPv4 unicast address family view is displayed.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      6. Run commit

        The configuration is committed.
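
        The steps above can be sketched as follows, assuming a route-policy named color100 has already been configured. The AS number and peer address are examples:

        ```
        system-view
        bgp 100
         peer 10.3.3.3 as-number 100
         ipv4-family unicast
          peer 10.3.3.3 route-policy color100 export
          commit
        ```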

    • Apply the route-policy to a BGP4+ 6PE peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP4+ 6PE peer is created.

      4. Run ipv6-family unicast

        The IPv6 unicast address family view is displayed.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP4+ 6PE import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP VPNv4 peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv4-family vpnv4

        The BGP VPNv4 address family view is displayed.

      5. Run peer { ipv4-address | group-name } enable

        The BGP VPNv4 peer relationship is enabled.

      6. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      7. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP VPNv6 peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run peer { ipv4-address | group-name } as-number { as-number-plain | as-number-dot }

        A BGP peer is created.

      4. Run ipv6-family vpnv6

        The BGP VPNv6 address family view is displayed.

      5. Run peer { ipv4-address | group-name } enable

        The BGP VPNv6 peer relationship is enabled.

      6. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP import or export route-policy is configured.

      7. Run commit

        The configuration is committed.

    • Apply the route-policy to a BGP EVPN peer.
      1. Run system-view

        The system view is displayed.

      2. Run bgp as-number

        The BGP view is displayed.

      3. Run l2vpn-family evpn

        The BGP EVPN address family view is displayed.

      4. Run peer { ipv4-address | group-name } enable

        The BGP EVPN peer relationship is enabled.

      5. Run peer { ipv4-address | group-name } route-policy route-policy-name { import | export }

        A BGP EVPN import or export route-policy is configured.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a VPN instance IPv4 address family.
      1. Run system-view

        The system view is displayed.

      2. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      3. Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      4. Run import route-policy policy-name

        An import route-policy is configured for the VPN instance IPv4 address family.

      5. Run export route-policy policy-name

        An export route-policy is configured for the VPN instance IPv4 address family.

      6. Run commit

        The configuration is committed.

    • Apply the route-policy to a VPN instance IPv6 address family.
      1. Run system-view

        The system view is displayed.

      2. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      3. Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

      4. Run import route-policy policy-name

        An import route-policy is configured for the VPN instance IPv6 address family.

      5. Run export route-policy policy-name

        An export route-policy is configured for the VPN instance IPv6 address family.

      6. Run commit

        The configuration is committed.
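As a sketch, the BGP VPNv4-peer and VPN-instance variants above might be combined as follows. This is an illustrative fragment only: the AS number (100), peer address (10.2.2.2), RD (100:1), instance name (vpna), and route-policy names (RP-IN, RP-OUT) are hypothetical placeholders, and the referenced route-policies must already exist.

```
<HUAWEI> system-view
[~HUAWEI] bgp 100
[*HUAWEI-bgp] peer 10.2.2.2 as-number 100
[*HUAWEI-bgp] ipv4-family vpnv4
[*HUAWEI-bgp-af-vpnv4] peer 10.2.2.2 enable
[*HUAWEI-bgp-af-vpnv4] peer 10.2.2.2 route-policy RP-IN import
[*HUAWEI-bgp-af-vpnv4] quit
[*HUAWEI-bgp] quit
[*HUAWEI] ip vpn-instance vpna
[*HUAWEI-vpn-instance-vpna] ipv4-family
[*HUAWEI-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*HUAWEI-vpn-instance-vpna-af-ipv4] import route-policy RP-IN
[*HUAWEI-vpn-instance-vpna-af-ipv4] export route-policy RP-OUT
[*HUAWEI-vpn-instance-vpna-af-ipv4] commit
```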

Configuring an ODN Template and PCEP Path Computation Constraints

The on-demand next hop (ODN) function does not require a large number of SR-MPLS TE Policies to be configured in advance. Instead, it enables SR-MPLS TE Policy creation to be dynamically triggered on demand based on service routes, simplifying network operations.

Context

During SR-MPLS TE Policy creation, you can select a pre-configured attribute template and constraint template to ensure that the to-be-created SR-MPLS TE Policy meets service requirements. During SR-MPLS TE Policy constraint configuration, you can apply a global constraint template or apply a specific constraint template to a candidate path of the SR-MPLS TE Policy. If both templates are applied, the template applied to the candidate path of the SR-MPLS TE Policy takes precedence.

Procedure

  1. Configure an SR-MPLS TE Policy attribute template.
    1. Run the system-view command to enter the system view.
    2. Run the segment-routing policy-template template-value command to create an SR Policy attribute template and enter the SR Policy template view.
    3. (Optional) Run the description description-value command to configure a description for the attribute template.
    4. Run the bfd seamless enable command to enable the BFD function in the attribute template.
    5. Run the bfd bypass command to enable the BFD bypass function in the attribute template.
    6. Run the backup hot-standby { enable | disable } command to enable or disable the hot-standby function in the attribute template.
    7. Run the traffic-statistics { enable | disable } command to enable or disable the traffic statistics collection function in the attribute template.
    8. Run the quit command to return to the system view.
    9. Run the commit command to commit the configuration.
  2. Configure an SR-MPLS TE Policy constraint template.

    1. Run the system-view command to enter the system view.
    2. Run the segment-routing policy constraint-template constraintName command to create an SR-MPLS TE Policy constraint template and enter the constraint template view.
    3. Run the priority setup-priority hold-priority command to configure the path setup priority and hold priority in the constraint template.
    4. Run the affinity { include-all | include-any | exclude } { affinity-name } &<1-32> command to configure affinity constraints in the constraint template.
    5. Run the bandwidth ct0 bandwidth-value command to configure a bandwidth constraint in the constraint template.
    6. Run the constraint-path constraint-path-name command to configure a path constraint in the constraint template.
      The path constraint specified using the constraint-path-name parameter is the one configured using the segment-routing policy constraint-path path-name command. As such, the value of this parameter is the same as that of the path-name parameter. To configure the constraint-path-name parameter, perform the following steps:
      1. Run the segment-routing policy constraint-path path-name command to create an SR-MPLS TE Policy path constraint and enter the path constraint view.
      2. Run the index index-value address ipv4 ipv4-address [ include [ strict | loose ] | exclude ] command to specify the next-hop IP address in the path constraint.
    7. Run the link-bandwidth utilization utilization-value command to configure bandwidth usage in the constraint template.
    8. Run the metric-type { igp | te | delay | hop-count } command to configure a metric type in the constraint template.

      After completing the configuration, run the following commands to configure the corresponding constraint values:

      • Run the max-cumulation igp max-igp-cost command to configure the maximum IGP cost in the constraint template.
      • Run the max-cumulation te max-te-cost command to configure the maximum TE cost in the constraint template.
      • Run the max-cumulation delay max-delay command to configure the maximum delay in the constraint template.
      • Run the max-cumulation hop-count max-hop-count command to configure the maximum number of hops in the constraint template.
    9. (Optional) Run the sid selection { unprotected-preferred | protected-preferred | unprotected-only | protected-only } command to configure a SID selection rule in the constraint template.

      The default SID selection rule in the constraint template is unprotected-preferred, meaning that unprotected SIDs are preferred.

    10. Run the quit command to return to the system view.
    11. Run the commit command to commit the configuration.

  3. Configure an ODN template.
    1. Run the system-view command to enter the system view.
    2. Run the segment-routing command to enable SR globally and enter the Segment Routing view.
    3. Run the on-demand color colorValue command to create an SR-MPLS TE Policy ODN template and enter the SR-MPLS TE Policy ODN view.
    4. Run the dynamic-computation-seq { pcep } * command to set the dynamic path computation mode to PCEP.
    5. (Optional) Run the restrict ip-prefix prefixName command to specify the IP address prefix list to be referenced by the ODN template.

      When SR-MPLS TE Policies are generated based on an SR-MPLS TE Policy ODN template, an IP address prefix list can be used to filter BGP routes.

    6. (Optional) Run the attribute-template attributeId command to configure the ODN template to reference the SR-MPLS TE Policy attribute template.

      The template specified using the attributeId parameter is the one created using the segment-routing policy-template template-value command. As such, the value of this parameter is the same as that of the template-value parameter.

    7. (Optional) Run the constraint-template constraintName command to configure the ODN template to reference the SR-MPLS TE Policy constraint template.

      The template specified using the constraintName parameter is the one created using the segment-routing policy constraint-template constraintName command.

    8. Run the candidate-path preference pref-value command to configure a candidate path for the SR-MPLS TE Policy and specify a preference value for the path.

      Each SR-MPLS TE Policy supports multiple candidate paths. A larger pref-value indicates a higher candidate path preference. If multiple candidate paths are configured, the one with the highest preference takes effect.

    9. Run the affinity { include-all | include-any | exclude } { affinity-name } &<1-32> command to configure an affinity constraint for the candidate path.
    10. Run the constraint-path constraint-path-name command to configure a path constraint for the candidate path.

      The path constraint specified using the constraint-path-name parameter is the one configured using the segment-routing policy constraint-path path-name command. As such, the value of this parameter is the same as that of the path-name parameter. To configure the constraint-path-name parameter, perform the following steps:
      1. Run the return command to return to the system view.
      2. Run the segment-routing policy constraint-path path-name command to create an SR-MPLS TE Policy path constraint and enter the path constraint view.
      3. Run the index index-value address ipv4 ipv4-address [ include [ strict | loose ] | exclude ] command to specify the next-hop IP address in the path constraint.
      4. Run the quit command to return to the system view.
      5. Run the segment-routing command to enable SR globally and enter the Segment Routing view.
      6. Run the on-demand color colorValue command to create an SR-MPLS TE Policy ODN template and enter the SR-MPLS TE Policy ODN view.

    11. Run the link-bandwidth utilization utilization-value command to configure bandwidth usage for the candidate path.
    12. Run the metric-type { igp | te | delay | hop-count } command to configure a metric type for the candidate path.

      After completing the configuration, run the following commands to configure the corresponding constraint values:

      • Run the max-cumulation igp max-igp-cost command to configure the maximum IGP cost for the candidate path.
      • Run the max-cumulation te max-te-cost command to configure the maximum TE cost for the candidate path.
      • Run the max-cumulation delay max-delay command to configure the maximum delay for the candidate path.
      • Run the max-cumulation hop-count max-hop-count command to configure the maximum number of hops for the candidate path.

    13. (Optional) Run the sid selection { unprotected-preferred | protected-preferred | unprotected-only | protected-only } command to configure a SID selection rule for the candidate path.

      The default SID selection rule for the candidate path is unprotected-preferred, meaning that unprotected SIDs are preferred.

    14. Run the commit command to commit the configuration.
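The three configuration stages above can be sketched together as follows. This is an illustrative fragment, not verified device output: the attribute template ID (1), path constraint name (path1), constraint template name (ct1), color value (100), preference value (100), next-hop address, and command-view prompts are all hypothetical placeholders.

```
<HUAWEI> system-view
[~HUAWEI] segment-routing policy-template 1
[*HUAWEI-sr-policy-template-1] bfd seamless enable
[*HUAWEI-sr-policy-template-1] backup hot-standby enable
[*HUAWEI-sr-policy-template-1] quit
[*HUAWEI] segment-routing policy constraint-path path1
[*HUAWEI-sr-policy-constraint-path-path1] index 10 address ipv4 10.1.1.1 include strict
[*HUAWEI-sr-policy-constraint-path-path1] quit
[*HUAWEI] segment-routing policy constraint-template ct1
[*HUAWEI-sr-policy-constraint-template-ct1] constraint-path path1
[*HUAWEI-sr-policy-constraint-template-ct1] metric-type delay
[*HUAWEI-sr-policy-constraint-template-ct1] max-cumulation delay 1000
[*HUAWEI-sr-policy-constraint-template-ct1] quit
[*HUAWEI] segment-routing
[*HUAWEI-segment-routing] on-demand color 100
[*HUAWEI-segment-routing-odn-100] dynamic-computation-seq pcep
[*HUAWEI-segment-routing-odn-100] attribute-template 1
[*HUAWEI-segment-routing-odn-100] constraint-template ct1
[*HUAWEI-segment-routing-odn-100] candidate-path preference 100
[*HUAWEI-segment-routing-odn-100] commit
```

The ODN template references the attribute and constraint templates by the same values used when those templates were created, which is why the templates are configured first.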

Configuring PCEP Delegation

This section describes how to establish a PCEP session between the PCC and PCE so that the PCC can send path computation requests to the PCE and receive SR-MPLS TE Policy information from the PCE.

Context

If the PCC has delegated the control permission to the PCE, the PCE recomputes a path when network information (e.g., topology information) or ODN template information changes. The PCE sends a PCUpd message to deliver information about the recomputed path to the PCC and uses the PLSP-ID reported by the PCC as an identifier. After receiving the PCUpd message delivered by the PCE, the PCC (headend) updates the path.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run pce-client

    The device is configured as a PCE client, and the PCE client view is displayed.

  3. Run capability segment-routing

    The PCE client is enabled to process SR-MPLS TE Policies.

  4. Run connect-server ip-address

    A candidate PCE server is configured for the PCE client, and the PCE server connection view is displayed.

    The ip-address parameter specifies the IP address of the PCE server. You can repeat this step to configure multiple candidate servers for backup purposes.

  5. (Optional) Run preference preference

    A preference is configured for the candidate PCE server.

    The value of preference is an integer ranging from 0 to 7. A larger value indicates a higher preference.

    You can configure multiple candidate PCE servers for a PCE client and specify different preferences for these servers. The candidate PCE server with the highest preference is preferentially selected for path computation.

    If no preference is specified, the default value 0 is used to indicate the lowest preference. If multiple PCE servers have the same preference, the PCE server with the smallest IP address is preferentially selected for path computation.

  6. (Optional) Run source-interface port-type port-num

    A source IP address is configured for the PCEP session.

    By default, the PCEP session is established using the MPLS LSR-ID as the source IP address. In scenarios where the MPLS LSR-ID is unreachable, you can run this command to borrow the IP address of another local interface as the source IP address.

  7. Run commit

    The configuration is committed.
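The PCEP delegation steps above might look as follows on the PCC. This is a sketch: the PCE server address (10.2.2.2) and preference value (7) are hypothetical, and the PCE server itself must be reachable and separately configured.

```
<HUAWEI> system-view
[~HUAWEI] pce-client
[*HUAWEI-pce-client] capability segment-routing
[*HUAWEI-pce-client] connect-server 10.2.2.2
[*HUAWEI-pce-client-connect-server-10.2.2.2] preference 7
[*HUAWEI-pce-client-connect-server-10.2.2.2] commit
```

Because 7 is the highest configurable preference, this server would be selected over any other candidate server configured with the default preference of 0.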

Follow-up Procedure

For more PCEP-related configurations, see Configuring PCEP to Trigger SR-MPLS TE Policy Establishment.

(Optional) Configuring Cost Values for SR-MPLS TE Policies

Configure cost values for SR-MPLS TE Policies so that the ingress can select the optimal SR-MPLS TE Policy based on the values.

Context

By default, the cost (or metric) value of an SR-MPLS TE Policy is independent of IGP cost values, that is, the cost values of the IGP routes on which the SR-MPLS TE Policy depends. The default cost value of an SR-MPLS TE Policy is 0. As a result, the ingress cannot perform cost-based SR-MPLS TE Policy selection. To address this problem, you can either configure SR-MPLS TE Policies to inherit IGP cost values or directly configure absolute cost values for the SR-MPLS TE Policies.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run segment-routing

    SR is enabled globally, and the SR view is displayed.

  3. Run sr-te policy policy-name [ endpoint ipv4-address color color-value ]

    An SR-MPLS TE Policy with the specified endpoint and color is created, and the SR-MPLS TE Policy view is displayed.

  4. Run metric inherit-igp

    The SR-MPLS TE Policy is configured to inherit the IGP cost.

  5. Run igp metric absolute absoluteValue

    An absolute cost is configured for the SR-MPLS TE Policy.

    When an SR-MPLS TE Policy is configured to inherit the IGP cost, the cost of the SR-MPLS TE Policy is reported to the tunnel management module according to the following rules:

    • If the igp metric absolute command is run to configure an absolute IGP cost, the configured cost is reported as the cost of the SR-MPLS TE Policy.
    • If the igp metric absolute command is not run, the cost of the route that has a 32-bit mask and is destined for the endpoint of the SR-MPLS TE Policy is subscribed to and reported as the cost of the SR-MPLS TE Policy. In this case, the route cost may be different from the actual path cost of the SR-MPLS TE Policy.
    • If the metric inherit-igp command is not run, the cost of the SR-MPLS TE Policy reported to the tunnel management module is 0.

  6. Run commit

    The configuration is committed.
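The cost configuration above can be sketched as follows. The policy name (policy1), endpoint (10.3.3.3), color (100), and absolute cost (50) are hypothetical placeholders.

```
<HUAWEI> system-view
[~HUAWEI] segment-routing
[*HUAWEI-segment-routing] sr-te policy policy1 endpoint 10.3.3.3 color 100
[*HUAWEI-segment-routing-te-policy-policy1] metric inherit-igp
[*HUAWEI-segment-routing-te-policy-policy1] igp metric absolute 50
[*HUAWEI-segment-routing-te-policy-policy1] commit
```

Per the reporting rules described above, because igp metric absolute is configured here, the value 50 (rather than the subscribed route cost) would be reported as the cost of the SR-MPLS TE Policy.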

Configuring Traffic Steering

This section describes how to configure traffic steering to recurse a route to an SR-MPLS TE Policy so that traffic can be forwarded through the path specified by the SR-MPLS TE Policy.

Usage Scenario

After an SR-MPLS TE Policy is configured, traffic needs to be steered into the policy for forwarding. This process is called traffic steering. Currently, SR-MPLS TE Policies can be used for various routes and services, such as BGP routes and static routes, as well as BGP4+ 6PE, BGP L3VPN, and EVPN services. This section describes how to use tunnel policies to recurse services to SR-MPLS TE Policies.

EVPN VPWS and EVPN VPLS packets do not support DSCP-based traffic steering because they do not carry DSCP values.

Pre-configuration Tasks

Before configuring traffic steering, complete the following tasks:

  • Configure BGP routes, static routes, BGP4+ 6PE services, BGP L3VPN services, BGP L3VPNv6 services, or EVPN services correctly.

  • Configure an IP prefix list and a tunnel policy if you want to restrict the route to be recursed to the specified SR-MPLS TE Policy.

Procedure

  1. Configure a tunnel policy.

    Use either of the following procedures based on the traffic steering mode you select:

    • Color-based traffic steering

      1. Run system-view

        The system view is displayed.

      2. Run tunnel-policy policy-name

        A tunnel policy is created and the tunnel policy view is displayed.

      3. (Optional) Run description description-information

        A description is configured for the tunnel policy.

      4. Run tunnel select-seq sr-te-policy load-balance-number load-balance-number unmix

        The tunnel selection sequence and number of tunnels for load balancing are configured.

      5. Run commit

        The configuration is committed.

      6. Run quit

        Exit the tunnel policy view.

    • DSCP-based traffic steering

      1. Run system-view

        The system view is displayed.

      2. Run segment-routing

        The Segment Routing view is displayed.

      3. Run sr-te-policy group group-value

        An SR-MPLS TE Policy group is created and the SR-MPLS TE Policy group view is displayed.

      4. Run endpoint ipv4-address

        An endpoint is configured for the SR-MPLS TE Policy group.

      5. Run color color-value match dscp { ipv4 | ipv6 } { { dscp-value1 [ to dscp-value2 ] } &<1-64> | default }

        The mapping between the color values of SR-MPLS TE Policies in an SR-MPLS TE Policy group and the DSCP values of packets is configured.

        Each SR-MPLS TE Policy in an SR-MPLS TE Policy group has its own color attribute. You can run the color match dscp command to configure the mapping between color and DSCP values, thereby associating DSCP values, color values, and SR-MPLS TE Policies in an SR-MPLS TE Policy group. IP packets can then be steered into the specified SR-MPLS TE Policy based on their DSCP values.

        When using the color match dscp command, pay attention to the following points:
        1. You can configure a separate color-DSCP mapping for both the IPv4 address family and IPv6 address family. In the same address family (IPv4/IPv6), each DSCP value can be associated with only one SR-MPLS TE Policy. Furthermore, the association can be performed for an SR-MPLS TE Policy only if this policy is up.

        2. The color color-value match dscp { ipv4 | ipv6 } default command can be used to specify a default SR-MPLS TE Policy in an address family (IPv4/IPv6). If a DSCP value is not associated with any SR-MPLS TE Policy in an SR-MPLS TE Policy group, the packets carrying this DSCP value are forwarded over the default SR-MPLS TE Policy. Each address family (IPv4/IPv6) in an SR-MPLS TE Policy group can have only one default SR-MPLS TE Policy.

        3. In scenarios where no default SR-MPLS TE Policy is specified for an address family (IPv4/IPv6) in an SR-MPLS TE Policy group:
          1. If the mapping between color and DSCP values is configured for the group but only some of the DSCP values are associated with SR-MPLS TE Policies, the packets carrying DSCP values that are not associated with SR-MPLS TE Policies are forwarded over the SR-MPLS TE Policy associated with the smallest DSCP value in the address family.
          2. If no DSCP value is associated with an SR-MPLS TE Policy in the group (for example, the mapping between color and DSCP values is not configured, or DSCP values are not successfully associated with SR-MPLS TE Policies after the mapping is configured), the default SR-MPLS TE Policy in the other address family (IPv4/IPv6) is used to forward packets. If no default SR-MPLS TE Policy is specified for the other address family, packets are forwarded over the SR-MPLS TE Policy associated with the smallest DSCP value in the local address family.
      6. Run quit

        Return to the SR view.

      7. Run quit

        Return to the system view.

      8. Run tunnel-policy policy-name

        A tunnel policy is created and the tunnel policy view is displayed.

      9. (Optional) Run description description-information

        A description is configured for the tunnel policy.

      10. Run tunnel binding destination dest-ip-address sr-te-policy group sr-te-policy-group-id [ ignore-destination-check ] [ down-switch ]

        A tunnel binding policy is configured to bind the destination IP address and SR-MPLS TE Policy group.

        The ignore-destination-check keyword is used to disable the function to check the consistency between the destination IP address specified using destination dest-ip-address and the endpoint of the corresponding SR-MPLS TE Policy.

      11. Run commit

        The configuration is committed.

      12. Run quit

        Exit the tunnel policy view.

  2. Configure a route or service to be recursed to an SR-MPLS TE Policy.
    • Configure a non-labeled public BGP route to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a non-labeled public BGP route, see Configuring Basic BGP Functions.

      1. Run route recursive-lookup tunnel [ ip-prefix ip-prefix-name ] [ tunnel-policy policy-name ]

        The function to recurse a non-labeled public network route to an SR-MPLS TE Policy is enabled.

      2. Run commit

        The configuration is committed.

    • Configure a static route to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a static route, see Configuring IPv4 Static Routes.

      The color attribute cannot be added to static routes. Therefore, static routes support only DSCP-based traffic steering to SR-MPLS TE Policies, not color-based traffic steering to SR-MPLS TE Policies.

      1. Run ip route-static recursive-lookup tunnel [ ip-prefix ip-prefix-name ] [ tunnel-policy policy-name ]

        The function to recurse a static route to an SR-MPLS TE Policy for MPLS forwarding is enabled.

      2. Run commit

        The configuration is committed.

    • Configure a BGP L3VPN service to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a BGP L3VPN service, see Configuring a Basic BGP/MPLS IP VPN.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      3. Run tnl-policy policy-name

        A tunnel policy is applied to the VPN instance IPv4 address family.

      4. (Optional) Run default-color color-value

        The default color value is specified for recursing the L3VPN service to an SR-MPLS TE Policy. If a remote VPN route that does not carry the color extended community is leaked to a local VPN instance, the default color value is used for recursion to an SR-MPLS TE Policy.

      5. Run commit

        The configuration is committed.

    • Configure a BGP L3VPNv6 service to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a BGP L3VPNv6 service, see Configuring a Basic BGP/MPLS IPv6 VPN.

      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

      3. Run tnl-policy policy-name

        A tunnel policy is applied to the VPN instance IPv6 address family.

      4. (Optional) Run default-color color-value

        The default color value is specified for recursing the L3VPNv6 service to an SR-MPLS TE Policy. If a remote VPN route that does not carry the color extended community is leaked to a local VPN instance, the default color value is used for recursion to an SR-MPLS TE Policy.

      5. Run commit

        The configuration is committed.

    • Configure a BGP4+ 6PE service to be recursed to an SR-MPLS TE Policy.

      For details about how to configure a BGP4+ 6PE service, see Configuring BGP4+ 6PE.

      1. Run bgp { as-number-plain | as-number-dot }

        The BGP view is displayed.

      2. Run ipv6-family unicast

        The BGP IPv6 unicast address family view is displayed.

      3. Run peer ipv4-address enable

        A 6PE peer is enabled.

      4. Run peer ipv4-address tnl-policy tnl-policy-name

        A tunnel policy is applied to the 6PE peer.

      5. Run commit

        The configuration is committed.

    • Configure an EVPN service to be recursed to an SR-MPLS TE Policy.

      For details about how to configure an EVPN service, see Configuring EVPN VPLS over MPLS (BD EVPN Instance).

      To apply a tunnel policy to an EVPN L3VPN instance, perform the following steps:
      1. Run ip vpn-instance vpn-instance-name

        The VPN instance view is displayed.

      2. Run ipv4-family or ipv6-family

        The VPN instance IPv4/IPv6 address family view is displayed.

      3. Run tnl-policy policy-name evpn

        A tunnel policy is applied to the EVPN L3VPN instance.

      4. (Optional) Run default-color color-value evpn

        The default color value is specified for the EVPN L3VPN service to recurse to an SR-MPLS TE Policy.

        If a remote EVPN route that does not carry the color extended community is leaked to a local VPN instance, the default color value is used for recursion to an SR-MPLS TE Policy.

      5. Run commit

        The configuration is committed.

      To apply a tunnel policy to a BD EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name bd-mode

        The BD EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the BD EVPN instance.

      3. (Optional) Run default-color color-value

        The default color value is specified for recursing the EVPN service to an SR-MPLS TE Policy. If a remote EVPN route that does not carry the color extended community is leaked to a local EVPN instance, the default color value is used for recursion to an SR-MPLS TE Policy.

      4. Run commit

        The configuration is committed.

      To apply a tunnel policy to an EVPN instance that works in EVPN VPWS mode, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name vpws

        The view of the EVPN instance that works in EVPN VPWS mode is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the EVPN instance that works in EVPN VPWS mode.

      3. (Optional) Run default-color color-value

        The default color value is specified for recursing the EVPN service to an SR-MPLS TE Policy. If a remote EVPN route that does not carry the color extended community is leaked to a local EVPN instance, the default color value is used for recursion to an SR-MPLS TE Policy.

      4. Run commit

        The configuration is committed.

      To apply a tunnel policy to a basic EVPN instance, perform the following steps:
      1. Run evpn vpn-instance vpn-instance-name

        The EVPN instance view is displayed.

      2. Run tnl-policy policy-name

        A tunnel policy is applied to the basic EVPN instance.

      3. Run commit

        The configuration is committed.
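Putting the DSCP-based steering pieces together, the configuration might look as follows. This is a sketch under stated assumptions: the group ID (1), endpoint (10.3.3.3), color values (100 and 200), DSCP range (10 to 20), tunnel policy name (p1), and VPN instance name (vpna) are hypothetical, and SR-MPLS TE Policies with those color values and that endpoint must already exist and be up for the color-DSCP associations to take effect.

```
<HUAWEI> system-view
[~HUAWEI] segment-routing
[*HUAWEI-segment-routing] sr-te-policy group 1
[*HUAWEI-sr-te-policy-group-1] endpoint 10.3.3.3
[*HUAWEI-sr-te-policy-group-1] color 100 match dscp ipv4 10 to 20
[*HUAWEI-sr-te-policy-group-1] color 200 match dscp ipv4 default
[*HUAWEI-sr-te-policy-group-1] quit
[*HUAWEI-segment-routing] quit
[*HUAWEI] tunnel-policy p1
[*HUAWEI-tunnel-policy-p1] tunnel binding destination 10.3.3.3 sr-te-policy group 1
[*HUAWEI-tunnel-policy-p1] quit
[*HUAWEI] ip vpn-instance vpna
[*HUAWEI-vpn-instance-vpna] ipv4-family
[*HUAWEI-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*HUAWEI-vpn-instance-vpna-af-ipv4] commit
```

With this configuration, IPv4 VPN packets carrying DSCP values 10 through 20 would be steered into the policy with color 100, and all other DSCP values would fall back to the default policy with color 200.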

Verifying the SR-MPLS TE Policy Configuration

After configuring SR-MPLS TE Policies, verify the configuration.

Prerequisites

SR-MPLS TE Policies have been configured.

Procedure

  1. Run the display sr-te policy [ endpoint ipv4-address color color-value | policy-name name-value ] command to check SR-MPLS TE Policy details.
  2. Run the display sr-te policy statistics command to check SR-MPLS TE Policy statistics.
  3. Run the display sr-te policy status { endpoint ipv4-address color color-value | policy-name name-value } command to check SR-MPLS TE Policy status to determine the reason why a specified SR-MPLS TE Policy cannot go up.
  4. Run the display sr-te policy last-down-reason [ endpoint ipv4-address color color-value | policy-name name-value ] command to check records about events where SR-MPLS TE Policies or segment lists in SR-MPLS TE Policies go down.
  5. Run the display ip vpn-instance vpn-instance-name tunnel-info nexthop nexthopIpv4Addr command to check information about recursion of the routes with the specified next hop in each address family of the VPN instance.
  6. Run the display evpn vpn-instance [ name vpn-instance-name ] tunnel-info command to display information about the tunnels associated with EVPN instances.
  7. Run the display evpn vpn-instance name vpn-instance-name tunnel-info nexthop nexthopIpv4Addr command to display information about the tunnel that is associated with the specified EVPN instance and has the specified next-hop address.

Configuring SBFD for SR-MPLS TE Policy

This section describes how to configure seamless bidirectional forwarding detection (SBFD) for SR-MPLS TE Policy.

Usage Scenario

SBFD for SR-MPLS TE Policy can quickly detect segment list faults. If all the segment lists of a candidate path are faulty, SBFD triggers a hot-standby candidate path switchover to minimize the impact on services.

Pre-configuration Tasks

Before configuring SBFD for SR-MPLS TE Policy, complete the following tasks:

  • Configure an SR-MPLS TE Policy.

  • Run the mpls lsr-id lsr-id command to configure an LSR ID and ensure that the route from the peer to the local address specified using lsr-id is reachable.

Procedure

  • Configure an SBFD initiator.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is enabled globally.

      BFD can be configured only after this function is enabled globally using the bfd command.

    3. Run quit

      Return to the system view.

    4. Run sbfd

      SBFD is enabled globally, and the SBFD view is displayed.

    5. (Optional) Run destination ipv4 ip-address remote-discriminator discriminator-value

      The mapping between the SBFD reflector IP address and discriminator is configured.

      On the device functioning as an SBFD initiator, if the mapping between the SBFD reflector IP address and discriminator is configured using the destination ipv4 remote-discriminator command, the initiator uses the configured discriminator to negotiate with the reflector in order to establish an SBFD session. If such a mapping is not configured, the SBFD initiator uses the reflector IP address as a discriminator by default to complete the negotiation.

      This step is optional. If it is performed, the value of discriminator-value must be the same as that of unsigned-integer-value in the reflector discriminator command configured on the reflector.

    6. Run quit

      Return to the system view.

    7. Run segment-routing

      SR is enabled globally and the Segment Routing view is displayed.

    8. Configure SBFD for SR-MPLS TE Policy.

      SBFD for SR-MPLS TE Policy supports both global configuration and single-policy configuration. You can select either of the two modes.

      • Global configuration

        1. Run sr-te-policy seamless-bfd enable

          SBFD is enabled for all SR-MPLS TE Policies.

        2. (Optional) Run sr-te-policy seamless-bfd { min-rx-interval receive-interval | min-tx-interval transmit-interval | detect-multiplier multiplier-value } *

          SBFD parameters are set for the SR-MPLS TE Policies.

          In the SBFD scenario, the min-rx-interval receive-interval parameter will not take effect.

      • Single-policy configuration

        1. Run sr-te policy policy-name

          The SR-MPLS TE Policy view is displayed.

        2. Run seamless-bfd { enable | disable }

          SBFD is enabled or disabled for the SR-MPLS TE Policy.

        3. Run quit

          Return to the SR view.

      If an SR-MPLS TE Policy has both global configuration and single-policy configuration, the single-policy configuration takes effect.

    9. (Optional) Run sr-te-policy delete-delay delete-delay-value

      A delay in deleting SR-MPLS TE Policies is configured.
    10. Run commit

      The configuration is committed.
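    For reference, the initiator-side steps above can be combined into the following configuration sketch. The policy name policy1, reflector address 10.2.2.2, and discriminator 20 are example values, and the view prompts are indicative; the discriminator must match the reflector-side configuration.

      <HUAWEI> system-view
      [~HUAWEI] sbfd
      [*HUAWEI-sbfd] destination ipv4 10.2.2.2 remote-discriminator 20
      [*HUAWEI-sbfd] quit
      [*HUAWEI] segment-routing
      [*HUAWEI-segment-routing] sr-te policy policy1
      [*HUAWEI-segment-routing-te-policy-policy1] seamless-bfd enable
      [*HUAWEI-segment-routing-te-policy-policy1] quit
      [*HUAWEI-segment-routing] commit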

  • Configure an SBFD reflector.
    1. Run system-view

      The system view is displayed.

    2. Run bfd

      BFD is enabled globally.

      BFD can be configured only after this function is enabled globally using the bfd command.

    3. Run quit

      Return to the system view.

    4. Run sbfd

      SBFD is enabled globally, and the SBFD view is displayed.

    5. Run reflector discriminator { unsigned-integer-value | ip-address-value }

      A discriminator is configured for the SBFD reflector.

    6. Run commit

      The configuration is committed.
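    For reference, the reflector-side steps above can be sketched as follows. The discriminator 20 is an example value and must be the same as the one configured in the initiator-side mapping.

      <HUAWEI> system-view
      [~HUAWEI] bfd
      [*HUAWEI-bfd] quit
      [*HUAWEI] sbfd
      [*HUAWEI-sbfd] reflector discriminator 20
      [*HUAWEI-sbfd] commit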

Verifying the Configuration

After SBFD for SR-MPLS TE Policy is configured, run the display sr-te policy [ endpoint ipv4-address color color-value | policy-name name-value ] command to check SBFD status.

Configuring Hot Standby for SR-MPLS TE Policy

This section describes how to configure hot standby (HSB) for SR-MPLS TE Policy.

Usage Scenario

SBFD for SR-MPLS TE Policy can quickly detect segment list faults. If all the segment lists of the primary path are faulty, SBFD triggers a candidate path HSB switchover to reduce impacts on services.

Pre-configuration Tasks

Before configuring HSB for SR-MPLS TE Policy, configure one or more SR-MPLS TE Policies.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run segment-routing

    SR is enabled globally and the Segment Routing view is displayed.

  3. Enable HSB for SR-MPLS TE Policy.

    HSB for SR-MPLS TE Policy supports both global configuration and individual configuration. Select either of the following configuration modes as needed:

    • Global configuration

      1. Run sr-te-policy backup hot-standby enable

        HSB is enabled for all SR-MPLS TE Policies.

    • Individual configuration

      1. Run sr-te policy policy-name

        The SR-MPLS TE Policy view is displayed.

      2. Run backup hot-standby { enable | disable }

        HSB is enabled or disabled for a single SR-MPLS TE Policy.

    If an SR-MPLS TE Policy has both global configuration and individual configuration, the individual configuration takes effect.

  4. Run commit

    The configuration is committed.
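  For reference, the global configuration mode above can be sketched as follows (the device name is an example):

    <HUAWEI> system-view
    [~HUAWEI] segment-routing
    [*HUAWEI-segment-routing] sr-te-policy backup hot-standby enable
    [*HUAWEI-segment-routing] commit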

Redirecting a Public IPv4 BGP FlowSpec Route to an SR-MPLS TE Policy

Redirecting a public IPv4 BGP FlowSpec route to an SR-MPLS TE Policy helps implement more accurate traffic filtering.

Context

In traditional BGP FlowSpec-based traffic optimization, traffic transmitted over paths with the same source and destination nodes can be redirected to only one path, which does not achieve accurate traffic steering. With the function to redirect a public IPv4 BGP FlowSpec route to an SR-MPLS TE Policy, a device can redirect traffic transmitted over paths with the same source and destination nodes to different SR-MPLS TE Policies.

Redirecting a Public IPv4 BGP FlowSpec Route to an SR-MPLS TE Policy (Manual Configuration)

Manually generate a BGP FlowSpec route and configure redirection rules to redirect the route to an SR-MPLS TE Policy.

Usage Scenario

If no controller is deployed, perform the following operations to manually redirect a public IPv4 BGP FlowSpec route to an SR-MPLS TE Policy:

  1. Manually configure an SR-MPLS TE Policy.
  2. Manually configure a BGP FlowSpec route and define redirection rules. BGP FlowSpec route redirection is based on <Redirection IP address, Color>. If the redirection IP address and color attributes of a BGP FlowSpec route match the endpoint and color attributes of an SR-MPLS TE Policy, the route can be successfully redirected to the SR-MPLS TE Policy.
  3. To enable the device to advertise the BGP FlowSpec route to another device, configure a BGP peer relationship in the BGP-Flow address family.

Prerequisites

Before redirecting a public IPv4 BGP FlowSpec route to an SR-MPLS TE Policy, complete the following tasks:

Procedure

  • Configure a BGP FlowSpec route.

    1. Run system-view

      The system view is displayed.

    2. Run flow-route flowroute-name

      A static BGP FlowSpec route is created, and the Flow-Route view is displayed.

    3. (Optional) Configure if-match clauses. For details, see "BGP Flow Specification Configuration" in Configuration - Security.
    4. Run apply redirect ip redirect-ip-rt color colorvalue
      The device is enabled to precisely redirect the traffic that matches the if-match clauses to the SR-MPLS TE Policy.

      To enable the device to process the redirection next hop attribute that is received from a peer and configured using the apply redirect ip redirect-ip-rt color colorvalue command, run the peer redirect ip command.

    5. Run commit

      The configuration is committed.
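    For reference, the steps above can be sketched as follows. The flow route name fr1, matched destination prefix, redirection IP address 3.3.3.9, and color 100 are example values, and the view prompts are indicative; for the if-match clause syntax, see "BGP Flow Specification Configuration" in Configuration - Security.

      <HUAWEI> system-view
      [~HUAWEI] flow-route fr1
      [*HUAWEI-flow-route-fr1] if-match destination 10.10.10.0 24
      [*HUAWEI-flow-route-fr1] apply redirect ip 3.3.3.9 color 100
      [*HUAWEI-flow-route-fr1] commit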

  • (Optional) Configure a BGP peer relationship in the BGP-Flow address family.

    Establish a BGP FlowSpec peer relationship between the ingress of the SR-MPLS TE Policy and the device on which the BGP FlowSpec route is manually generated. If the BGP FlowSpec route is manually generated on the ingress of the SR-MPLS TE Policy, skip this step.

    1. Run system-view

      The system view is displayed.

    2. Run bgp as-number

      The BGP view is displayed.

    3. Run ipv4-family flow

      The BGP-Flow address family view is displayed.

    4. Run peer ipv4-address enable

      The BGP FlowSpec peer relationship is enabled.

      After the BGP FlowSpec peer relationship is established in the BGP-Flow address family view, the manually generated BGP FlowSpec route is automatically imported to the BGP-Flow routing table and then sent to each peer.

    5. Run peer ipv4-address redirect ip

      The device is enabled to process the BGP FlowSpec route's redirection next hop attribute that is received from a peer and configured using the apply redirect ip command.

    6. Run commit

      The configuration is committed.
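    For reference, the BGP-Flow address family configuration above can be sketched as follows. The AS number 100 and peer address 1.1.1.9 are example values.

      <HUAWEI> system-view
      [~HUAWEI] bgp 100
      [*HUAWEI-bgp] ipv4-family flow
      [*HUAWEI-bgp-af-flow] peer 1.1.1.9 enable
      [*HUAWEI-bgp-af-flow] peer 1.1.1.9 redirect ip
      [*HUAWEI-bgp-af-flow] commit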

Verifying the Configuration

After configuring the redirection, verify the configuration.

  • Run the display bgp flow peer [ [ ipv4-address ] verbose ] command to check information about BGP FlowSpec peers.

  • Run the display bgp flow routing-table command to check BGP FlowSpec routing information.

  • Run the display flowspec statistics reindex command to check statistics about traffic transmitted over BGP FlowSpec routes.

Redirecting a Public IPv4 BGP FlowSpec Route to an SR-MPLS TE Policy (Dynamic Delivery by a Controller)

After a controller dynamically delivers a BGP FlowSpec route and an SR-MPLS TE Policy to a forwarder, the forwarder needs to redirect the BGP FlowSpec route to the SR-MPLS TE Policy.

Usage Scenario

If a controller is deployed, the controller can be used to deliver a public IPv4 BGP FlowSpec route and an SR-MPLS TE Policy. The procedure involved in redirecting the public IPv4 BGP FlowSpec route to the SR-MPLS TE Policy is as follows:

  1. Establish a BGP-LS peer relationship between the controller and forwarder, enabling the controller to collect network information, such as topology and label information, through BGP-LS.

  2. Establish a BGP IPv4 SR-MPLS TE Policy peer relationship between the controller and forwarder, enabling the controller to deliver a dynamically computed SR-MPLS TE Policy path to the ingress through the peer relationship. After receiving the SR-MPLS TE Policy, the ingress generates corresponding entries.

  3. Establish a BGP-Flow address family peer relationship between the controller and forwarder, enabling the controller to dynamically deliver a BGP FlowSpec route.

The following procedure focuses on forwarder configurations.

Prerequisites

Before redirecting a public IPv4 BGP FlowSpec route to an SR-MPLS TE Policy, complete the following tasks:

Procedure

  1. Establish BGP-LS and BGP IPv4 SR-MPLS TE Policy peer relationships. For details, see Configuring a BGP IPv4 SR-MPLS TE Policy Peer Relationship Between a Controller and a Forwarder.
  2. Establish a BGP-Flow address family peer relationship.
    1. Run system-view

      The system view is displayed.

    2. Run bgp as-number

      The BGP view is displayed.

    3. Run ipv4-family flow

      The BGP-Flow address family view is displayed.

    4. Run peer ipv4-address enable

      The BGP FlowSpec peer relationship is enabled.

      After the BGP FlowSpec peer relationship is established in the BGP-Flow address family view, the manually created BGP FlowSpec route is automatically imported to the BGP-Flow routing table and then sent to each BGP FlowSpec peer.

    5. Run peer ipv4-address redirect ip

      The device is enabled to process the BGP FlowSpec route's redirection next hop attribute that is received from a peer and configured using the apply redirect ip command.

    6. Run redirect ip recursive-lookup tunnel [ tunnel-selector tunnel-selector-name ]

      The device is enabled to recurse the received routes that carry the redirection next hop attribute to tunnels.

    7. Run commit

      The configuration is committed.
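    For reference, the forwarder-side BGP-Flow configuration above can be sketched as follows. The AS number 100 and controller address 10.1.1.1 are example values.

      <HUAWEI> system-view
      [~HUAWEI] bgp 100
      [*HUAWEI-bgp] ipv4-family flow
      [*HUAWEI-bgp-af-flow] peer 10.1.1.1 enable
      [*HUAWEI-bgp-af-flow] peer 10.1.1.1 redirect ip
      [*HUAWEI-bgp-af-flow] redirect ip recursive-lookup tunnel
      [*HUAWEI-bgp-af-flow] commit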

Verifying the Configuration

After configuring the redirection, verify the configuration.

  • Run the display bgp flow peer [ [ ipv4-address ] verbose ] command to check information about BGP FlowSpec peers.

  • Run the display bgp flow routing-table command to check BGP FlowSpec routing information.

  • Run the display flowspec statistics reindex command to check statistics about traffic transmitted over BGP FlowSpec routes.

Configuring IS-IS TI-LFA FRR

This section describes how to configure IS-IS TI-LFA FRR.

Usage Scenario

With the development of networks, VoIP and online video services require high-quality real-time transmission. However, if an IS-IS fault occurs, multiple processes, including fault detection, LSP update, LSP flooding, route calculation, and FIB entry delivery, must be performed to switch traffic to a new link. As a result, the traffic interruption time exceeds 50 ms, failing to satisfy real-time service requirements.

TI-LFA FRR provides link and node protection for Segment Routing (SR) tunnels. If a link or node fails, traffic is rapidly switched to a backup path, which minimizes traffic loss.

In some LFA or RLFA scenarios, the P space and Q space do not share nodes or have direct neighbors. If a link or node fails, no backup path can be calculated, causing traffic loss and resulting in a failure to meet reliability requirements. In this situation, TI-LFA can be used.

Pre-configuration Tasks

Before configuring IS-IS TI-LFA FRR, complete the following tasks:

  • Configure addresses for interfaces to ensure that neighboring devices are reachable at the network layer.

  • Configure basic IPv4 IS-IS functions.

  • Globally enable the Segment Routing capability.
  • Enable the Segment Routing capability in an IS-IS process.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run isis [ process-id ]

    An IS-IS process is created, and the IS-IS view is displayed.

  3. Run frr

    The IS-IS FRR view is displayed.

  4. Run loop-free-alternate [ level-1 | level-2 | level-1-2 ]

    IS-IS LFA is enabled, and LFA links can be generated.

  5. Run ti-lfa [ remote-srlg ] [ level-1 | level-2 | level-1-2 ]

    IS-IS TI-LFA is enabled.

  6. (Optional) Run inter-level-protect level-1 [ prefer ]

    Inter-level protection is enabled in IS-IS Level-1.

    By default, IS-IS TI-LFA computes backup paths only in the same IS-IS level. After the inter-level-protect level-1 command is run, if no TI-LFA backup path exists in IS-IS Level-1, inter-level TI-LFA backup path computation is performed.

    If the prefer parameter is specified, an inter-level TI-LFA backup path is preferentially selected even if there is a TI-LFA backup path in IS-IS Level-1.

  7. (Optional) Run tiebreaker { node-protecting | lowest-cost | srlg-disjoint | hold-max-cost } preference preference [ level-1 | level-2 | level-1-2 ]

    An IS-IS TI-LFA FRR tiebreaker is configured for backup path computation.

    A larger value indicates a higher preference.

    Before configuring the srlg-disjoint parameter, you need to run the isis srlg srlg-value command in the IS-IS interface view to configure the IS-IS SRLG function.

  8. (Optional) After completing the preceding configuration, IS-IS TI-LFA is enabled on all IS-IS interfaces. If you do not want to enable IS-IS TI-LFA on some interfaces, perform the following operations:
    1. Run quit

      Exit the IS-IS FRR view.

    2. Run quit

      Exit the IS-IS view.

    3. Run interface interface-type interface-number

      The interface view is displayed.

    4. Run isis [ process-id process-id ] ti-lfa disable [ level-1 | level-2 | level-1-2 ]

      TI-LFA is disabled on the interface.

  9. If a network fault occurs or is rectified, an IGP performs route convergence. A transient forwarding status inconsistency between nodes results in different convergence rates on devices, posing the risk of microloops. To prevent microloops, perform the following steps:

    1. Run quit

      Exit the interface view.

    2. Run isis [ process-id ]

      The IS-IS view is displayed.

    3. Run avoid-microloop frr-protected

      IS-IS local microloop avoidance is enabled.

    4. (Optional) Run avoid-microloop frr-protected rib-update-delay rib-update-delay

      The delay after which IS-IS delivers routes is configured.

    5. Run avoid-microloop segment-routing

      IS-IS remote microloop avoidance is enabled.

    6. (Optional) Run avoid-microloop segment-routing rib-update-delay rib-update-delay

      The delay in delivering IS-IS routes in an SR scenario is set.

  10. Run commit

    The configuration is committed.
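For reference, the core steps above can be sketched as follows. The process ID 1, the level-1 parameter, and the device name are example values; interface-level disabling and microloop avoidance delays are omitted.

  <HUAWEI> system-view
  [~HUAWEI] isis 1
  [*HUAWEI-isis-1] frr
  [*HUAWEI-isis-1-frr] loop-free-alternate level-1
  [*HUAWEI-isis-1-frr] ti-lfa level-1
  [*HUAWEI-isis-1-frr] quit
  [*HUAWEI-isis-1] avoid-microloop frr-protected
  [*HUAWEI-isis-1] avoid-microloop segment-routing
  [*HUAWEI-isis-1] commit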

Verifying the Configuration

After configuring IS-IS TI-LFA FRR, run the display isis route [ process-id ] [ level-1 | level-2 ] [ verbose ] command to check information about the primary and backup links.

Configuring OSPF TI-LFA FRR

This section describes how to configure OSPF TI-LFA FRR.

Usage Scenario

For some large networks, especially for the networks where the P space and Q space neither intersect nor have directly connected neighbors, if a link or node fails, LFA and RLFA cannot compute a backup path, causing traffic loss and failing to meet reliability requirements. To resolve this issue, TI-LFA is introduced.

TI-LFA FRR provides link and node protection for SR tunnels. If a link or node fails, it enables traffic to be rapidly switched to a backup path, minimizing traffic loss.

Pre-configuration Tasks

Before configuring OSPF TI-LFA FRR, complete the following tasks:

  • Configure addresses for interfaces to ensure that neighboring devices are reachable at the network layer.

  • Configure basic OSPF functions.

  • Enable SR globally.
  • Enable SR for the corresponding OSPF process.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run ospf [ process-id ]

    An OSPF process is created, and the OSPF view is displayed.

  3. Run frr

    The OSPF FRR view is displayed.

  4. Run loop-free-alternate

    OSPF LFA is enabled, and LFA links can be generated.

  5. Run ti-lfa enable

    OSPF TI-LFA is enabled.

  6. (Optional) Run tiebreaker { node-protecting | lowest-cost | ldp-sync hold-max-cost | srlg-disjoint } preference preference

    An OSPF TI-LFA FRR tiebreaker for backup path computation is configured.

    A larger value indicates a higher preference.

    Before configuring the srlg-disjoint parameter, you need to run the ospf srlg srlg-value command in the OSPF interface view to configure the OSPF SRLG function.

  7. (Optional) After completing the preceding configuration, OSPF TI-LFA is enabled on all OSPF interfaces. If you do not want to enable OSPF TI-LFA on some interfaces, perform the following operations:
    1. Run quit

      Exit the OSPF FRR view.

    2. Run quit

      Exit the OSPF view.

    3. Run interface interface-type interface-number

      The interface view is displayed.

    4. To disable OSPF TI-LFA on a specified interface, run either of the following commands:

      • For a common interface, run the ospf ti-lfa disable command.
      • For a multi-area interface, run the ospf ti-lfa disable multi-area area-id command.

  8. If a network fault occurs or is rectified, an IGP performs route convergence. A transient forwarding status inconsistency between nodes results in different convergence rates on devices, posing the risk of microloops. To prevent microloops, perform the following steps:

    1. Run quit

      Exit the interface view.

    2. Run ospf [ process-id ]

      The OSPF view is displayed.

    3. Run avoid-microloop frr-protected

      OSPF local microloop avoidance is enabled.

    4. (Optional) Run avoid-microloop frr-protected rib-update-delay rib-update-delay

      The delay after which OSPF delivers routes is configured.

    5. Run avoid-microloop segment-routing

      OSPF remote microloop avoidance is enabled.

    6. (Optional) Run avoid-microloop segment-routing rib-update-delay rib-update-delay

      A delay in delivering OSPF routes in an SR scenario is set.

  9. Run commit

    The configuration is committed.
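For reference, the core steps above can be sketched as follows. The process ID 1 and the device name are example values; interface-level disabling and microloop avoidance delays are omitted.

  <HUAWEI> system-view
  [~HUAWEI] ospf 1
  [*HUAWEI-ospf-1] frr
  [*HUAWEI-ospf-1-frr] loop-free-alternate
  [*HUAWEI-ospf-1-frr] ti-lfa enable
  [*HUAWEI-ospf-1-frr] quit
  [*HUAWEI-ospf-1] avoid-microloop frr-protected
  [*HUAWEI-ospf-1] avoid-microloop segment-routing
  [*HUAWEI-ospf-1] commit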

Verifying the Configuration

After completing all OSPF TI-LFA FRR configurations, run the display ospf [ process-id ] segment-routing routing [ ip-address [ mask | mask-length ] ] command to check OSPF SR routing table information.

Configuring a Device as the Egress of an MPLS-in-UDP Tunnel

This section describes how to configure a device as the egress of an MPLS-in-UDP tunnel.

Usage Scenario

MPLS in UDP is a DCN overlay technology that encapsulates MPLS or SR-MPLS packets into UDP packets, allowing the packets to traverse some networks that do not support MPLS or SR-MPLS.

Configuring a device as the egress of an MPLS-in-UDP tunnel only enables the device to properly process received MPLS-in-UDP packets. Running the mpls-in-udp command does not trigger the establishment of an MPLS-in-UDP tunnel.

Pre-configuration Tasks

Before configuring a device as the egress of an MPLS-in-UDP tunnel, configure an SR-MPLS tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mpls-in-udp

    The MPLS-in-UDP capability is enabled, and the MPLS-in-UDP view is displayed.

  3. Run source-ip-validate list destination-ip ip-addr

    A destination IP address is specified for the IP address verification list, and the MPLS-in-UDP-List view is displayed.

  4. Run source-ip ip-addr

    A source IP address is specified for the IP address verification list.

  5. Run validate-list enable

    The device is enabled to verify the source IP address mapped to the specified destination IP address.

    Source address verification secures MPLS-in-UDP tunnels. After the validate-list enable command is run and the device receives an MPLS-in-UDP packet, the device verifies the source IP address. The device discards the packet if the verification fails.

  6. Run commit

    The configuration is committed.
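For reference, the egress-side steps above can be sketched as follows. The destination address 1.1.1.9, source address 3.3.3.9, and the view prompts are example values.

  <HUAWEI> system-view
  [~HUAWEI] mpls-in-udp
  [*HUAWEI-mpls-in-udp] source-ip-validate list destination-ip 1.1.1.9
  [*HUAWEI-mpls-in-udp-list] source-ip 3.3.3.9
  [*HUAWEI-mpls-in-udp-list] validate-list enable
  [*HUAWEI-mpls-in-udp-list] commit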

Configuring a Device as the Egress of an MPLS-in-UDP6 Tunnel

This section describes how to configure a device as the egress of an MPLS-in-UDP6 tunnel.

Usage Scenario

MPLS in UDP6 is a DCN overlay technology that encapsulates MPLS or SR-MPLS packets into UDP packets, allowing the packets to traverse some networks that do not support MPLS or SR-MPLS.

Configuring a device as the egress of an MPLS-in-UDP6 tunnel only enables the device to properly process received MPLS-in-UDP6 packets. Running the mpls-in-udp6 command does not trigger the establishment of an MPLS-in-UDP6 tunnel.

Pre-configuration Tasks

Before configuring a device as the egress of an MPLS-in-UDP6 tunnel, configure an SR-MPLS tunnel.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mpls-in-udp6

    The MPLS-in-UDP6 capability is enabled.

  3. Run commit

    The configuration is committed.

Configuring an SR-MPLS TTL Processing Mode

SR-MPLS supports both uniform and pipe modes for TTL processing.

Context

When IP packets pass through an MPLS network, the TTL fields in an MPLS packet transmitted over an SR-MPLS public network tunnel are processed in either of the following modes:

  • Uniform mode: The ingress decrements the IP TTL by 1 and maps it to the MPLS TTL field. Then, the packet is processed in standard TTL processing mode on the MPLS network. After receiving the packet, the egress decrements the MPLS TTL by 1, compares it with the IP TTL, and then maps the smaller of these two values to the IP TTL field.
  • Pipe mode: The ingress decrements the IP TTL by 1 and sets the MPLS TTL to a fixed value. Then, the packet is processed in standard TTL processing mode on the MPLS network. After receiving the packet, the egress decrements the IP TTL by 1. To summarize, the IP TTL in a packet passing through an MPLS network is decremented by 1 only on the ingress and egress, regardless of how many hops exist between the ingress and egress.

This configuration applies to SR-MPLS BE and SR-MPLS TE tunnels as well as SR-MPLS TE Policies.

Procedure

  1. Run the system-view command to enter the system view.
  2. Run the mpls command to enter the MPLS view.
  3. Run the mpls sr ttl-mode { pipe | uniform } command to configure a TTL processing mode for SR-MPLS tunnels established based on node labels.
  4. Run the mpls sr adjacency ttl-mode { pipe | uniform } command to configure a TTL processing mode for SR-MPLS tunnels established based on adjacency labels.
  5. Run the commit command to commit the configuration.
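For reference, the steps above can be sketched as follows. The choice of uniform mode in both commands is an example; select pipe or uniform per tunnel type as needed.

  <HUAWEI> system-view
  [~HUAWEI] mpls
  [*HUAWEI-mpls] mpls sr ttl-mode uniform
  [*HUAWEI-mpls] mpls sr adjacency ttl-mode uniform
  [*HUAWEI-mpls] commit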

Maintaining SR-MPLS

This section describes SR-MPLS maintenance functions.

Configuring SR-MPLS BE Traffic Statistics Collection

This section describes how to configure SR LSP (that is, SR-MPLS BE tunnel) traffic statistics collection.

Context

The SR-MPLS BE traffic statistics collection function helps you collect statistics about traffic forwarded by an SR-MPLS BE tunnel.

Procedure

  1. Run the system-view command to enter the system view.
  2. Run the segment-routing command to enter the Segment Routing view.
  3. Run the traffic-statistics enable host [ ip-prefix ip-prefix-name ] command to enable SR-MPLS BE traffic statistics collection.
  4. Run the commit command to commit the configuration.
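For reference, the steps above can be sketched as follows (the optional ip-prefix filter is omitted):

  <HUAWEI> system-view
  [~HUAWEI] segment-routing
  [*HUAWEI-segment-routing] traffic-statistics enable host
  [*HUAWEI-segment-routing] commit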

Verifying the Configuration

After configuring SR-MPLS BE traffic statistics collection, verify the configuration.
  • Run the display segment-routing traffic-statistics [ ip-address mask-length ] [ verbose ] command to check SR-MPLS BE traffic statistics.

Follow-up Procedure

To clear traffic statistics before re-collection, run the reset segment-routing traffic-statistics [ ip-address mask-length ] command.

Configuring SR-MPLS Flex-Algo LSP Traffic Statistics Collection

This section describes how to configure SR-MPLS Flex-Algo LSP traffic statistics collection.

Context

Traffic statistics collection for an SR-MPLS Flex-Algo LSP helps you collect statistics about the traffic forwarded through the SR-MPLS Flex-Algo LSP.

Procedure

  1. Run the system-view command to enter the system view.
  2. Run the segment-routing command to enter the Segment Routing view.
  3. Run the traffic-statistics flex-algo exclude { flex-algo-begin [ to flex-algo-end ] } &<1-10> command to enable traffic statistics collection for a specified SR-MPLS Flex-Algo LSP.

    By default, traffic statistics collection is enabled for all SR-MPLS Flex-Algo LSPs. You can use flex-algo-begin [ to flex-algo-end ] to exclude some Flex-Algo LSPs that do not require traffic statistics collection.

  4. Run the commit command to commit the configuration.

Verifying the Configuration

After configuring SR-MPLS Flex-Algo LSP traffic statistics collection, verify the configuration.
  • Run the display segment-routing traffic-statistics [ ip-address mask-length ] [ flex-algo [ flexAlgoId ] ] [ verbose ] command to check SR-MPLS Flex-Algo LSP traffic statistics.

Follow-up Procedure

To clear SR-MPLS Flex-Algo LSP traffic statistics before re-collection, run the reset segment-routing traffic-statistics [ ip-address mask-length ] flex-algo [ flexAlgoId ] command.

Configuring SR-MPLS TE Policy Traffic Statistics Collection

This section describes how to configure SR-MPLS TE Policy traffic statistics collection.

Context

Traffic statistics collection for an SR-MPLS TE Policy helps you collect statistics about traffic forwarded through the SR-MPLS TE Policy.

This function supports both global configuration and individual configuration. If global configuration and individual configuration both exist, the individual configuration takes effect.

Perform the following steps on the router:

Procedure

  • Global configuration
    1. Run system-view

      The system view is displayed.

    2. Run segment-routing

      The Segment Routing view is displayed.

    3. Run sr-te-policy traffic-statistics enable

      Traffic statistics collection is enabled for all SR-MPLS TE Policies.

    4. Run commit

      The configuration is committed.

  • Individual configuration
    1. Run system-view

      The system view is displayed.

    2. Run segment-routing

      The Segment Routing view is displayed.

    3. Run sr-te policy policy-name

      The SR-MPLS TE Policy view is displayed.

    4. Run traffic-statistics { enable | disable }

      Traffic statistics collection is enabled or disabled for the SR-MPLS TE Policy.

    5. Run commit

      The configuration is committed.
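    For reference, the individual configuration mode above can be sketched as follows. The policy name policy1 and the view prompts are example values.

      <HUAWEI> system-view
      [~HUAWEI] segment-routing
      [*HUAWEI-segment-routing] sr-te policy policy1
      [*HUAWEI-segment-routing-te-policy-policy1] traffic-statistics enable
      [*HUAWEI-segment-routing-te-policy-policy1] commit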

Verifying the Configuration

After configuring SR-MPLS TE Policy traffic statistics collection, run the display sr-te policy traffic-statistics [ endpoint ipv4-address color color-value | policy-name name-value | binding-sid binding-sid ] command to check collected SR-MPLS TE Policy traffic statistics.

Follow-up Procedure

To clear existing SR-MPLS TE Policy traffic statistics, run the reset sr-te policy traffic-statistics [ endpoint ipv4-address color color-value | policy-name name-value | binding-sid binding-sid ] command.

Configuring SR-MPLS TE Policy-related Alarm Thresholds

You can configure SR-MPLS TE Policy-related alarm thresholds, enabling the device to generate an alarm when the number of SR-MPLS TE Policies or segment lists reaches the specified threshold. This facilitates O&M.

Context

If the number of SR-MPLS TE Policies or segment lists exceeds the upper limit, adding new SR-MPLS TE Policies or segment lists may affect existing services. To prevent this, you can configure alarm thresholds for the number of SR-MPLS TE Policies and the number of segment lists. In this way, when the number of SR-MPLS TE Policies or segment lists reaches the specified threshold, an alarm is generated to notify users to handle the problem.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run segment-routing

    Segment Routing is enabled, and the Segment Routing view is displayed.

  3. Run sr-te-policy segment-list-number threshold-alarm upper-limit upperLimitValue lower-limit lowerLimitValue

    An alarm threshold is configured for the number of segment lists.

  4. Run sr-te-policy policy-number threshold-alarm upper-limit upperLimitValue lower-limit lowerLimitValue

    An alarm threshold is configured for the number of SR-MPLS TE Policies.

  5. Run commit

    The configuration is committed.
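For reference, the steps above can be sketched as follows. The upper-limit value 80 and lower-limit value 60 are example values.

  <HUAWEI> system-view
  [~HUAWEI] segment-routing
  [*HUAWEI-segment-routing] sr-te-policy segment-list-number threshold-alarm upper-limit 80 lower-limit 60
  [*HUAWEI-segment-routing] sr-te-policy policy-number threshold-alarm upper-limit 80 lower-limit 60
  [*HUAWEI-segment-routing] commit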

Configuring Examples for SR-MPLS BE

This section provides several configuration examples of SR-MPLS BE.

Example for Configuring L3VPN over IS-IS SR-MPLS BE

L3VPN services are configured to allow users within the same VPN to securely access each other.

Networking Requirements

On the network shown in Figure 1-2652:
  • CE1 and CE2 belong to vpna.

  • The VPN-target attribute of vpna is 111:1.

L3VPN services recurse to an IS-IS SR-MPLS BE tunnel to allow users within the same VPN to securely access each other. Since multiple links exist between PEs on a public network, traffic needs to be balanced on the public network.

Figure 1-2652 L3VPN recursive to an IS-IS SR-MPLS BE tunnel

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Configuration Notes

During the configuration process, note the following:

After a VPN instance is bound to a PE interface connected to a CE, Layer 3 configurations on this interface, such as IP address and routing protocol configurations, are automatically deleted. Add these configurations again if necessary.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the backbone network to ensure that PEs can interwork with each other.

  2. Configure MPLS and Segment Routing on the backbone network and establish SR LSPs. Enable TI-LFA FRR.

  3. Configure IPv4 address family VPN instances on the PEs and bind each interface that connects a PE to a CE to a VPN instance.

  4. Establish an MP-IBGP peer relationship between the PEs for them to exchange routing information.

  5. Establish EBGP peer relationships between the PEs and CEs for them to exchange routing information.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of the PEs and P

  • vpna's VPN-target and RD

  • SRGB ranges on the PEs and P

Procedure

  1. Configure IP addresses for interfaces.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 172.18.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ip address 172.16.1.1 24
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 172.16.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 172.17.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 172.19.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 172.17.1.2 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 172.18.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 172.19.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP on the backbone network for the PEs and Ps to communicate. IS-IS is used as an example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] isis enable 1
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] isis enable 1
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit
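As a reference for the network-entity values configured above, an IS-IS NET consists of a variable-length area ID, a 6-byte system ID, and a 1-byte SEL (00 on a router). A minimal Python sketch of how such a NET breaks down (the helper name is illustrative and not part of the device CLI):

```python
def parse_net(net: str):
    """Split an IS-IS NET into (area ID, system ID, SEL).

    The last dot-separated group (2 hex digits) is the SEL, the
    preceding three groups (6 bytes total) form the system ID,
    and everything before that is the area ID.
    """
    parts = net.split(".")
    sel = parts[-1]                     # NSAP selector, 00 for a router
    system_id = ".".join(parts[-4:-1])  # 6-byte system ID
    area_id = ".".join(parts[:-4])      # variable-length area ID
    return area_id, system_id, sel

# PE1's NET from this step: area 10, system ID 0000.0000.0001
print(parse_net("10.0000.0000.0001.00"))
```

All four devices share area ID 10, which is why a single Level-1 area is sufficient here.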

  3. Configure basic MPLS functions on the backbone network.

    Because MPLS is automatically enabled on interfaces where IS-IS has been enabled, you do not need to configure MPLS on such interfaces.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Configure Segment Routing on the backbone network and enable TI-LFA FRR.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-isis-1] frr
    [*PE1-isis-1-frr] loop-free-alternate level-1
    [*PE1-isis-1-frr] ti-lfa level-1
    [*PE1-isis-1-frr] quit
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid index 10
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P1-isis-1] frr
    [*P1-isis-1-frr] loop-free-alternate level-1
    [*P1-isis-1-frr] ti-lfa level-1
    [*P1-isis-1-frr] quit
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis prefix-sid index 20
    [*P1-LoopBack1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE2-isis-1] frr
    [*PE2-isis-1-frr] loop-free-alternate level-1
    [*PE2-isis-1-frr] ti-lfa level-1
    [*PE2-isis-1-frr] quit
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid index 30
    [*PE2-LoopBack1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P2-isis-1] frr
    [*P2-isis-1-frr] loop-free-alternate level-1
    [*P2-isis-1-frr] ti-lfa level-1
    [*P2-isis-1-frr] quit
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis prefix-sid index 40
    [*P2-LoopBack1] quit
    [*P2] commit

    # After the configuration is complete, run the display tunnel-info all command on each PE. The command output shows that the SR LSPs have been established. The following example uses the command output on PE1.

    [~PE1] display tunnel-info all
    Tunnel ID            Type                Destination                             Status
    ----------------------------------------------------------------------------------------
    0x000000002900000003 srbe-lsp            4.4.4.9                                 UP  
    0x000000002900000004 srbe-lsp            2.2.2.9                                 UP  
    0x000000002900000005 srbe-lsp            3.3.3.9                                 UP 

    # Run a ping to check SR LSP connectivity on PE1. For example:

    [~PE1] ping lsp segment-routing ip 3.3.3.9 32 version draft2
      LSP PING FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.9/32 : 100  data bytes, press CTRL_C to break
        Reply from 3.3.3.9: bytes=100 Sequence=1 time=12 ms
        Reply from 3.3.3.9: bytes=100 Sequence=2 time=5 ms
        Reply from 3.3.3.9: bytes=100 Sequence=3 time=5 ms
        Reply from 3.3.3.9: bytes=100 Sequence=4 time=5 ms
        Reply from 3.3.3.9: bytes=100 Sequence=5 time=5 ms
    
      --- FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.9/32 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 5/6/12 ms
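The labels behind these SR-BE LSPs follow from the SRGB and the prefix SID indexes configured in this step: each node derives the label for a remote prefix as its SRGB start plus the advertised index (standard SR-MPLS behavior per RFC 8660). A small Python sketch of this calculation, for illustration only; the device performs it internally:

```python
SRGB_BASE = 16000  # start of the SRGB configured in this step

# Prefix SID indexes advertised on the loopbacks in this step
prefix_sid_index = {"PE1": 10, "P1": 20, "PE2": 30, "P2": 40}

def sr_label(node: str) -> int:
    # Label = SRGB base + prefix SID index
    return SRGB_BASE + prefix_sid_index[node]

print(sr_label("PE2"))  # label used network-wide to reach PE2's loopback
```

Because every node uses the same SRGB here, the label for a given loopback (for example, 16030 for PE2) is identical across the network, which is what makes SR-BE forwarding work without per-hop label signaling.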

  5. Establish an MP-IBGP peer relationship between the PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] ipv4-family vpnv4
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable
    [*PE1-bgp-af-vpnv4] commit
    [~PE1-bgp-af-vpnv4] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [~PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv4-family vpnv4
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
    [*PE2-bgp-af-vpnv4] commit
    [~PE2-bgp-af-vpnv4] quit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp peer or display bgp vpnv4 all peer command on each PE to check whether a BGP peer relationship has been established between the PEs. If the Established state is displayed in the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        2        6     0     00:00:12   Established   0
    [~PE1] display bgp vpnv4 all peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100   12      18         0     00:09:38   Established   0

  6. Configure VPN instances in the IPv4 address family on each PE and connect each PE to a CE.

    # Configure PE1.

    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    # Assign an IP address to each interface on CEs as shown in Figure 1-2652. The detailed configuration procedure is not provided here. For details, see Configuration Files.

    After the configuration is complete, run the display ip vpn-instance verbose command on the PEs to check VPN instance configurations. Check that each PE can successfully ping its connected CE.

    If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-name -a source-ip-address dest-ip-address command when pinging the CE connected to the remote PE. If no source IP address is specified, the ping operation may fail.
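The vpn-target 111:1 both setting configured above means each PE attaches route target (RT) 111:1 to the VPNv4 routes it exports and imports remote routes carrying that RT. A hedged Python sketch of this import check, modeling the BGP/MPLS VPN rule rather than the device implementation:

```python
def accept_route(route_rts, instance_import_rts):
    """A VPNv4 route is imported into a VPN instance when at least
    one RT attached to the route matches one of the instance's
    import RTs (RFC 4364 route-target semantics)."""
    return bool(set(route_rts) & set(instance_import_rts))

# vpna on both PEs: vpn-target 111:1 both
print(accept_route(["111:1"], ["111:1"]))   # route from the remote PE
print(accept_route(["222:2"], ["111:1"]))   # route from an unrelated VPN
```

Note that the RDs (100:1 and 200:1) differ per PE and only keep overlapping customer prefixes distinct; it is the RT match, not the RD, that controls import.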

  7. Configure a tunnel policy on each PE to preferentially select an SR LSP.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-lsp load-balance-number 2
    [*PE1-tunnel-policy-p1] quit
    [*PE1] commit
    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq sr-lsp load-balance-number 2
    [*PE2-tunnel-policy-p1] quit
    [*PE2] commit
    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] commit
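The tunnel policy above makes VPN traffic prefer SR LSPs and load-balance over up to two of them. A simplified Python model of how such a select-seq policy picks tunnels (an illustrative sketch of the selection rule, not the device's actual algorithm):

```python
def select_tunnels(select_seq, load_balance_number, available):
    """Model of a select-seq tunnel policy.

    select_seq: tunnel types in preference order.
    available: dict mapping tunnel type -> list of UP tunnels
               to the destination.
    Returns up to load_balance_number tunnels of the first
    type in select_seq that has at least one UP tunnel.
    """
    for tunnel_type in select_seq:
        tunnels = available.get(tunnel_type, [])
        if tunnels:
            return tunnels[:load_balance_number]
    return []

# Policy p1: tunnel select-seq sr-lsp load-balance-number 2
up = {"sr-lsp": ["srbe-lsp to 3.3.3.9"]}
print(select_tunnels(["sr-lsp"], 2, up))
```

With only one SR LSP to the remote PE, that single tunnel carries the traffic; if no tunnel of a listed type is up, the policy yields nothing and the VPN route cannot be iterated to a tunnel.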

  8. Establish EBGP peer relationships between the PEs and CEs.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ip address 10.11.1.1 32
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.1.1.2 as-number 100
    [*CE1-bgp] network 10.11.1.1 32
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to that of CE1 and is not provided here. For details, see "Configuration Files".

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv4-family vpn-instance vpna
    [*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
    [*PE1-bgp-vpna] commit
    [~PE1-bgp-vpna] quit
    [~PE1-bgp] quit

    The procedure for configuring PE2 is similar to the procedure for configuring PE1, and the detailed configuration is not provided here. For details, see "Configuration Files".

    After completing the configuration, run the display bgp vpnv4 vpn-instance peer command on the PEs to verify that the BGP peer relationships between the PEs and CEs have been established and are in the Established state.

    In the following example, the peer relationship between PE1 and CE1 is used.

    [~PE1] display bgp vpnv4 vpn-instance vpna peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
    
     VPN-Instance vpna, Router ID 1.1.1.9:
     Total number of peers : 1            Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      10.1.1.1        4   65410  11     9          0     00:06:37   Established  1

  9. Verify the configuration.

    # Run the display ip routing-table vpn-instance command on each PE to view the routes to CEs' loopback interfaces.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table: vpna
             Destinations : 7        Routes : 7
    Destination/Mask    Proto  Pre  Cost     Flags NextHop         Interface
         10.1.1.0/24    Direct 0    0        D     10.1.1.2        GigabitEthernet2/0/0
         10.1.1.2/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.1.1.255/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.11.1.1/32     EBGP   255  0        RD    10.1.1.1        GigabitEthernet2/0/0
       10.22.2.2/32     IBGP   255  0        RD    3.3.3.9         GigabitEthernet1/0/0
                        IBGP   255  0        RD    3.3.3.9         GigabitEthernet3/0/0
    255.255.255.255/32  Direct 0    0        D     127.0.0.1       InLoopBack0

    CEs within the same VPN can ping each other. For example, CE1 successfully pings CE2 at 10.22.2.2.

    [~CE1] ping -a 10.11.1.1 10.22.2.2
      PING 10.22.2.2: 56  data bytes, press CTRL_C to break
        Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
        Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=34 ms
        Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=34 ms
      --- 10.22.2.2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 34/48/72 ms  

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 100:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     segment-routing mpls
     segment-routing global-block 16000 23999
     frr
      loop-free-alternate level-1
      ti-lfa level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.1.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 172.16.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 10
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 3.3.3.9 enable
     #              
     ipv4-family vpn-instance vpna
      peer 10.1.1.1 as-number 65410
    #
    tunnel-policy p1
     tunnel select-seq sr-lsp load-balance-number 2
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     segment-routing mpls
     segment-routing global-block 16000 23999
     frr
      loop-free-alternate level-1
      ti-lfa level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.16.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.17.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 20
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 200:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     segment-routing mpls
     segment-routing global-block 16000 23999
     frr
      loop-free-alternate level-1
      ti-lfa level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.19.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.2.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 172.17.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 30
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 1.1.1.9 enable
     #              
     ipv4-family vpn-instance vpna
      peer 10.2.1.1 as-number 65420
    #
    tunnel-policy p1
     tunnel select-seq sr-lsp load-balance-number 2
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     segment-routing mpls
     segment-routing global-block 16000 23999
     frr
      loop-free-alternate level-1
      ti-lfa level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.19.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 40
    #
    return
  • CE1 configuration file

    #
     sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.11.1.1 255.255.255.255
    #
    bgp 65410
     peer 10.1.1.2 as-number 100
     network 10.11.1.1 255.255.255.255
     #
     ipv4-family unicast
      peer 10.1.1.2 enable
    #
    return
  • CE2 configuration file

    #
     sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.22.2.2 255.255.255.255
    #
    bgp 65420
     peer 10.2.1.2 as-number 100
     network 10.22.2.2 255.255.255.255
     #
     ipv4-family unicast
      peer 10.2.1.2 enable
    #
    return

Example for Configuring Path Calculation Based on Different Flex-Algos in L3VPN over IS-IS SR-MPLS Flex-Algo LSP Scenarios

This section provides an example for configuring IS-IS SR-MPLS Flex-Algo LSPs to meet the path customization requirements of L3VPN users.

Networking Requirements

On the network shown in Figure 1-2653:
  • CE1 and CE2 belong to vpna.

  • The VPN target of vpna is 111:1.

To ensure secure communication between CE1 and CE2, configure L3VPN over IS-IS SR-MPLS Flex-Algo LSP. Although multiple links exist between PE1 and PE2, the service traffic must be forwarded over the specified path PE1 <-> P1 <-> PE2.

In this example, different Flex-Algos are defined to meet the service requirements of vpna.

Figure 1-2653 Networking for path calculation based on different Flex-Algos in L3VPN over IS-IS SR-MPLS Flex-Algo LSP scenarios

Interfaces 1 through 3 in this example stand for GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Configuration Notes

When performing configurations, note the following:

After a VPN instance is bound to a PE interface connected to a CE, Layer 3 configurations on this interface, such as IP address and routing protocol configurations, are automatically deleted. Add these configurations again if necessary.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IP addresses for interfaces.

  2. Configure IS-IS on the backbone network to ensure that PEs can interwork with each other.

  3. Enable MPLS on the backbone network.

  4. Configure FADs.
  5. Configure SR and enable IS-IS to advertise Flex-Algos for the establishment of Flex-Algo LSPs and common SR LSPs.

  6. Configure the color extended community attribute for routes on PEs. This example uses the export policy to set the color extended community attribute for route advertisement, but you can alternatively use the import policy.
  7. Establish an MP-IBGP peer relationship between the PEs.

  8. Configure a VPN instance on each PE, enable the IPv4 address family for the instance, and bind the interface that connects each PE to a CE to the VPN instance on that PE.
  9. Configure the mapping between the color extended community attribute and Flex-Algo.
  10. Configure a tunnel policy for each PE to use Flex-Algo LSPs as the preferred tunnels.
  11. Establish an EBGP peer relationship between each pair of a CE and a PE.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs on the PEs and Ps

  • VPN target and RD of vpna

  • SRGB ranges on the PEs and Ps

Procedure

  1. Configure interface IP addresses.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 172.18.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ip address 172.16.1.1 24
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 172.16.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 172.17.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 172.19.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 172.17.1.2 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 172.18.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 172.19.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP on the backbone network for the PEs and Ps to communicate. The following example uses IS-IS.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] isis enable 1
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] isis enable 1
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions on the backbone network.

    Because MPLS is automatically enabled on interfaces where IS-IS has been enabled, you do not need to configure MPLS on such interfaces.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Configure FADs.

    # Configure PE1.

    [~PE1] flex-algo identifier 128 
    [*PE1-flex-algo-128] priority 100   
    [*PE1-flex-algo-128] metric-type igp
    [*PE1-flex-algo-128] quit
    [*PE1] flex-algo identifier 129     
    [*PE1-flex-algo-129] priority 100
    [*PE1-flex-algo-129] metric-type igp
    [*PE1-flex-algo-129] quit
    [*PE1] commit

    # Configure P1.

    [~P1] flex-algo identifier 128 
    [*P1-flex-algo-128] priority 100   
    [*P1-flex-algo-128] metric-type igp
    [*P1-flex-algo-128] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] flex-algo identifier 128 
    [*PE2-flex-algo-128] priority 100   
    [*PE2-flex-algo-128] metric-type igp
    [*PE2-flex-algo-128] quit
    [*PE2] flex-algo identifier 129     
    [*PE2-flex-algo-129] priority 100
    [*PE2-flex-algo-129] metric-type igp
    [*PE2-flex-algo-129] quit
    [*PE2] commit

    # Configure P2.

    [~P2] flex-algo identifier 129     
    [*P2-flex-algo-129] priority 100
    [*P2-flex-algo-129] metric-type igp
    [*P2-flex-algo-129] quit
    [*P2] commit
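When several routers advertise a Flex-Algo definition (FAD) for the same algorithm, each node selects a single winning FAD: the one with the highest priority, with ties broken by the highest originating system ID (the RFC 9350 selection rule; the tie-breaker is stated here as background and is not visible in the CLI). In this example all FADs use priority 100 and identical contents, so the selection is consistent by construction. A Python sketch of the rule:

```python
def select_fad(fads):
    """Pick the winning FAD from those advertised for one Flex-Algo.

    fads: list of (priority, originating_system_id, metric_type).
    Highest priority wins; ties go to the highest originating
    system ID (RFC 9350 selection rule).
    """
    return max(fads, key=lambda f: (f[0], f[1]))

# FADs advertised for Flex-Algo 128 in this example
advertised = [
    (100, "0000.0000.0001", "igp"),  # PE1
    (100, "0000.0000.0002", "igp"),  # P1
    (100, "0000.0000.0003", "igp"),  # PE2
]
print(select_fad(advertised))
```

Note that P2 advertises only Flex-Algo 129 and P1 only Flex-Algo 128, which is what constrains each algorithm's topology to the intended path.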

  5. Configure SR on the backbone network and enable IS-IS to advertise Flex-Algos.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] advertise link attributes
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*PE1-isis-1] flex-algo 128 level-1
    [*PE1-isis-1] flex-algo 129 level-1
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid index 10
    [*PE1-LoopBack1] isis prefix-sid index 110 flex-algo 128
    [*PE1-LoopBack1] isis prefix-sid index 150 flex-algo 129
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] advertise link attributes
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*P1-isis-1] flex-algo 128 level-1
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis prefix-sid index 20
    [*P1-LoopBack1] isis prefix-sid index 220 flex-algo 128
    [*P1-LoopBack1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] advertise link attributes
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*PE2-isis-1] flex-algo 128 level-1
    [*PE2-isis-1] flex-algo 129 level-1
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid index 30
    [*PE2-LoopBack1] isis prefix-sid index 330 flex-algo 128
    [*PE2-LoopBack1] isis prefix-sid index 390 flex-algo 129
    [*PE2-LoopBack1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] advertise link attributes
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*P2-isis-1] flex-algo 129 level-1
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis prefix-sid index 40
    [*P2-LoopBack1] isis prefix-sid index 440 flex-algo 129
    [*P2-LoopBack1] quit
    [*P2] commit

    # After the configuration is complete, run the display tunnel-info all command on each PE. The command output shows that the SR LSPs have been established. The following example uses the command output on PE1 and PE2.

    [~PE1] display tunnel-info all
    Tunnel ID            Type                Destination                             Status                                             
    ----------------------------------------------------------------------------------------                                            
    0x000000002900000003 srbe-lsp            2.2.2.9                                 UP                                                 
    0x000000002900000005 srbe-lsp            4.4.4.9                                 UP                                                 
    0x000000002900000006 srbe-lsp            3.3.3.9                                 UP                                                 
    0x000000009300000009 flex-algo-lsp       3.3.3.9                                 UP                                                 
    0x00000000930000000a flex-algo-lsp       3.3.3.9                                 UP                                                 
    0x00000000930000000b flex-algo-lsp       2.2.2.9                                 UP                                                 
    0x00000000930000000c flex-algo-lsp       4.4.4.9                                 UP 
    [~PE2] display tunnel-info all
    Tunnel ID            Type                Destination                             Status                                             
    ----------------------------------------------------------------------------------------                                            
    0x000000002900000004 srbe-lsp            2.2.2.9                                 UP                                                 
    0x000000002900000005 srbe-lsp            1.1.1.9                                 UP                                                 
    0x000000002900000006 srbe-lsp            4.4.4.9                                 UP                                                 
    0x00000000930000000b flex-algo-lsp       2.2.2.9                                 UP                                                 
    0x00000000930000000c flex-algo-lsp       4.4.4.9                                 UP                                                 
    0x00000000930000000d flex-algo-lsp       1.1.1.9                                 UP                                                 
    0x00000000930000000e flex-algo-lsp       1.1.1.9                                 UP 

    # Run the display segment-routing prefix mpls forwarding flex-algo command to check the Flex-Algo-based SR label forwarding table.

    [~PE1] display segment-routing prefix mpls forwarding flex-algo                                                                      
    
                       Segment Routing Prefix MPLS Forwarding Information                                                               
                 --------------------------------------------------------------                                                         
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit                                                         
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State      Flexalgo             
    -----------------------------------------------------------------------------------------------------------------------             
    1.1.1.9/32         16110      NULL       Loop1             127.0.0.1        E     ---       1500    Active       128                
    2.2.2.9/32         16220      3          GE3/0/0           172.16.1.2       I&T   ---       1500    Active       128                
    3.3.3.9/32         16330      16330      GE3/0/0           172.16.1.2       I&T   ---       1500    Active       128                
    1.1.1.9/32         16150      NULL       Loop1             127.0.0.1        E     ---       1500    Active       129                
    3.3.3.9/32         16390      16390      GE1/0/0           172.18.1.2       I&T   ---       1500    Active       129                
    4.4.4.9/32         16440      3          GE1/0/0           172.18.1.2       I&T   ---       1500    Active       129                
    
    Total information(s): 6 
    [~P1] display segment-routing prefix mpls forwarding flex-algo
    
                       Segment Routing Prefix MPLS Forwarding Information                                                               
                 --------------------------------------------------------------                                                         
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit                                                         
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State      Flexalgo             
    -----------------------------------------------------------------------------------------------------------------------             
    1.1.1.9/32         16200      3          GE1/0/0           172.16.1.1       I&T   ---       1500    Active       128                
    2.2.2.9/32         16220      NULL       Loop1             127.0.0.1        E     ---       1500    Active       128                
    3.3.3.9/32         16330      3          GE2/0/0           172.17.1.2       I&T   ---       1500    Active       128                
    
    Total information(s): 3 
    [~P2] display segment-routing prefix mpls forwarding flex-algo
    
                       Segment Routing Prefix MPLS Forwarding Information                                                               
                 --------------------------------------------------------------                                                         
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit                                                         
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State      Flexalgo             
    -----------------------------------------------------------------------------------------------------------------------             
    1.1.1.9/32         16300      3          GE1/0/0           172.18.1.1       I&T   ---       1500    Active       129                
    3.3.3.9/32         16390      3          GE2/0/0           172.19.1.2       I&T   ---       1500    Active       129                
    4.4.4.9/32         16440      NULL       Loop1             127.0.0.1        E     ---       1500    Active       129                
    
    Total information(s): 3  
    [~PE2] display segment-routing prefix mpls forwarding flex-algo
    
                       Segment Routing Prefix MPLS Forwarding Information                                                               
                 --------------------------------------------------------------                                                         
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit                                                         
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State      Flexalgo             
    -----------------------------------------------------------------------------------------------------------------------             
    1.1.1.9/32         16110      16110      GE3/0/0           172.17.1.1       I&T   ---       1500    Active       128                
    2.2.2.9/32         16220      3          GE3/0/0           172.17.1.1       I&T   ---       1500    Active       128                
    3.3.3.9/32         16330      NULL       Loop1             127.0.0.1        E     ---       1500    Active       128                
    1.1.1.9/32         16150      16150      GE1/0/0           172.19.1.1       I&T   ---       1500    Active       129                
    3.3.3.9/32         16390      NULL       Loop1             127.0.0.1        E     ---       1500    Active       129                
    4.4.4.9/32         16440      3          GE1/0/0           172.19.1.1       I&T   ---       1500    Active       129                
    
    Total information(s): 6  
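    The labels shown in these forwarding tables follow the standard SR-MPLS rule: each label equals the SRGB base plus the prefix SID index. The following sketch (Python used purely for illustration, not a device artifact) reproduces the Flex-Algo 128 labels from the tables above, given the SRGB 16000-23999 configured on every node.

    ```python
    # Derive SR-MPLS labels from the SRGB base and prefix SID indexes,
    # as configured in this example (segment-routing global-block 16000 23999).
    SRGB_BASE = 16000

    def sid_to_label(index: int) -> int:
        """Map a prefix SID index to an MPLS label within the SRGB."""
        return SRGB_BASE + index

    # Flex-Algo 128 prefix SID indexes from this example:
    # PE1 = 110, P1 = 220, PE2 = 330
    labels = {name: sid_to_label(idx)
              for name, idx in {"PE1": 110, "P1": 220, "PE2": 330}.items()}
    print(labels)  # {'PE1': 16110, 'P1': 16220, 'PE2': 16330}
    ```

    These values match the Label column in the command outputs above, which is why all nodes are configured with the same SRGB range: a given index then yields the same label network-wide.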

  6. Configure route-policies.

    # Configure PE1.

    [~PE1] route-policy color100 permit node 1
    [*PE1-route-policy] apply extcommunity color 0:100
    [*PE1-route-policy] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] route-policy color100 permit node 1
    [*PE2-route-policy] apply extcommunity color 0:100
    [*PE2-route-policy] quit
    [*PE2] commit

  7. Establish an MP-IBGP peer relationship between the PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] ipv4-family vpnv4
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 route-policy color100 export
    [*PE1-bgp-af-vpnv4] commit
    [~PE1-bgp-af-vpnv4] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [~PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv4-family vpnv4
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 route-policy color100 export
    [*PE2-bgp-af-vpnv4] commit
    [~PE2-bgp-af-vpnv4] quit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp peer or display bgp vpnv4 all peer command on each PE to check whether a BGP peer relationship has been established between the PEs. If the Established state is displayed in the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        2        6     0     00:00:12   Established   0
    [~PE1] display bgp vpnv4 all peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100   12      18         0     00:09:38   Established   0

  8. Configure a VPN instance on each PE, enable the IPv4 address family for the instance, and bind the interface that connects each PE to a CE to the VPN instance on that PE.

    # Configure PE1.

    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    # Assign an IP address to each interface on the CEs, as shown in Figure 1-2653. For configuration details, see Configuration Files in this section.

    After the configuration is complete, run the display ip vpn-instance verbose command on the PEs to check VPN instance configurations. Check that each PE can successfully ping its connected CE.

    If a PE has multiple interfaces bound to the same VPN instance, you need to use the -a source-ip-address parameter to specify a source IP address when running the ping -vpn-instance vpn-instance-name -a source-ip-address dest-ip-address command to ping the CE connected to the remote PE. Otherwise, the ping operation may fail.

  9. Configure the mapping between the color extended community attribute and Flex-Algo.

    # Configure PE1.

    [~PE1] flex-algo color-mapping
    [*PE1-flex-algo-color-mapping] color 100 flex-algo 128 
    [*PE1-flex-algo-color-mapping] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] flex-algo color-mapping
    [*PE2-flex-algo-color-mapping] color 100 flex-algo 128 
    [*PE2-flex-algo-color-mapping] quit
    [*PE2] commit
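    Conceptually, the ingress PE resolves a VPN route's color extended community through this mapping to select a Flex-Algo, and the route is then carried over the corresponding Flex-Algo LSP. The following is a rough model of that lookup, not the device implementation:

    ```python
    # Hypothetical model of the color-to-Flex-Algo steering decision.
    # Reflects "color 100 flex-algo 128" configured in this example.
    COLOR_MAPPING = {100: 128}

    def select_flex_algo(route_color):
        """Return the Flex-Algo ID used to resolve a route's next hop,
        or None if the color is not mapped."""
        return COLOR_MAPPING.get(route_color)

    # A VPNv4 route carrying extcommunity color 0:100 resolves to
    # Flex-Algo 128, so its next hop is reached over the Flex-Algo 128 LSP.
    print(select_flex_algo(100))  # 128
    print(select_flex_algo(200))  # None (unmapped colors fall back per policy)
    ```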

  10. Configure a tunnel policy for each PE to use Flex-Algo LSPs as the preferred tunnels.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq flex-algo-lsp load-balance-number 1 unmix 
    [*PE1-tunnel-policy-p1] quit
    [*PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq flex-algo-lsp load-balance-number 1 unmix
    [*PE2-tunnel-policy-p1] quit
    [*PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] commit

  11. Establish an EBGP peer relationship between each PE and its connected CE.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ip address 10.11.1.1 32
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.1.1.2 as-number 100
    [*CE1-bgp] network 10.11.1.1 32
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see Configuration Files in this section.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv4-family vpn-instance vpna
    [*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
    [*PE1-bgp-vpna] commit
    [~PE1-bgp-vpna] quit
    [~PE1-bgp] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

    After the configuration is complete, run the display bgp vpnv4 vpn-instance peer command on the PEs to check whether BGP peer relationships have been established between the PEs and CEs. If the Established state is displayed in the command output, the BGP peer relationships have been established successfully.

    The following example uses the command output on PE1 to show that a BGP peer relationship has been established between PE1 and CE1.

    [~PE1] display bgp vpnv4 vpn-instance vpna peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
    
     VPN-Instance vpna, Router ID 1.1.1.9:
     Total number of peers : 1            Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      10.1.1.1        4   65410  11     9          0     00:06:37   Established  1

  12. Verify the configuration.

    Run the display ip routing-table vpn-instance command on each PE. The command output shows the routes to CE loopback interfaces.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table: vpna
             Destinations : 7        Routes : 7
    Destination/Mask    Proto  Pre  Cost     Flags NextHop         Interface
         10.1.1.0/24    Direct 0    0        D     10.1.1.2        GigabitEthernet2/0/0
         10.1.1.2/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.1.1.255/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.11.1.1/32     EBGP   255  0        RD    10.1.1.1        GigabitEthernet2/0/0
       10.22.2.2/32     IBGP   255  0        RD    3.3.3.9         GigabitEthernet3/0/0
         127.0.0.0/8    Direct 0    0        D     127.0.0.1       InLoopBack0 
    255.255.255.255/32  Direct 0    0        D     127.0.0.1       InLoopBack0

    Run the ping command. The command output shows that CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at 10.22.2.2.

    [~CE1] ping -a 10.11.1.1 10.22.2.2
      PING 10.22.2.2: 56  data bytes, press CTRL_C to break                                                                             
        Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=252 time=5 ms                                                                     
        Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=252 time=3 ms                                                                     
        Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=252 time=3 ms                                                                     
        Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=252 time=3 ms                                                                     
        Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=252 time=4 ms                                                                     
    
      --- 10.22.2.2 ping statistics ---                                                                                                 
        5 packet(s) transmitted                                                                                                         
        5 packet(s) received                                                                                                            
        0.00% packet loss                                                                                                               
        round-trip min/avg/max = 3/3/5 ms

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 100:1
      tnl-policy p1
      apply-label per-instance
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #   
    flex-algo identifier 128
     priority 100
    #
    flex-algo identifier 129
     priority 100
    #
    flex-algo color-mapping
     color 100 flex-algo 128
    #            
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     advertise link attributes
     network-entity 10.0000.0000.0001.00
     segment-routing mpls
     segment-routing global-block 16000 23999
     flex-algo 128 level-1
     flex-algo 129 level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.1.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 172.16.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 10
     isis prefix-sid index 110 flex-algo 128
     isis prefix-sid index 150 flex-algo 129
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 3.3.3.9 enable
      peer 3.3.3.9 route-policy color100 export
     #              
     ipv4-family vpn-instance vpna
      peer 10.1.1.1 as-number 65410
    #
    route-policy color100 permit node 1                                                                                                 
     apply extcommunity color 0:100
    #
    tunnel-policy p1
     tunnel select-seq flex-algo-lsp load-balance-number 1 unmix 
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls   
    #                                                                                                                                   
    flex-algo identifier 128                                                                                                            
     priority 100           
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     advertise link attributes
     network-entity 10.0000.0000.0002.00
     segment-routing mpls
     segment-routing global-block 16000 23999
     flex-algo 128 level-1 
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.16.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.17.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 20
     isis prefix-sid index 220 flex-algo 128 
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 200:1
      tnl-policy p1
      apply-label per-instance
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    flex-algo identifier 128
     priority 100
    #
    flex-algo identifier 129
     priority 100
    #
    flex-algo color-mapping
     color 100 flex-algo 128
    #
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     advertise link attributes
     network-entity 10.0000.0000.0003.00
     segment-routing mpls
     segment-routing global-block 16000 23999
     flex-algo 128 level-1
     flex-algo 129 level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.19.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.2.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 172.17.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 30
     isis prefix-sid index 330 flex-algo 128                                                                                            
     isis prefix-sid index 390 flex-algo 129
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 1.1.1.9 enable
      peer 1.1.1.9 route-policy color100 export
     #              
     ipv4-family vpn-instance vpna
      peer 10.2.1.1 as-number 65420
    #
    route-policy color100 permit node 1                                                                                                 
     apply extcommunity color 0:100
    #
    tunnel-policy p1
     tunnel select-seq flex-algo-lsp load-balance-number 1 unmix 
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls   
    #                                                                                                                                   
    flex-algo identifier 129                                                                                                            
     priority 100           
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     advertise link attributes
     network-entity 10.0000.0000.0004.00
     segment-routing mpls
     segment-routing global-block 16000 23999
     flex-algo 129 level-1 
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.19.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 40
     isis prefix-sid index 440 flex-algo 129 
    #
    return
  • CE1 configuration file

    #
     sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.11.1.1 255.255.255.255
    #
    bgp 65410
     peer 10.1.1.2 as-number 100
     network 10.11.1.1 255.255.255.255
     #
     ipv4-family unicast
      peer 10.1.1.2 enable
    #
    return
  • CE2 configuration file

    #
     sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.22.2.2 255.255.255.255
    #
    bgp 65420
     peer 10.2.1.2 as-number 100
     network 10.22.2.2 255.255.255.255
     #
     ipv4-family unicast
      peer 10.2.1.2 enable
    #
    return

Example for Configuring Path Calculation Based on Affinity Attributes in L3VPN over IS-IS SR-MPLS Flex-Algo LSP Scenarios

This section provides an example for configuring IS-IS SR-MPLS Flex-Algo LSPs to meet the path customization requirements of L3VPN users.

Networking Requirements

On the network shown in Figure 1-2654:
  • CE1 and CE2 belong to vpna.

  • The VPN target of vpna is 111:1.

To ensure secure communication between CE1 and CE2, configure L3VPN over IS-IS SR-MPLS Flex-Algo LSP. Although multiple paths exist between PE1 and PE2, the service traffic must be forwarded along the specified path PE1 <-> P1 <-> PE2.

In this example, the affinity attributes are defined to meet the service requirements of vpna.

Figure 1-2654 Networking for path calculation based on affinity attributes in L3VPN over IS-IS SR-MPLS Flex-Algo LSP scenarios

Interfaces 1 through 3 in this example stand for GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Configuration Notes

When performing configurations, note the following:

  • After a VPN instance is bound to a PE interface connected to a CE, Layer 3 configurations on this interface (such as the IP address and routing protocol configurations) are automatically deleted and must be reconfigured if necessary.
  • In this example, the affinity attributes are used. To transmit the affinity attributes between devices, you need to run the traffic-eng command to enable TE for the corresponding IS-IS process.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure interface IP addresses.

  2. Configure IS-IS on the backbone network to ensure that PEs can interwork with each other.

  3. Enable MPLS on the backbone network.

  4. Configure FADs.
  5. Configure SR and enable IS-IS to advertise Flex-Algos for the establishment of Flex-Algo LSPs and common SR LSPs.

  6. Configure the color extended community attribute for routes on PEs. This example uses the export policy to set the color extended community attribute for route advertisement, but you can alternatively use the import policy.
  7. Establish an MP-IBGP peer relationship between the PEs.

  8. Configure a VPN instance on each PE, enable the IPv4 address family for the instance, and bind the interface that connects each PE to a CE to the VPN instance on that PE.
  9. Configure the mapping between the color extended community attribute and Flex-Algo.
  10. Configure a tunnel policy for each PE to use Flex-Algo LSPs as the preferred tunnels.
  11. Establish an EBGP peer relationship between each PE and its connected CE.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs on the PEs and Ps

  • VPN target and RD of vpna

  • SRGB ranges on the PEs and Ps

Procedure

  1. Configure interface IP addresses.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 172.18.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ip address 172.16.1.1 24
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 172.16.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 172.17.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 172.19.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 172.17.1.2 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 172.18.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 172.19.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP on the backbone network for the PEs and Ps to communicate. IS-IS is used as an example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] isis enable 1
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] isis enable 1
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions on the backbone network.

    Because MPLS is automatically enabled on interfaces on which IS-IS has been enabled, no separate MPLS configuration is required on those interfaces.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Configure Flex-Algo link attributes.

    # Configure PE1.

    [~PE1] te attribute enable            
    [*PE1] path-constraint affinity-mapping
    [*PE1-pc-af-map] attribute green bit-sequence 1
    [*PE1-pc-af-map] attribute red bit-sequence 9
    [*PE1-pc-af-map] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] te link-attribute-application flex-algo  
    [*PE1-GigabitEthernet1/0/0-te-link-attribute-application] link administrative group name red      
    [*PE1-GigabitEthernet1/0/0-te-link-attribute-application] quit
    [*PE1-GigabitEthernet1/0/0] quit 
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] te link-attribute-application flex-algo
    [*PE1-GigabitEthernet3/0/0-te-link-attribute-application] link administrative group name green
    [*PE1-GigabitEthernet3/0/0-te-link-attribute-application] quit                           
    [*PE1-GigabitEthernet3/0/0] quit                                       
    [*PE1] commit

    # Configure P1.

    [~P1] te attribute enable            
    [*P1] path-constraint affinity-mapping
    [*P1-pc-af-map] attribute green bit-sequence 1
    [*P1-pc-af-map] attribute red bit-sequence 9
    [*P1-pc-af-map] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] te link-attribute-application flex-algo  
    [*P1-GigabitEthernet1/0/0-te-link-attribute-application] link administrative group name green      
    [*P1-GigabitEthernet1/0/0-te-link-attribute-application] quit
    [*P1-GigabitEthernet1/0/0] quit 
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] te link-attribute-application flex-algo
    [*P1-GigabitEthernet2/0/0-te-link-attribute-application] link administrative group name green
    [*P1-GigabitEthernet2/0/0-te-link-attribute-application] quit                           
    [*P1-GigabitEthernet2/0/0] quit                                       
    [*P1] commit

    # Configure PE2.

    [~PE2] te attribute enable            
    [*PE2] path-constraint affinity-mapping
    [*PE2-pc-af-map] attribute green bit-sequence 1
    [*PE2-pc-af-map] attribute red bit-sequence 9
    [*PE2-pc-af-map] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] te link-attribute-application flex-algo  
    [*PE2-GigabitEthernet1/0/0-te-link-attribute-application] link administrative group name red      
    [*PE2-GigabitEthernet1/0/0-te-link-attribute-application] quit
    [*PE2-GigabitEthernet1/0/0] quit 
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] te link-attribute-application flex-algo
    [*PE2-GigabitEthernet3/0/0-te-link-attribute-application] link administrative group name green
    [*PE2-GigabitEthernet3/0/0-te-link-attribute-application] quit                           
    [*PE2-GigabitEthernet3/0/0] quit                                       
    [*PE2] commit

    # Configure P2.

    [~P2] te attribute enable            
    [*P2] path-constraint affinity-mapping
    [*P2-pc-af-map] attribute green bit-sequence 1
    [*P2-pc-af-map] attribute red bit-sequence 9
    [*P2-pc-af-map] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] te link-attribute-application flex-algo  
    [*P2-GigabitEthernet1/0/0-te-link-attribute-application] link administrative group name red      
    [*P2-GigabitEthernet1/0/0-te-link-attribute-application] quit
    [*P2-GigabitEthernet1/0/0] quit 
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] te link-attribute-application flex-algo
    [*P2-GigabitEthernet2/0/0-te-link-attribute-application] link administrative group name red
    [*P2-GigabitEthernet2/0/0-te-link-attribute-application] quit                           
    [*P2-GigabitEthernet2/0/0] quit                                       
    [*P2] commit
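    The affinity-mapping configured above binds each color name to a bit position in the link administrative group bitmask, and each interface then advertises the mask for its assigned color. The following is a minimal illustrative sketch of that name-to-bitmask relationship, not the device's implementation; the assumption that bit-sequence n corresponds to bit n-1 of the mask is made only for illustration.

    ```python
    # Illustrative sketch: map affinity color names to admin-group bitmasks.
    # Assumption (for illustration only): "bit-sequence n" sets bit n-1.

    def affinity_mask(bit_sequence):
        """Return the admin-group bitmask for a given bit-sequence value."""
        return 1 << (bit_sequence - 1)

    # Matches "attribute green bit-sequence 1" / "attribute red bit-sequence 9".
    ATTRIBUTES = {
        "green": affinity_mask(1),  # 0x001
        "red":   affinity_mask(9),  # 0x100
    }

    print(hex(ATTRIBUTES["green"]))  # 0x1
    print(hex(ATTRIBUTES["red"]))    # 0x100
    ```

    In this example, the PE1-P1 and P1-PE2 links advertise the green mask, while the PE1-P2 and P2-PE2 links advertise the red mask.
    
    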

  5. Configure FADs.

    # Configure PE1.

    [~PE1] flex-algo identifier 128 
    [*PE1-flex-algo-128] priority 100   
    [*PE1-flex-algo-128] affinity include-all green
    [*PE1-flex-algo-128] quit
    [*PE1] commit

    # Configure P1.

    [~P1] flex-algo identifier 128 
    [*P1-flex-algo-128] priority 100   
    [*P1-flex-algo-128] affinity include-all green
    [*P1-flex-algo-128] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] flex-algo identifier 128 
    [*PE2-flex-algo-128] priority 100   
    [*PE2-flex-algo-128] affinity include-all green
    [*PE2-flex-algo-128] quit
    [*PE2] commit

    # Configure P2.

    [~P2] flex-algo identifier 128     
    [*P2-flex-algo-128] priority 100
    [*P2-flex-algo-128] affinity include-all green
    [*P2-flex-algo-128] quit
    [*P2] commit
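    A FAD with affinity include-all green admits a link into the Flex-Algo 128 topology only if the link's administrative group carries every bit of the include-all set, so the red links between PE1-P2 and P2-PE2 are pruned. A conceptual sketch of this pruning rule follows; the bit values are assumptions carried over from the affinity-mapping in this example.

    ```python
    # Sketch of the include-all pruning rule applied by a Flex-Algo definition.
    # Bit assignments are assumed from the affinity-mapping in this example.
    GREEN = 0x1    # "attribute green bit-sequence 1"
    RED   = 0x100  # "attribute red bit-sequence 9"

    def link_in_topology(link_admin_group, include_all):
        """Keep a link only if it carries ALL bits of the include-all set."""
        return (link_admin_group & include_all) == include_all

    # Green links (PE1-P1, P1-PE2) join Flex-Algo 128; red links are pruned.
    print(link_in_topology(GREEN, GREEN))  # True
    print(link_in_topology(RED, GREEN))    # False
    ```

    As a result, Flex-Algo 128 paths between PE1 and PE2 can only traverse P1.
    
    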

  6. Configure SR on the backbone network and enable IS-IS to advertise Flex-Algos.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-1
    [*PE1-isis-1] advertise link attributes
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*PE1-isis-1] flex-algo 128 level-1
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid index 10
    [*PE1-LoopBack1] isis prefix-sid index 110 flex-algo 128
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] traffic-eng level-1
    [*P1-isis-1] advertise link attributes
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*P1-isis-1] flex-algo 128 level-1
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis prefix-sid index 20
    [*P1-LoopBack1] isis prefix-sid index 220 flex-algo 128
    [*P1-LoopBack1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-1
    [*PE2-isis-1] advertise link attributes
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*PE2-isis-1] flex-algo 128 level-1
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid index 30
    [*PE2-LoopBack1] isis prefix-sid index 330 flex-algo 128
    [*PE2-LoopBack1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] traffic-eng level-1
    [*P2-isis-1] advertise link attributes
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*P2-isis-1] flex-algo 128 level-1
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis prefix-sid index 40
    [*P2-LoopBack1] isis prefix-sid index 440 flex-algo 128
    [*P2-LoopBack1] quit
    [*P2] commit

    # After the configuration is complete, run the display tunnel-info all command on each PE. The command output shows that the SR LSPs have been established. The following example uses the command output on PE1 and PE2.

    [~PE1] display tunnel-info all
    Tunnel ID            Type                Destination                             Status                                             
    ----------------------------------------------------------------------------------------                                            
    0x000000002900000003 srbe-lsp            2.2.2.9                                 UP                                                 
    0x000000002900000005 srbe-lsp            4.4.4.9                                 UP                                                 
    0x000000002900000006 srbe-lsp            3.3.3.9                                 UP                                                 
    0x000000009300000041 flex-algo-lsp       2.2.2.9                                 UP                                                 
    0x000000009300000042 flex-algo-lsp       3.3.3.9                                 UP 
    [~PE2] display tunnel-info all
    Tunnel ID            Type                Destination                             Status                                             
    ----------------------------------------------------------------------------------------                                            
    0x000000002900000004 srbe-lsp            2.2.2.9                                 UP                                                 
    0x000000002900000005 srbe-lsp            1.1.1.9                                 UP                                                 
    0x000000002900000006 srbe-lsp            4.4.4.9                                 UP                                                 
    0x000000009300000041 flex-algo-lsp       2.2.2.9                                 UP                                                 
    0x000000009300000042 flex-algo-lsp       1.1.1.9                                 UP 

    # Run the display segment-routing prefix mpls forwarding flex-algo command to check the Flex-Algo-based SR label forwarding table.

    [~PE1] display segment-routing prefix mpls forwarding flex-algo                                                                      
    
                       Segment Routing Prefix MPLS Forwarding Information                                                               
                 --------------------------------------------------------------                                                         
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit                                                         
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State      Flexalgo             
    -----------------------------------------------------------------------------------------------------------------------             
    1.1.1.9/32         16110      NULL       Loop1             127.0.0.1        E     ---       1500    Active       128                
    2.2.2.9/32         16220      3          GE3/0/0           172.16.1.2       I&T   ---       1500    Active       128                
    3.3.3.9/32         16330      16330      GE3/0/0           172.16.1.2       I&T   ---       1500    Active       128                
    
    Total information(s): 3 
    [~P1] display segment-routing prefix mpls forwarding flex-algo
    
                       Segment Routing Prefix MPLS Forwarding Information                                                               
                 --------------------------------------------------------------                                                         
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit                                                         
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State      Flexalgo             
    -----------------------------------------------------------------------------------------------------------------------             
    1.1.1.9/32         16110      3          GE1/0/0           172.16.1.1       I&T   ---       1500    Active       128                
    2.2.2.9/32         16220      NULL       Loop1             127.0.0.1        E     ---       1500    Active       128                
    3.3.3.9/32         16330      3          GE2/0/0           172.17.1.2       I&T   ---       1500    Active       128                
    
    Total information(s): 3
    [~P2] display segment-routing prefix mpls forwarding flex-algo
    
                       Segment Routing Prefix MPLS Forwarding Information                                                               
                 --------------------------------------------------------------                                                         
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit                                                         
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State      Flexalgo             
    -----------------------------------------------------------------------------------------------------------------------             
    4.4.4.9/32         16440      NULL       Loop1             127.0.0.1        E     ---       1500    Active       128                
    
    Total information(s): 1 
    [~PE2] display segment-routing prefix mpls forwarding flex-algo
    
                       Segment Routing Prefix MPLS Forwarding Information                                                               
                 --------------------------------------------------------------                                                         
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit                                                         
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State      Flexalgo             
    -----------------------------------------------------------------------------------------------------------------------             
    1.1.1.9/32         16110      16110      GE3/0/0           172.17.1.1       I&T   ---       1500    Active       128                
    2.2.2.9/32         16220      3          GE3/0/0           172.17.1.1       I&T   ---       1500    Active       128                
    3.3.3.9/32         16330      NULL       Loop1             127.0.0.1        E     ---       1500    Active       128                
    
    Total information(s): 3 
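    In the forwarding tables above, each Flex-Algo label is the SRGB start value plus the configured prefix SID index: for example, 16000 + 110 = 16110 for PE1's loopback. The derivation can be sketched with the values used in this example:

    ```python
    # Prefix SID label derivation: label = SRGB base + SID index.
    SRGB_BASE = 16000  # from "segment-routing global-block 16000 23999"

    # Flex-Algo 128 SID indexes configured on each loopback in this example.
    flex_algo_sids = {
        "1.1.1.9/32": 110,  # PE1: "isis prefix-sid index 110 flex-algo 128"
        "2.2.2.9/32": 220,  # P1
        "3.3.3.9/32": 330,  # PE2
        "4.4.4.9/32": 440,  # P2
    }

    for prefix, index in flex_algo_sids.items():
        print(prefix, "->", SRGB_BASE + index)
    # Yields labels 16110, 16220, 16330, and 16440, matching the tables above.
    ```
    
    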

  7. Configure route-policies.

    # Configure PE1.

    [~PE1] route-policy color100 permit node 1
    [*PE1-route-policy] apply extcommunity color 0:100
    [*PE1-route-policy] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] route-policy color100 permit node 1
    [*PE2-route-policy] apply extcommunity color 0:100
    [*PE2-route-policy] quit
    [*PE2] commit

  8. Establish an MP-IBGP peer relationship between the PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] ipv4-family vpnv4
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 route-policy color100 export
    [*PE1-bgp-af-vpnv4] commit
    [~PE1-bgp-af-vpnv4] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [~PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv4-family vpnv4
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 route-policy color100 export
    [*PE2-bgp-af-vpnv4] commit
    [~PE2-bgp-af-vpnv4] quit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp peer or display bgp vpnv4 all peer command on each PE to check whether a BGP peer relationship has been established between the PEs. If Established is displayed in the State field of the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        2        6     0     00:00:12   Established   0
    [~PE1] display bgp vpnv4 all peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100   12      18         0     00:09:38   Established   0

  9. Configure a VPN instance on each PE, enable the IPv4 address family for the instance, and bind each PE's CE-facing interface to the VPN instance.

    # Configure PE1.

    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    # Assign an IP address to each interface on the CEs, as shown in Figure 1-2654. For configuration details, see Configuration Files in this section.

    After the configuration is complete, run the display ip vpn-instance verbose command on the PEs to check VPN instance configurations. Check that each PE can successfully ping its connected CE.

    If a PE has multiple interfaces bound to the same VPN instance, you need to use the -a source-ip-address parameter to specify a source IP address when running the ping -vpn-instance vpn-instance-name -a source-ip-address dest-ip-address command to ping the CE connected to the remote PE. Otherwise, the ping operation may fail.

  10. Configure the mapping between the color extended community attribute and Flex-Algo.

    # Configure PE1.

    [~PE1] flex-algo color-mapping
    [*PE1-flex-algo-color-mapping] color 100 flex-algo 128 
    [*PE1-flex-algo-color-mapping] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] flex-algo color-mapping
    [*PE2-flex-algo-color-mapping] color 100 flex-algo 128 
    [*PE2-flex-algo-color-mapping] quit
    [*PE2] commit
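    With this mapping in place, when a PE receives a VPNv4 route carrying the color 0:100 extended community, it resolves the route's next hop over the Flex-Algo 128 LSP rather than the default SR-MPLS BE LSP. The lookup can be pictured as follows; the data structures and LSP naming are purely illustrative, not the device's internals.

    ```python
    # Conceptual sketch of color-based tunnel resolution (illustrative only).
    COLOR_TO_FLEX_ALGO = {100: 128}  # from "color 100 flex-algo 128"

    # Hypothetical view of the Flex-Algo LSPs seen in "display tunnel-info all"
    # on PE1, keyed by (flex-algo ID, LSP destination).
    FLEX_ALGO_LSPS = {(128, "3.3.3.9"): "flex-algo-lsp to 3.3.3.9"}

    def resolve_tunnel(route_color, next_hop):
        """Map a route's color to a Flex-Algo, then find a matching LSP."""
        algo = COLOR_TO_FLEX_ALGO.get(route_color)
        if algo is None:
            return None  # no mapping: fall back to other tunnel types
        return FLEX_ALGO_LSPS.get((algo, next_hop))

    print(resolve_tunnel(100, "3.3.3.9"))  # flex-algo-lsp to 3.3.3.9
    ```
    
    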

  11. Configure a tunnel policy for each PE to use Flex-Algo LSPs as the preferred tunnels.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq flex-algo-lsp load-balance-number 1 unmix 
    [*PE1-tunnel-policy-p1] quit
    [*PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq flex-algo-lsp load-balance-number 1 unmix
    [*PE2-tunnel-policy-p1] quit
    [*PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] commit

  12. Establish an EBGP peer relationship between each PE and its connected CE.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ip address 10.11.1.1 32
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.1.1.2 as-number 100
    [*CE1-bgp] network 10.11.1.1 32
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see Configuration Files in this section.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv4-family vpn-instance vpna
    [*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
    [*PE1-bgp-vpna] commit
    [~PE1-bgp-vpna] quit
    [~PE1-bgp] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

    After the configuration is complete, run the display bgp vpnv4 vpn-instance peer command on the PEs to check whether BGP peer relationships have been established between the PEs and CEs. If the Established state is displayed in the command output, the BGP peer relationships have been established successfully.

    The following example uses the command output on PE1 to show that a BGP peer relationship has been established between PE1 and CE1.

    [~PE1] display bgp vpnv4 vpn-instance vpna peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
    
     VPN-Instance vpna, Router ID 1.1.1.9:
     Total number of peers : 1            Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      10.1.1.1        4   65410  11     9          0     00:06:37   Established  1

  13. Verify the configuration.

    Run the display ip routing-table vpn-instance command on each PE. The command output shows the routes to CE loopback interfaces.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table: vpna
             Destinations : 7        Routes : 7
    Destination/Mask    Proto  Pre  Cost     Flags NextHop         Interface
         10.1.1.0/24    Direct 0    0        D     10.1.1.2        GigabitEthernet2/0/0
         10.1.1.2/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.1.1.255/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.11.1.1/32     EBGP   255  0        RD    10.1.1.1        GigabitEthernet2/0/0
       10.22.2.2/32     IBGP   255  0        RD    3.3.3.9         GigabitEthernet3/0/0
         127.0.0.0/8    Direct 0    0        D     127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct 0    0        D     127.0.0.1       InLoopBack0

    Run the ping command. The command output shows that CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at 10.22.2.2.

    [~CE1] ping -a 10.11.1.1 10.22.2.2
      PING 10.22.2.2: 56  data bytes, press CTRL_C to break
        Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=252 time=72 ms
        Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=252 time=34 ms
        Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=252 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=252 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=252 time=34 ms
      --- 10.22.2.2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 34/48/72 ms  

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 100:1
      tnl-policy p1
      apply-label per-instance
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    # 
    te attribute enable 
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #
    path-constraint affinity-mapping                                    
     attribute green bit-sequence 1                                                
     attribute red bit-sequence 9
    #   
    flex-algo identifier 128
     priority 100
     affinity include-all green
    #
    flex-algo color-mapping
     color 100 flex-algo 128
    #            
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     advertise link attributes
     network-entity 10.0000.0000.0001.00
     traffic-eng level-1 
     segment-routing mpls
     segment-routing global-block 16000 23999
     flex-algo 128 level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.1 255.255.255.0
     isis enable 1  
     te link-attribute-application flex-algo
      link administrative group name red 
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.1.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 172.16.1.1 255.255.255.0
     isis enable 1
     te link-attribute-application flex-algo                 
      link administrative group name green 
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 10
     isis prefix-sid index 110 flex-algo 128
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 3.3.3.9 enable
      peer 3.3.3.9 route-policy color100 export
     #              
     ipv4-family vpn-instance vpna
      peer 10.1.1.1 as-number 65410
    #
    route-policy color100 permit node 1                                                                                                 
     apply extcommunity color 0:100
    #
    tunnel-policy p1
     tunnel select-seq flex-algo-lsp load-balance-number 1 unmix 
    #
    return
  • P1 configuration file

    #
    sysname P1
    # 
    te attribute enable 
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls   
    #
    path-constraint affinity-mapping                                    
     attribute green bit-sequence 1                                                
     attribute red bit-sequence 9
    #                                                                                                                                   
    flex-algo identifier 128                                                                                                            
     priority 100  
     affinity include-all green         
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     advertise link attributes
     network-entity 10.0000.0000.0002.00
     traffic-eng level-1 
     segment-routing mpls
     segment-routing global-block 16000 23999
     flex-algo 128 level-1 
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.16.1.2 255.255.255.0
     isis enable 1  
     te link-attribute-application flex-algo                 
      link administrative group name green
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.17.1.1 255.255.255.0
     isis enable 1 
     te link-attribute-application flex-algo                 
      link administrative group name green
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 20
     isis prefix-sid index 220 flex-algo 128 
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 200:1
      tnl-policy p1
      apply-label per-instance
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    # 
    te attribute enable 
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #
    path-constraint affinity-mapping                                    
     attribute green bit-sequence 1                                                
     attribute red bit-sequence 9
    #               
    flex-algo identifier 128
     priority 100
     affinity include-all green
    #
    flex-algo color-mapping
     color 100 flex-algo 128
    #
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     advertise link attributes
     network-entity 10.0000.0000.0003.00
     traffic-eng level-1 
     segment-routing mpls
     segment-routing global-block 16000 23999
     flex-algo 128 level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.19.1.2 255.255.255.0
     isis enable 1  
     te link-attribute-application flex-algo
      link administrative group name red 
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.2.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 172.17.1.2 255.255.255.0
     isis enable 1  
     te link-attribute-application flex-algo                 
      link administrative group name green
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 30
     isis prefix-sid index 330 flex-algo 128                                                                                            
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 1.1.1.9 enable
      peer 1.1.1.9 route-policy color100 export
     #              
     ipv4-family vpn-instance vpna
      peer 10.2.1.1 as-number 65420
    #
    route-policy color100 permit node 1                                                                                                 
     apply extcommunity color 0:100
    #
    tunnel-policy p1
     tunnel select-seq flex-algo-lsp load-balance-number 1 unmix 
    #
    return
  • P2 configuration file

    #
    sysname P2
    # 
    te attribute enable 
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls   
    #
    path-constraint affinity-mapping                                    
     attribute green bit-sequence 1                                                
     attribute red bit-sequence 9
    #                                                                                                                                   
    flex-algo identifier 128                                                                                                            
     priority 100  
     affinity include-all green         
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     advertise link attributes
     network-entity 10.0000.0000.0004.00
     traffic-eng level-1 
     segment-routing mpls
     segment-routing global-block 16000 23999
     flex-algo 128 level-1 
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.2 255.255.255.0
     isis enable 1  
     te link-attribute-application flex-algo
      link administrative group name red 
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.19.1.1 255.255.255.0
     isis enable 1  
     te link-attribute-application flex-algo      
      link administrative group name red 
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 40
     isis prefix-sid index 440 flex-algo 128 
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.11.1.1 255.255.255.255
    #
    bgp 65410
     peer 10.1.1.2 as-number 100
     network 10.11.1.1 255.255.255.255
     #
     ipv4-family unicast
      peer 10.1.1.2 enable
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.22.2.2 255.255.255.255
    #
    bgp 65420
     peer 10.2.1.2 as-number 100
     network 10.22.2.2 255.255.255.255
     #
     ipv4-family unicast
      peer 10.2.1.2 enable
    #
    return

Example for Configuring L3VPN over OSPF SR-MPLS BE

L3VPN services are configured to allow users within the same VPN to securely access each other.

Networking Requirements

In Figure 1-2655:
  • CE1 and CE2 belong to vpna.

  • The VPN-target attribute of vpna is 111:1.

L3VPN services recurse to an OSPF SR-MPLS BE tunnel to allow users within the same VPN to securely access each other. Because multiple links exist between the PEs on the public network, traffic needs to be load-balanced among these links.

Figure 1-2655 L3VPN recursive to an OSPF SR-MPLS BE tunnel

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Configuration Notes

During the configuration process, note the following:

After a VPN instance is bound to a PE interface connected to a CE, Layer 3 configurations on this interface, such as IP address and routing protocol configurations, are automatically deleted. Add these configurations again if necessary.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure OSPF on the backbone network to ensure that PEs can interwork with each other.

  2. Enable MPLS on the backbone network, configure SR, and establish SR LSPs. Enable TI-LFA FRR.

  3. Configure IPv4 address family VPN instances on the PEs and bind each interface that connects a PE to a CE to a VPN instance.

  4. Establish an MP-IBGP peer relationship between the PEs for them to exchange routing information.

  5. Establish EBGP peer relationships between the PEs and CEs for them to exchange routing information.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of the PEs and P

  • vpna's VPN-target and RD

  • SRGB ranges on the PEs and P

Procedure

  1. Configure IP addresses for interfaces.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 172.18.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ip address 172.16.1.1 24
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 172.16.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 172.17.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 172.19.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 172.17.1.2 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 172.18.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 172.19.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP on the backbone network for the PEs and Ps to communicate. OSPF is used as an example.

    # Configure PE1.

    [~PE1] ospf 1
    [*PE1-ospf-1] opaque-capability enable
    [*PE1-ospf-1] area 0
    [*PE1-ospf-1-area-0.0.0.0] quit
    [*PE1-ospf-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] ospf enable 1 area 0
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ospf enable 1 area 0
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ospf enable 1 area 0
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] ospf 1
    [*P1-ospf-1] opaque-capability enable
    [*P1-ospf-1] area 0
    [*P1-ospf-1-area-0.0.0.0] quit
    [*P1-ospf-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] ospf enable 1 area 0
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ospf enable 1 area 0
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ospf enable 1 area 0
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] ospf 1
    [*PE2-ospf-1] opaque-capability enable
    [*PE2-ospf-1] area 0
    [*PE2-ospf-1-area-0.0.0.0] quit
    [*PE2-ospf-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] ospf enable 1 area 0
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ospf enable 1 area 0
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ospf enable 1 area 0
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] ospf 1
    [*P2-ospf-1] opaque-capability enable
    [*P2-ospf-1] area 0
    [*P2-ospf-1-area-0.0.0.0] quit
    [*P2-ospf-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] ospf enable 1 area 0
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ospf enable 1 area 0
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ospf enable 1 area 0
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions on the backbone network.

    Because MPLS is automatically enabled on interfaces where OSPF has been enabled, you do not need to configure MPLS on these interfaces.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Configure SR on the backbone network and enable TI-LFA FRR.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] ospf 1
    [*PE1-ospf-1] segment-routing mpls
    [*PE1-ospf-1] segment-routing global-block 16000 23999

    The available SRGB range varies according to the actual device capability. The range used here is an example only.

    [*PE1-ospf-1] frr
    [*PE1-ospf-1-frr] loop-free-alternate
    [*PE1-ospf-1-frr] ti-lfa enable
    [*PE1-ospf-1-frr] quit
    [*PE1-ospf-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] ospf prefix-sid index 10
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] quit
    [*P1] ospf 1
    [*P1-ospf-1] segment-routing mpls
    [*P1-ospf-1] segment-routing global-block 16000 23999

    The available SRGB range varies according to the actual device capability. The range used here is an example only.

    [*P1-ospf-1] frr
    [*P1-ospf-1-frr] loop-free-alternate
    [*P1-ospf-1-frr] ti-lfa enable
    [*P1-ospf-1-frr] quit
    [*P1-ospf-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] ospf prefix-sid index 20
    [*P1-LoopBack1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] ospf 1
    [*PE2-ospf-1] segment-routing mpls
    [*PE2-ospf-1] segment-routing global-block 16000 23999

    The available SRGB range varies according to the actual device capability. The range used here is an example only.

    [*PE2-ospf-1] frr
    [*PE2-ospf-1-frr] loop-free-alternate
    [*PE2-ospf-1-frr] ti-lfa enable
    [*PE2-ospf-1-frr] quit
    [*PE2-ospf-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] ospf prefix-sid index 30
    [*PE2-LoopBack1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] quit
    [*P2] ospf 1
    [*P2-ospf-1] segment-routing mpls
    [*P2-ospf-1] segment-routing global-block 16000 23999

    The available SRGB range varies according to the actual device capability. The range used here is an example only.

    [*P2-ospf-1] frr
    [*P2-ospf-1-frr] loop-free-alternate
    [*P2-ospf-1-frr] ti-lfa enable
    [*P2-ospf-1-frr] quit
    [*P2-ospf-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] ospf prefix-sid index 40
    [*P2-LoopBack1] quit
    [*P2] commit
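    In SR-MPLS, each node derives the outgoing label for a remote prefix SID from its local SRGB: label = SRGB base + prefix SID index. The following minimal Python sketch uses the SRGB start value (16000) and the prefix SID indexes configured above; it is an aid to understanding the label arithmetic, not a model of actual device behavior.

    ```python
    # Label derivation in SR-MPLS: label = SRGB base + prefix SID index.
    # SRGB range and indexes are taken from the configuration above.
    SRGB_BASE = 16000
    SRGB_END = 23999

    PREFIX_SID_INDEX = {
        "PE1": 10,  # 1.1.1.9/32
        "P1": 20,   # 2.2.2.9/32
        "PE2": 30,  # 3.3.3.9/32
        "P2": 40,   # 4.4.4.9/32
    }

    def sid_to_label(index: int) -> int:
        """Map a prefix SID index into the SRGB; reject out-of-range indexes."""
        label = SRGB_BASE + index
        if not SRGB_BASE <= label <= SRGB_END:
            raise ValueError(f"index {index} falls outside SRGB [{SRGB_BASE}, {SRGB_END}]")
        return label

    labels = {node: sid_to_label(i) for node, i in PREFIX_SID_INDEX.items()}
    print(labels)  # {'PE1': 16010, 'P1': 16020, 'PE2': 16030, 'P2': 16040}
    ```

    Because every node in this example uses the same SRGB, all nodes compute the same label for a given prefix SID, which is why the indexes (not absolute labels) are what get advertised.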

    # After the configuration is complete, run the display tunnel-info all command on each PE. The command output shows that the SR LSPs have been established. The following example uses the command output on PE1.

    [~PE1] display tunnel-info all
    Tunnel ID            Type                Destination                             Status
    ----------------------------------------------------------------------------------------
    0x000000002900000003 srbe-lsp            4.4.4.9                                 UP  
    0x000000002900000004 srbe-lsp            2.2.2.9                                 UP  
    0x000000002900000005 srbe-lsp            3.3.3.9                                 UP 

    # Run a ping test on PE1 to check SR LSP connectivity. For example:

    [~PE1] ping lsp segment-routing ip 3.3.3.9 32 version draft2
      LSP PING FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.9/32 : 100  data bytes, press CTRL_C to break
        Reply from 3.3.3.9: bytes=100 Sequence=1 time=256 ms
        Reply from 3.3.3.9: bytes=100 Sequence=2 time=3 ms
        Reply from 3.3.3.9: bytes=100 Sequence=3 time=4 ms
        Reply from 3.3.3.9: bytes=100 Sequence=4 time=4 ms
        Reply from 3.3.3.9: bytes=100 Sequence=5 time=4 ms
    
      --- FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.9/32 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 3/54/256 ms
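    Step 4 also enables TI-LFA FRR, which precomputes a backup path that remains loop-free after a failure. The underlying loop-free alternate condition (RFC 5286) requires that a backup neighbor N, used to protect node S's traffic toward destination D, satisfy dist(N, D) < dist(N, S) + dist(S, D), so that N cannot loop the traffic back through S. A minimal sketch of this check, with hypothetical metrics not taken from this example:

    ```python
    # Basic loop-free alternate (LFA) condition: neighbor N is a valid backup
    # for S's traffic to D only if N's shortest path to D does not pass back
    # through S, i.e. dist(N, D) < dist(N, S) + dist(S, D).

    def is_loop_free(dist_n_d: int, dist_n_s: int, dist_s_d: int) -> bool:
        return dist_n_d < dist_n_s + dist_s_d

    # Hypothetical metrics: N reaches D at cost 15, reaches S at cost 10,
    # and S reaches D at cost 10. 15 < 10 + 10, so N is loop-free.
    print(is_loop_free(15, 10, 10))  # True

    # If N's only path to D went back through S (cost 10 + 10 = 20),
    # the inequality fails and N would loop traffic.
    print(is_loop_free(20, 10, 10))  # False
    ```

    TI-LFA extends this idea by using segment lists to steer traffic onto the post-failure path even when no neighbor satisfies the plain LFA inequality.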

  5. Establish an MP-IBGP peer relationship between the PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] ipv4-family vpnv4
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable
    [*PE1-bgp-af-vpnv4] commit
    [~PE1-bgp-af-vpnv4] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [~PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv4-family vpnv4
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
    [*PE2-bgp-af-vpnv4] commit
    [~PE2-bgp-af-vpnv4] quit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp peer or display bgp vpnv4 all peer command on each PE to check whether a BGP peer relationship has been established between the PEs. If the Established state is displayed in the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        5        5     0     00:00:12   Established   0
    [~PE1] display bgp vpnv4 all peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100   12      18         0     00:09:38   Established   1

  6. Configure VPN instances in the IPv4 address family on each PE and connect each PE to a CE.

    # Configure PE1.

    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit
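    The RD (100:1 on PE1, 200:1 on PE2) prefixes each customer route so that overlapping prefixes from different VPNs or sites remain distinct in the VPNv4 table, while the VPN target (111:1) controls which PEs import the routes. A toy model of how RDs keep identical prefixes apart (the table and function names are illustrative only):

    ```python
    # Toy VPNv4 table: routes are keyed by (RD, prefix), so the same customer
    # prefix advertised under different RDs never collides.
    vpnv4_table = {}

    def install(rd: str, prefix: str, next_hop: str) -> None:
        vpnv4_table[(rd, prefix)] = next_hop

    # Two VPN sites could both use 10.1.1.0/24; distinct RDs keep both routes.
    install("100:1", "10.1.1.0/24", "1.1.1.9")
    install("200:1", "10.1.1.0/24", "3.3.3.9")
    assert len(vpnv4_table) == 2  # both routes coexist
    ```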

    # Assign an IP address to each interface on CEs as shown in Figure 1-2655. The detailed configuration procedure is not provided here. For details, see "Configuration Files".

    After the configuration is complete, run the display ip vpn-instance verbose command on the PEs to check VPN instance configurations. Check that each PE can successfully ping its connected CE.

    If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-name -a source-ip-address dest-ip-address command to ping the CE that is connected to the remote PE. If the source IP address is not specified, the ping operation may fail.

  7. Configure a tunnel policy on each PE to preferentially select an SR LSP.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-lsp load-balance-number 2
    [*PE1-tunnel-policy-p1] quit
    [*PE1] commit
    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq sr-lsp load-balance-number 2
    [*PE2-tunnel-policy-p1] quit
    [*PE2] commit
    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] commit
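    With load-balance-number 2, VPN traffic to a remote PE can be distributed across up to two SR LSPs, typically per flow so that all packets of one flow stay on one tunnel and arrive in order. The hash below is purely illustrative of that per-flow stickiness; the device's actual hardware load-balancing algorithm is not described in this document, and the tunnel names are hypothetical.

    ```python
    # Illustrative per-flow load balancing over two tunnels: hashing the
    # 5-tuple guarantees that the same flow always maps to the same tunnel.
    import hashlib

    TUNNELS = ["srbe-lsp-1", "srbe-lsp-2"]  # hypothetical tunnel names

    def pick_tunnel(src: str, dst: str, proto: str, sport: int, dport: int) -> str:
        key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
        digest = hashlib.sha256(key).digest()
        return TUNNELS[digest[0] % len(TUNNELS)]

    # Packets of the same flow always map to the same tunnel.
    t1 = pick_tunnel("10.11.1.1", "10.22.2.2", "tcp", 49152, 443)
    t2 = pick_tunnel("10.11.1.1", "10.22.2.2", "tcp", 49152, 443)
    assert t1 == t2
    ```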

  8. Establish EBGP peer relationships between the PEs and CEs.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ip address 10.11.1.1 32
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.1.1.2 as-number 100
    [*CE1-bgp] network 10.11.1.1 32
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to that of CE1 and is not provided here. For details, see "Configuration Files".

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv4-family vpn-instance vpna
    [*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
    [*PE1-bgp-vpna] commit
    [~PE1-bgp-vpna] quit
    [~PE1-bgp] quit

    The procedure for configuring PE2 is similar to the procedure for configuring PE1, and the detailed configuration is not provided here. For details, see "Configuration Files".

    After the configuration is complete, run the display bgp vpnv4 vpn-instance vpna peer command on the PEs. The command output shows that the BGP peer relationships between the PEs and CEs have been established and are in the Established state.

    In the following example, the peer relationship between PE1 and CE1 is used.

    [~PE1] display bgp vpnv4 vpn-instance vpna peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
    
     VPN-Instance vpna, Router ID 1.1.1.9:
     Total number of peers : 1            Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      10.1.1.1        4   65410  19     18         0     00:12:39   Established  1

  9. Verify the configuration.

    # Run the display ip routing-table vpn-instance command on each PE to view the routes to CEs' loopback interfaces.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table: vpna
             Destinations : 7        Routes : 7
    Destination/Mask    Proto  Pre  Cost     Flags NextHop         Interface
         10.1.1.0/24    Direct 0    0        D     10.1.1.2        GigabitEthernet2/0/0
         10.1.1.2/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.1.1.255/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.11.1.1/32     EBGP   255  0        RD    10.1.1.1        GigabitEthernet2/0/0
       10.22.2.2/32     IBGP   255  0        RD    3.3.3.9         GigabitEthernet1/0/0
                        IBGP   255  0        RD    3.3.3.9         GigabitEthernet3/0/0
    255.255.255.255/32  Direct 0    0        D     127.0.0.1       InLoopBack0

    CEs within the same VPN can ping each other. For example, CE1 successfully pings CE2 at 10.22.2.2.

    [~CE1] ping -a 10.11.1.1 10.22.2.2
      PING 10.22.2.2: 56  data bytes, press CTRL_C to break
        Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=252 time=428 ms
        Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=252 time=4 ms
        Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=252 time=5 ms
        Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=252 time=3 ms
        Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=252 time=4 ms
    
      --- 10.22.2.2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 3/88/428 ms

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 100:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.1 255.255.255.0
     ospf enable 1 area 0.0.0.0  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.1.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 172.16.1.1 255.255.255.0
     ospf enable 1 area 0.0.0.0  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     ospf enable 1 area 0.0.0.0  
     ospf prefix-sid index 10
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 3.3.3.9 enable
     #              
     ipv4-family vpn-instance vpna
      peer 10.1.1.1 as-number 65410
    #               
    ospf 1          
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 16000 23999
     frr
      loop-free-alternate
      ti-lfa enable
     area 0.0.0.0
    #
    tunnel-policy p1
     tunnel select-seq sr-lsp load-balance-number 2
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.16.1.2 255.255.255.0
     ospf enable 1 area 0.0.0.0
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.17.1.1 255.255.255.0
     ospf enable 1 area 0.0.0.0 
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     ospf enable 1 area 0.0.0.0  
     ospf prefix-sid index 20
    #               
    ospf 1          
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 16000 23999
     frr
      loop-free-alternate
      ti-lfa enable
     area 0.0.0.0
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 200:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    segment-routing
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.19.1.2 255.255.255.0
     ospf enable 1 area 0.0.0.0  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.2.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 172.17.1.2 255.255.255.0
     ospf enable 1 area 0.0.0.0  
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     ospf enable 1 area 0.0.0.0  
     ospf prefix-sid index 30
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 1.1.1.9 enable
     #              
     ipv4-family vpn-instance vpna
      peer 10.2.1.1 as-number 65420
    #               
    ospf 1          
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 16000 23999
     frr
      loop-free-alternate
      ti-lfa enable
     area 0.0.0.0
    #
    tunnel-policy p1
     tunnel select-seq sr-lsp load-balance-number 2
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.2 255.255.255.0
     ospf enable 1 area 0.0.0.0  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.19.1.1 255.255.255.0
     ospf enable 1 area 0.0.0.0 
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     ospf enable 1 area 0.0.0.0  
     ospf prefix-sid index 40
    #               
    ospf 1          
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 16000 23999
     frr
      loop-free-alternate
      ti-lfa enable
     area 0.0.0.0
    #
    return
  • CE1 configuration file

    #
     sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.11.1.1 255.255.255.255
    #
    bgp 65410
     peer 10.1.1.2 as-number 100
     network 10.11.1.1 255.255.255.255
     #
     ipv4-family unicast
      peer 10.1.1.2 enable
    #
    return
  • CE2 configuration file

    #
     sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.22.2.2 255.255.255.255
    #
    bgp 65420
     peer 10.2.1.2 as-number 100
     network 10.22.2.2 255.255.255.255
     #
     ipv4-family unicast
      peer 10.2.1.2 enable
    #
    return

Example for Configuring Path Calculation Based on Different Flex-Algos in L3VPN over OSPF SR-MPLS Flex-Algo LSP Scenarios

This section provides an example for configuring OSPF SR-MPLS Flex-Algo LSPs to meet the path customization requirements of L3VPN users.

Networking Requirements

In Figure 1-2656:
  • CE1 and CE2 belong to a VPN instance named vpna.

  • The VPN target used by vpna is 111:1.

To ensure secure communication between VPN users, configure L3VPN over OSPF SR-MPLS Flex-Algo LSP. Although multiple links exist between PE1 and PE2, service traffic must be forwarded over the specified path PE1 <-> P1 <-> PE2.

In this example, different Flex-Algos are defined to meet the service requirements of vpna.
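A Flex-Algo definition (FAD) constrains path computation with link affinities: each named administrative group is mapped to a bit position (path-constraint affinity-mapping), a link's named groups form a bitmask, and the FAD's include/exclude rules are evaluated against that mask. The sketch below models an "include-all" check under assumed 1-based bit positions (green = 1, red = 9, matching the mapping used in the companion IS-IS example); it is a conceptual model, not device logic.

```python
# Model of Flex-Algo affinity matching: admin-group names map to bit
# positions, and an "include-all" constraint requires every listed
# group's bit to be set on the link.
AFFINITY_BITS = {"green": 1, "red": 9}  # assumed 1-based bit positions

def group_mask(names) -> int:
    mask = 0
    for name in names:
        mask |= 1 << (AFFINITY_BITS[name] - 1)
    return mask

def include_all_ok(link_groups, required_groups) -> bool:
    """True if the link carries every required admin group."""
    link = group_mask(link_groups)
    need = group_mask(required_groups)
    return link & need == need

# A FAD with "affinity include-all green" admits only green links.
assert include_all_ok(["green"], ["green"]) is True
assert include_all_ok(["red"], ["green"]) is False
assert include_all_ok(["green", "red"], ["green"]) is True
```

Links that fail the check are pruned from the topology before the Flex-Algo SPF runs, which is how traffic is confined to the PE1 <-> P1 <-> PE2 path.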

Figure 1-2656 Networking for path calculation based on different Flex-Algos in L3VPN over OSPF SR-MPLS Flex-Algo LSP scenarios

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Configuration Notes

During the configuration, note the following:

After a VPN instance is bound to a PE interface connected to a CE, Layer 3 configurations on this interface, such as IP address and routing protocol configurations, are automatically deleted. Add these configurations again if necessary.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IP addresses for interfaces.

  2. Configure OSPF on the backbone network to ensure that PEs can interwork with each other.

  3. Enable MPLS on the backbone network.

  4. Configure FADs.
  5. Configure SR and enable OSPF to advertise Flex-Algos for the establishment of Flex-Algo LSPs and common SR LSPs.

  6. Configure the color extended community attribute for routes on PEs. This example uses the export policy to set the color extended community attribute for route advertisement, but you can alternatively use the import policy.
  7. Configure MP-IBGP on PEs to exchange routing information.

  8. Create a VPN instance and enable the IPv4 address family on each PE. Then, bind each PE's interface connected to a CE to the corresponding VPN instance.
  9. Configure the mapping between the color extended community attribute and Flex-Algo.
  10. Configure a tunnel policy for each PE to use Flex-Algo LSPs as the preferred tunnels.
  11. Establish an EBGP peer relationship between each pair of a CE and a PE.
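Roadmap steps 6 and 9 work together to steer VPN routes onto Flex-Algo LSPs: the export policy stamps routes with a color extended community, and the ingress PE maps that color to a Flex-Algo ID when recursing the route to a tunnel. A minimal model of that recursion decision follows; the mapping value mirrors this example, but the function and its fallback string are illustrative, not a device API.

```python
# Illustrative color -> Flex-Algo steering: a VPN route carrying color
# 0:100 recurses to the Flex-Algo 128 LSP toward its BGP next hop.
COLOR_TO_FLEX_ALGO = {"0:100": 128}  # mapping configured in this example

def select_lsp(route_color: str, next_hop: str) -> str:
    algo = COLOR_TO_FLEX_ALGO.get(route_color)
    if algo is None:
        return f"sr-lsp to {next_hop}"          # no mapping: common SR LSP
    return f"flex-algo-{algo} lsp to {next_hop}"

assert select_lsp("0:100", "3.3.3.9") == "flex-algo-128 lsp to 3.3.3.9"
```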

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs on the PEs and Ps

  • VPN target and RD of vpna

  • SRGB ranges on PEs and Ps

Procedure

  1. Configure IP addresses for interfaces.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 172.18.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ip address 172.16.1.1 24
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 172.16.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 172.17.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 172.19.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 172.17.1.2 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 172.18.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 172.19.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP on the backbone network for the PEs and Ps to communicate. OSPF is used as an example.

    # Configure PE1.

    [~PE1] ospf 1
    [*PE1-ospf-1] opaque-capability enable
    [*PE1-ospf-1] area 0
    [*PE1-ospf-1-area-0.0.0.0] quit
    [*PE1-ospf-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] ospf enable 1 area 0
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ospf enable 1 area 0
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ospf enable 1 area 0
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] ospf 1
    [*P1-ospf-1] opaque-capability enable
    [*P1-ospf-1] area 0
    [*P1-ospf-1-area-0.0.0.0] quit
    [*P1-ospf-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] ospf enable 1 area 0
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ospf enable 1 area 0
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ospf enable 1 area 0
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] ospf 1
    [*PE2-ospf-1] opaque-capability enable
    [*PE2-ospf-1] area 0
    [*PE2-ospf-1-area-0.0.0.0] quit
    [*PE2-ospf-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] ospf enable 1 area 0
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ospf enable 1 area 0
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ospf enable 1 area 0
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] ospf 1
    [*P2-ospf-1] opaque-capability enable
    [*P2-ospf-1] area 0
    [*P2-ospf-1-area-0.0.0.0] quit
    [*P2-ospf-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] ospf enable 1 area 0
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ospf enable 1 area 0
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ospf enable 1 area 0
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions on the backbone network.

    Because MPLS is automatically enabled on an interface where OSPF has been enabled, you do not need to configure MPLS on such an interface.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Configure FADs.

    # Configure PE1.

    [~PE1] flex-algo identifier 128 
    [*PE1-flex-algo-128] priority 100   
    [*PE1-flex-algo-128] metric-type igp
    [*PE1-flex-algo-128] quit
    [*PE1] flex-algo identifier 129     
    [*PE1-flex-algo-129] priority 100
    [*PE1-flex-algo-129] metric-type igp
    [*PE1-flex-algo-129] quit
    [*PE1] commit

    # Configure P1.

    [~P1] flex-algo identifier 128 
    [*P1-flex-algo-128] priority 100   
    [*P1-flex-algo-128] metric-type igp
    [*P1-flex-algo-128] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] flex-algo identifier 128 
    [*PE2-flex-algo-128] priority 100   
    [*PE2-flex-algo-128] metric-type igp
    [*PE2-flex-algo-128] quit
    [*PE2] flex-algo identifier 129     
    [*PE2-flex-algo-129] priority 100
    [*PE2-flex-algo-129] metric-type igp
    [*PE2-flex-algo-129] quit
    [*PE2] commit

    # Configure P2.

    [~P2] flex-algo identifier 129     
    [*P2-flex-algo-129] priority 100
    [*P2-flex-algo-129] metric-type igp
    [*P2-flex-algo-129] quit
    [*P2] commit
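
    The priority configured in each FAD above determines which router's definition takes effect: among all FADs advertised for the same algorithm, the one with the highest priority wins, and ties are broken by the highest originating router ID (per RFC 9350). The following Python sketch illustrates this election rule only; the router IDs and priorities in it are illustrative, not device output.

    ```python
    # Minimal sketch of Flex-Algo definition (FAD) election per RFC 9350:
    # highest priority wins; ties are broken by the highest router ID.
    def elect_fad(fads):
        """fads: list of (priority, router_id, metric_type) tuples."""
        return max(fads, key=lambda f: (f[0], f[1]))

    advertised = [
        (100, "1.1.1.9", "igp"),   # e.g. PE1's FAD for algorithm 128
        (100, "2.2.2.9", "igp"),   # e.g. P1's FAD for algorithm 128
    ]
    winner = elect_fad(advertised)
    # Equal priorities: the FAD from the higher router ID (2.2.2.9) wins.
    ```

    Because all devices in this example configure the same priority (100) and metric type (igp) for each algorithm, the election outcome does not change the computed topology here; the priority matters when FADs differ across routers.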

  5. Configure SR on the backbone network and enable OSPF to advertise Flex-Algos.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] ospf 1
    [*PE1-ospf-1] segment-routing mpls
    [*PE1-ospf-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*PE1-ospf-1] flex-algo 128
    [*PE1-ospf-1] flex-algo 129
    [*PE1-ospf-1] advertise link-attributes application flex-algo
    [*PE1-ospf-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] ospf prefix-sid index 10
    [*PE1-LoopBack1] ospf prefix-sid index 110 flex-algo 128
    [*PE1-LoopBack1] ospf prefix-sid index 150 flex-algo 129
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] quit
    [*P1] ospf 1
    [*P1-ospf-1] segment-routing mpls
    [*P1-ospf-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*P1-ospf-1] flex-algo 128
    [*P1-ospf-1] advertise link-attributes application flex-algo
    [*P1-ospf-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] ospf prefix-sid index 20
    [*P1-LoopBack1] ospf prefix-sid index 220 flex-algo 128
    [*P1-LoopBack1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] ospf 1
    [*PE2-ospf-1] segment-routing mpls
    [*PE2-ospf-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*PE2-ospf-1] flex-algo 128
    [*PE2-ospf-1] flex-algo 129
    [*PE2-ospf-1] advertise link-attributes application flex-algo
    [*PE2-ospf-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] ospf prefix-sid index 30
    [*PE2-LoopBack1] ospf prefix-sid index 330 flex-algo 128
    [*PE2-LoopBack1] ospf prefix-sid index 390 flex-algo 129
    [*PE2-LoopBack1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] quit
    [*P2] ospf 1
    [*P2-ospf-1] segment-routing mpls
    [*P2-ospf-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*P2-ospf-1] flex-algo 129
    [*P2-ospf-1] advertise link-attributes application flex-algo
    [*P2-ospf-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] ospf prefix-sid index 40
    [*P2-LoopBack1] ospf prefix-sid index 440 flex-algo 129
    [*P2-LoopBack1] quit
    [*P2] commit

    # After completing the configuration, run the display tunnel-info all command on each PE. The command output shows that both SR-MPLS BE LSPs and Flex-Algo LSPs have been established. The following example uses the command outputs on PE1 and PE2.

    [~PE1] display tunnel-info all
    Tunnel ID            Type                Destination                             Status                                             
    ----------------------------------------------------------------------------------------                                            
    0x000000002900000003 srbe-lsp            2.2.2.9                                 UP                                                 
    0x000000002900000005 srbe-lsp            4.4.4.9                                 UP                                                 
    0x000000002900000006 srbe-lsp            3.3.3.9                                 UP                                                 
    0x000000009300000009 flex-algo-lsp       3.3.3.9                                 UP                                                 
    0x00000000930000000a flex-algo-lsp       3.3.3.9                                 UP                                                 
    0x00000000930000000b flex-algo-lsp       2.2.2.9                                 UP                                                 
    0x00000000930000000c flex-algo-lsp       4.4.4.9                                 UP 
    [~PE2] display tunnel-info all
    Tunnel ID            Type                Destination                             Status                                             
    ----------------------------------------------------------------------------------------                                            
    0x000000002900000004 srbe-lsp            2.2.2.9                                 UP                                                 
    0x000000002900000005 srbe-lsp            1.1.1.9                                 UP                                                 
    0x000000002900000006 srbe-lsp            4.4.4.9                                 UP                                                 
    0x00000000930000000b flex-algo-lsp       2.2.2.9                                 UP                                                 
    0x00000000930000000c flex-algo-lsp       4.4.4.9                                 UP                                                 
    0x00000000930000000d flex-algo-lsp       1.1.1.9                                 UP                                                 
    0x00000000930000000e flex-algo-lsp       1.1.1.9                                 UP 

    # Run the display segment-routing prefix mpls forwarding flex-algo command to check the Flex-Algo-based SR label forwarding table.

    [~PE1] display segment-routing prefix mpls forwarding flex-algo                                                                      
    
                       Segment Routing Prefix MPLS Forwarding Information                                                               
                 --------------------------------------------------------------                                                         
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit                                                         
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State      Flexalgo             
    -----------------------------------------------------------------------------------------------------------------------             
    1.1.1.9/32         16110      NULL       Loop1             127.0.0.1        E     ---       1500    Active       128                
    2.2.2.9/32         16220      3          GE3/0/0           172.16.1.2       I&T   ---       1500    Active       128                
    3.3.3.9/32         16330      16330      GE3/0/0           172.16.1.2       I&T   ---       1500    Active       128                
    1.1.1.9/32         16150      NULL       Loop1             127.0.0.1        E     ---       1500    Active       129                
    3.3.3.9/32         16390      16390      GE1/0/0           172.18.1.2       I&T   ---       1500    Active       129                
    4.4.4.9/32         16440      3          GE1/0/0           172.18.1.2       I&T   ---       1500    Active       129                
    
    Total information(s): 6 
    [~P1] display segment-routing prefix mpls forwarding flex-algo
    
                       Segment Routing Prefix MPLS Forwarding Information                                                               
                 --------------------------------------------------------------                                                         
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit                                                         
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State      Flexalgo             
    -----------------------------------------------------------------------------------------------------------------------             
    1.1.1.9/32         16110      3          GE1/0/0           172.16.1.1       I&T   ---       1500    Active       128                
    2.2.2.9/32         16220      NULL       Loop1             127.0.0.1        E     ---       1500    Active       128                
    3.3.3.9/32         16330      3          GE2/0/0           172.17.1.2       I&T   ---       1500    Active       128                
    
    Total information(s): 3 
    [~P2] display segment-routing prefix mpls forwarding flex-algo
    
                       Segment Routing Prefix MPLS Forwarding Information                                                               
                 --------------------------------------------------------------                                                         
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit                                                         
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State      Flexalgo             
    -----------------------------------------------------------------------------------------------------------------------             
    1.1.1.9/32         16150      3          GE1/0/0           172.18.1.1       I&T   ---       1500    Active       129                
    3.3.3.9/32         16390      3          GE2/0/0           172.19.1.2       I&T   ---       1500    Active       129                
    4.4.4.9/32         16440      NULL       Loop1             127.0.0.1        E     ---       1500    Active       129                
    
    Total information(s): 3  
    [~PE2] display segment-routing prefix mpls forwarding flex-algo
    
                       Segment Routing Prefix MPLS Forwarding Information                                                               
                 --------------------------------------------------------------                                                         
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit                                                         
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State      Flexalgo             
    -----------------------------------------------------------------------------------------------------------------------             
    1.1.1.9/32         16110      16110      GE3/0/0           172.17.1.1       I&T   ---       1500    Active       128                
    2.2.2.9/32         16220      3          GE3/0/0           172.17.1.1       I&T   ---       1500    Active       128                
    3.3.3.9/32         16330      NULL       Loop1             127.0.0.1        E     ---       1500    Active       128                
    1.1.1.9/32         16150      16150      GE1/0/0           172.19.1.1       I&T   ---       1500    Active       129                
    3.3.3.9/32         16390      NULL       Loop1             127.0.0.1        E     ---       1500    Active       129                
    4.4.4.9/32         16440      3          GE1/0/0           172.19.1.1       I&T   ---       1500    Active       129                
    
    Total information(s): 6  
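
    The labels in these forwarding tables follow directly from the configuration: each node derives a prefix SID label as the SRGB base plus the configured SID index. With the SRGB starting at 16000 as configured in this example, PE1's Flex-Algo 128 SID index 110 yields label 16110. A minimal sketch of this arithmetic:

    ```python
    # Sketch of SR-MPLS label derivation: label = SRGB base + prefix SID index.
    # The SRGB base (16000) and SID indexes match this example's configuration.
    SRGB_BASE = 16000

    def sid_to_label(index, base=SRGB_BASE):
        # A valid index must fall within the SRGB range (16000-23999 here).
        if not 0 <= index < 23999 - base + 1:
            raise ValueError("SID index outside the configured SRGB")
        return base + index

    assert sid_to_label(110) == 16110   # PE1, Flex-Algo 128
    assert sid_to_label(330) == 16330   # PE2, Flex-Algo 128
    assert sid_to_label(440) == 16440   # P2, Flex-Algo 129
    ```

    An OutLabel of 3 in the tables denotes the implicit-null label, meaning the penultimate hop pops the label before forwarding to the egress node.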

  6. Configure route-policies.

    # Configure PE1.

    [~PE1] route-policy color100 permit node 1
    [*PE1-route-policy] apply extcommunity color 0:100
    [*PE1-route-policy] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] route-policy color100 permit node 1
    [*PE2-route-policy] apply extcommunity color 0:100
    [*PE2-route-policy] quit
    [*PE2] commit

  7. Establish an MP-IBGP peer relationship between PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] ipv4-family vpnv4
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 route-policy color100 export
    [*PE1-bgp-af-vpnv4] commit
    [~PE1-bgp-af-vpnv4] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [~PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv4-family vpnv4
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 route-policy color100 export
    [*PE2-bgp-af-vpnv4] commit
    [~PE2-bgp-af-vpnv4] quit
    [~PE2-bgp] quit

    After completing the configuration, run the display bgp peer or display bgp vpnv4 all peer command on each PE to check whether a BGP peer relationship has been established between the PEs. If the Established state is displayed in the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        2        6     0     00:00:12   Established   0
    [~PE1] display bgp vpnv4 all peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100   12      18         0     00:09:38   Established   0

  8. Create a VPN instance and enable the IPv4 address family on each PE. Then, bind each PE's interface connected to a CE to the corresponding VPN instance.

    # Configure PE1.

    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    # Assign an IP address to each interface on CEs as shown in Figure 1-2656. The detailed configuration procedure is not provided here. For configuration details, see the configuration files.

    After completing the configuration, run the display ip vpn-instance verbose command on each PE to check the VPN instance configuration. Then check that each PE can successfully ping its connected CE.

    If a PE has multiple interfaces bound to the same VPN instance, use the -a source-ip-address parameter to specify a source IP address when running the ping -vpn-instance vpn-instance-name -a source-ip-address dest-ip-address command to ping the CE that is connected to the remote PE. If the source IP address is not specified, the ping operation may fail.

  9. Configure the mapping between the color extended community attribute and Flex-Algo.

    # Configure PE1.

    [~PE1] flex-algo color-mapping
    [*PE1-flex-algo-color-mapping] color 100 flex-algo 128 
    [*PE1-flex-algo-color-mapping] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] flex-algo color-mapping
    [*PE2-flex-algo-color-mapping] color 100 flex-algo 128 
    [*PE2-flex-algo-color-mapping] quit
    [*PE2] commit

  10. Configure a tunnel policy for each PE to use Flex-Algo LSPs as the preferred tunnels.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq flex-algo-lsp load-balance-number 1 unmix 
    [*PE1-tunnel-policy-p1] quit
    [*PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq flex-algo-lsp load-balance-number 1 unmix
    [*PE2-tunnel-policy-p1] quit
    [*PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] commit
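
    Steps 6, 9, and 10 work together: the export route-policy attaches color 0:100 to VPN routes, the color mapping resolves color 100 to Flex-Algo 128, and the tunnel policy makes the colored routes recurse to the matching Flex-Algo LSP. The following Python sketch illustrates this recursion logic; it is an illustrative model, not the device implementation, and the tunnel names in it are hypothetical.

    ```python
    # Illustrative model (not the device implementation) of how a colored
    # VPN route recurses to a Flex-Algo LSP: the route's color extended
    # community is mapped to a Flex-Algo ID, and a tunnel to the BGP next
    # hop computed by that algorithm is then selected.
    COLOR_TO_ALGO = {100: 128}       # "color 100 flex-algo 128"

    # (next_hop, flex_algo) -> tunnel; hypothetical tunnel table entries
    TUNNELS = {("3.3.3.9", 128): "flex-algo-lsp to 3.3.3.9 (algo 128)"}

    def select_tunnel(next_hop, color):
        algo = COLOR_TO_ALGO.get(color)
        if algo is None:
            return None              # no mapping: handled per tunnel policy
        return TUNNELS.get((next_hop, algo))

    # PE1 resolving PE2's VPN route (next hop 3.3.3.9, color 100):
    assert select_tunnel("3.3.3.9", 100) == "flex-algo-lsp to 3.3.3.9 (algo 128)"
    ```

    The unmix keyword in the tunnel policy ensures that only Flex-Algo LSPs are used for recursion, preventing a fallback mix with other tunnel types.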

  11. Establish an EBGP peer relationship between each CE-PE pair.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ip address 10.11.1.1 32
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.1.1.2 as-number 100
    [*CE1-bgp] network 10.11.1.1 32
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see the configuration files.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv4-family vpn-instance vpna
    [*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
    [*PE1-bgp-vpna] commit
    [~PE1-bgp-vpna] quit
    [~PE1-bgp] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see the configuration files.

    After completing the configuration, run the display bgp vpnv4 vpn-instance peer command on the PEs to check whether BGP peer relationships have been established between the PEs and CEs. If the Established state is displayed in the command output, the BGP peer relationships have been established successfully.

    The following example uses the command output on PE1 to show that a peer relationship has been established between PE1 and CE1.

    [~PE1] display bgp vpnv4 vpn-instance vpna peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
    
     VPN-Instance vpna, Router ID 1.1.1.9:
     Total number of peers : 1            Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      10.1.1.1        4   65410  11     9          0     00:06:37   Established  1

  12. Verify the configuration.

    After completing the configuration, run the display ip routing-table vpn-instance command on each PE to check information about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table: vpna
             Destinations : 7        Routes : 7
    Destination/Mask    Proto  Pre  Cost     Flags NextHop         Interface
         10.1.1.0/24    Direct 0    0        D     10.1.1.2        GigabitEthernet2/0/0
         10.1.1.2/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.1.1.255/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.11.1.1/32     EBGP   255  0        RD    10.1.1.1        GigabitEthernet2/0/0
       10.22.2.2/32     IBGP   255  0        RD    3.3.3.9         GigabitEthernet3/0/0
         127.0.0.0/8    Direct 0    0        D     127.0.0.1       InLoopBack0 
    255.255.255.255/32  Direct 0    0        D     127.0.0.1       InLoopBack0

    CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at 10.22.2.2.

    [~CE1] ping -a 10.11.1.1 10.22.2.2
      PING 10.22.2.2: 56  data bytes, press CTRL_C to break                                                                             
        Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=252 time=5 ms                                                                     
        Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=252 time=3 ms                                                                     
        Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=252 time=3 ms                                                                     
        Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=252 time=3 ms                                                                     
        Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=252 time=4 ms                                                                     
    
      --- 10.22.2.2 ping statistics ---                                                                                                 
        5 packet(s) transmitted                                                                                                         
        5 packet(s) received                                                                                                            
        0.00% packet loss                                                                                                               
        round-trip min/avg/max = 3/3/5 ms

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 100:1
      tnl-policy p1
      apply-label per-instance
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #   
    flex-algo identifier 128
     priority 100
    #
    flex-algo identifier 129
     priority 100
    #
    flex-algo color-mapping
     color 100 flex-algo 128
    #            
    segment-routing 
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.1 255.255.255.0
     ospf enable 1 area 0  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.1.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 172.16.1.1 255.255.255.0
     ospf enable 1 area 0   
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     ospf enable 1 area 0  
     ospf prefix-sid index 10
     ospf prefix-sid index 110 flex-algo 128
     ospf prefix-sid index 150 flex-algo 129
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 3.3.3.9 enable
      peer 3.3.3.9 route-policy color100 export
     #              
     ipv4-family vpn-instance vpna
      peer 10.1.1.1 as-number 65410
    #               
    ospf 1          
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 16000 23999
     advertise link-attributes application flex-algo
     flex-algo 128
     flex-algo 129
     area 0.0.0.0
    #
    route-policy color100 permit node 1                                                                                                 
     apply extcommunity color 0:100
    #
    tunnel-policy p1
     tunnel select-seq flex-algo-lsp load-balance-number 1 unmix 
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls   
    #                                                                                                                                   
    flex-algo identifier 128                                                                                                            
     priority 100           
    #               
    segment-routing 
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.16.1.2 255.255.255.0
     ospf enable 1 area 0   
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.17.1.1 255.255.255.0
     ospf enable 1 area 0   
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     ospf enable 1 area 0  
     ospf prefix-sid index 20
     ospf prefix-sid index 220 flex-algo 128 
    #               
    ospf 1          
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 16000 23999
     advertise link-attributes application flex-algo
     flex-algo 128
     area 0.0.0.0
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 200:1
      tnl-policy p1
      apply-label per-instance
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    flex-algo identifier 128
     priority 100
    #
    flex-algo identifier 129
     priority 100
    #
    flex-algo color-mapping
     color 100 flex-algo 128
    #
    segment-routing 
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.19.1.2 255.255.255.0
     ospf enable 1 area 0   
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.2.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 172.17.1.2 255.255.255.0
     ospf enable 1 area 0   
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     ospf enable 1 area 0  
     ospf prefix-sid index 30
     ospf prefix-sid index 330 flex-algo 128                                                                                            
     ospf prefix-sid index 390 flex-algo 129
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 1.1.1.9 enable
      peer 1.1.1.9 route-policy color100 export
     #              
     ipv4-family vpn-instance vpna
      peer 10.2.1.1 as-number 65420
    #               
    ospf 1          
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 16000 23999
     advertise link-attributes application flex-algo
     flex-algo 128
     flex-algo 129
     area 0.0.0.0
    #
    route-policy color100 permit node 1                                                                                                 
     apply extcommunity color 0:100
    #
    tunnel-policy p1
     tunnel select-seq flex-algo-lsp load-balance-number 1 unmix 
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls   
    #                                                                                                                                   
    flex-algo identifier 129                                                                                                            
     priority 100           
    #               
    segment-routing 
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.2 255.255.255.0
     ospf enable 1 area 0   
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.19.1.1 255.255.255.0
     ospf enable 1 area 0   
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     ospf enable 1 area 0 
     ospf prefix-sid index 40
     ospf prefix-sid index 440 flex-algo 129 
    #               
    ospf 1          
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 16000 23999
     advertise link-attributes application flex-algo
     flex-algo 129
     area 0.0.0.0
    #
    return
  • CE1 configuration file

    #
     sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.11.1.1 255.255.255.255
    #
    bgp 65410
     peer 10.1.1.2 as-number 100
     network 10.11.1.1 255.255.255.255
     #
     ipv4-family unicast
      peer 10.1.1.2 enable
    #
    return
  • CE2 configuration file

    #
     sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.22.2.2 255.255.255.255
    #
    bgp 65420
     peer 10.2.1.2 as-number 100
     network 10.22.2.2 255.255.255.255
     #
     ipv4-family unicast
      peer 10.2.1.2 enable
    #
    return

Example for Configuring EVPN VPLS over SR-MPLS BE (EVPN Instance in BD Mode)

This section provides an example for configuring an SR-MPLS BE tunnel to carry EVPN VPLS services.

Networking Requirements

To allow different sites to communicate over the backbone network shown in Figure 1-2657, configure EVPN to achieve Layer 2 service transmission. If the sites belong to the same subnet, create an EVPN instance on each PE to store EVPN routes and implement Layer 2 forwarding based on MAC address matching. In this example, an SR-MPLS BE tunnel needs to be used to transmit services between the PEs.

Figure 1-2657 EVPN VPLS over SR-MPLS BE networking

Interface 1, interface 2, and sub-interface 1.1 in this example represent GE 1/0/0, GE 2/0/0, and GE 1/0/0.1, respectively.


Precautions

During the configuration process, note the following:

  • You are advised to use the local loopback address of each PE as the source address of that PE.

  • In this example, EVPN instances in BD mode need to be configured on the PEs. To achieve this, create a bridge domain (BD) on each PE and bind the BD to a specific sub-interface.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IP addresses for interfaces.

  2. Configure an IGP to enable PE1, PE2, and the P to communicate with each other.

  3. Configure an SR-MPLS BE tunnel on the backbone network.

  4. Configure an EVPN instance on each PE.

  5. Configure an EVPN source address on each PE.

  6. Configure Layer 2 Ethernet sub-interfaces connecting the PEs to CEs.

  7. Configure and apply a tunnel policy to enable EVPN service recursion to the SR-MPLS BE tunnel.

  8. Establish a BGP EVPN peer relationship between the PEs.

  9. Configure the CEs to communicate with the PEs.

Data Preparation

To complete the configuration, you need the following data:

  • EVPN instance name: evrf1

  • RDs (100:1 and 200:1) and RT (1:1) of the EVPN instance evrf1 on PE1 and PE2

Procedure

  1. Configure addresses for interfaces connecting the PEs and the P according to Figure 1-2657.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.1 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure the P.

    <HUAWEI> system-view
    [~HUAWEI] sysname P
    [*HUAWEI] commit
    [~P] interface loopback 1
    [*P-LoopBack1] ip address 2.2.2.2 32
    [*P-LoopBack1] quit
    [*P] interface gigabitethernet1/0/0
    [*P-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] ip address 10.2.1.1 24
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.3 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  2. Configure an IGP to enable PE1, PE2, and the P to communicate with each other. IS-IS is used as an example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] network-entity 00.1111.1111.1111.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface GigabitEthernet 2/0/0
    [*PE1-GigabitEthernet2/0/0] isis enable 1
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure the P.

    [~P] isis 1
    [*P-isis-1] is-level level-2
    [*P-isis-1] network-entity 00.1111.1111.2222.00
    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis enable 1
    [*P-LoopBack1] quit
    [*P] interface GigabitEthernet 1/0/0
    [*P-GigabitEthernet1/0/0] isis enable 1
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface GigabitEthernet 2/0/0
    [*P-GigabitEthernet2/0/0] isis enable 1
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] network-entity 00.1111.1111.3333.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface GigabitEthernet 2/0/0
    [*PE2-GigabitEthernet2/0/0] isis enable 1
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit
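
    Each network-entity value above combines an area address, a unique system ID, and a trailing NSEL of 00. As a hypothetical illustration of how these strings are composed (not a device command), the NETs used in this example break down as follows:

```python
def make_net(area: str, system_id: str) -> str:
    """Assemble an IS-IS network entity title (NET) in the form
    <area>.<system-id>.00. The trailing 00 is the NSEL, which is
    always 00 on a router."""
    return f"{area}.{system_id}.00"

# NETs used in this example: area 00, with a unique system ID per device.
assert make_net("00", "1111.1111.1111") == "00.1111.1111.1111.00"  # PE1
assert make_net("00", "1111.1111.2222") == "00.1111.1111.2222.00"  # P
assert make_net("00", "1111.1111.3333") == "00.1111.1111.3333.00"  # PE2
```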

    After completing the configurations, run the display isis peer command to check whether an IS-IS neighbor relationship has been established between PE1 and the P and between PE2 and the P. If the Up state is displayed in the command output, the neighbor relationship has been successfully established. You can run the display ip routing-table command to check that the PEs have learned the route to each other's loopback 1 interface.

    The following example uses the command output on PE1.

    [~PE1] display isis peer
                              Peer information for ISIS(1)
    
      System Id     Interface          Circuit Id        State HoldTime Type     PRI
    --------------------------------------------------------------------------------
    1111.1111.2222  GE2/0/0            1111.1111.2222.01  Up   8s       L2       64 
    
    Total Peer(s): 1
    [~PE1] display ip routing-table
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : _public_
             Destinations : 11       Routes : 11        
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface
    
            1.1.1.1/32  Direct  0    0             D   127.0.0.1       LoopBack1
            2.2.2.2/32  ISIS-L2 15   10            D   10.1.1.2        GigabitEthernet2/0/0
            3.3.3.3/32  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
           10.1.1.0/24  Direct  0    0             D   10.1.1.1        GigabitEthernet2/0/0
           10.1.1.1/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0
         10.1.1.255/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0
           10.2.1.0/24  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0
          127.0.0.1/32  Direct  0    0             D   127.0.0.1       InLoopBack0
    127.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0

  3. Configure basic MPLS functions on the backbone network.

    Because MPLS is automatically enabled on interfaces where IS-IS has been enabled, you do not need to configure MPLS on these interfaces.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.1
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure the P.

    [~P] mpls lsr-id 2.2.2.2
    [*P] mpls
    [*P-mpls] commit
    [~P-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.3
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

  4. Configure an SR-MPLS BE tunnel on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid absolute 153700
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure the P.

    [~P] segment-routing
    [*P-segment-routing] quit
    [*P] isis 1
    [*P-isis-1] cost-style wide
    [*P-isis-1] segment-routing mpls
    [*P-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis prefix-sid absolute 153710
    [*P-LoopBack1] quit
    [*P] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid absolute 153720
    [*PE2-LoopBack1] quit
    [*PE2] commit
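
    The absolute prefix SIDs configured above (153700, 153710, and 153720) must fall within the SRGB [153616, 153800]. If an index-form SID were used instead (isis prefix-sid index), the resulting label would be the SRGB base plus the index. A minimal sketch of this relationship, using the values from this example:

```python
SRGB_BASE, SRGB_END = 153616, 153800  # segment-routing global-block

def index_to_label(index: int) -> int:
    """Map a prefix SID index to an MPLS label within the SRGB."""
    label = SRGB_BASE + index
    if not SRGB_BASE <= label <= SRGB_END:
        raise ValueError(f"label {label} outside SRGB [{SRGB_BASE}, {SRGB_END}]")
    return label

# The absolute SIDs assigned to PE1, P, and PE2 all lie inside the SRGB,
# so each corresponds to a valid index relative to the SRGB base.
for label in (153700, 153710, 153720):
    assert index_to_label(label - SRGB_BASE) == label
```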

    # After completing the configurations, run the display tunnel-info all command on each PE. The command output shows that SR LSPs have been established. The following example uses the command output on PE1.

    [~PE1] display tunnel-info all         
    Tunnel ID            Type                Destination                             Status              
    ---------------------------------------------------------------------------------------- 
    0x000000002900000004 srbe-lsp            2.2.2.2                                 UP             
    0x000000002900000005 srbe-lsp            3.3.3.3                                 UP 
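
    If you collect this output in a script, the srbe-lsp entries can be checked by splitting each row into fields. A rough sketch assuming the four-column layout shown above (hypothetical helper, not part of the device software):

```python
def sr_be_tunnels(output: str) -> dict:
    """Parse 'display tunnel-info all' style output into a
    {destination: status} map for srbe-lsp entries."""
    tunnels = {}
    for line in output.splitlines():
        parts = line.split()
        # Data rows have exactly four fields: ID, type, destination, status.
        if len(parts) == 4 and parts[1] == "srbe-lsp":
            tunnels[parts[2]] = parts[3]
    return tunnels

sample = """Tunnel ID            Type                Destination      Status
----------------------------------------------------------------
0x000000002900000004 srbe-lsp            2.2.2.2          UP
0x000000002900000005 srbe-lsp            3.3.3.3          UP"""
assert sr_be_tunnels(sample) == {"2.2.2.2": "UP", "3.3.3.3": "UP"}
```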

    # Run the ping command on PE1 to check the SR LSP connectivity. For example:

    [~PE1] ping lsp segment-routing ip 3.3.3.3 32 version draft2                          
      LSP PING FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.3/32 : 100  data bytes, press CTRL_C to break      
        Reply from 3.3.3.3: bytes=100 Sequence=1 time=6 ms                               
        Reply from 3.3.3.3: bytes=100 Sequence=2 time=3 ms                                 
        Reply from 3.3.3.3: bytes=100 Sequence=3 time=3 ms                                      
        Reply from 3.3.3.3: bytes=100 Sequence=4 time=3 ms                                        
        Reply from 3.3.3.3: bytes=100 Sequence=5 time=3 ms
      --- FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.3/32 ping statistics ---                 
        5 packet(s) transmitted               
        5 packet(s) received   
        0.00% packet loss 
        round-trip min/avg/max = 3/3/6 ms 

  5. Configure an EVPN instance on each PE.

    # Configure PE1.

    [~PE1] evpn vpn-instance evrf1 bd-mode
    [*PE1-evpn-instance-evrf1] route-distinguisher 100:1
    [*PE1-evpn-instance-evrf1] vpn-target 1:1
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] bridge-domain 10
    [*PE1-bd10] evpn binding vpn-instance evrf1
    [*PE1-bd10] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] evpn vpn-instance evrf1 bd-mode
    [*PE2-evpn-instance-evrf1] route-distinguisher 200:1
    [*PE2-evpn-instance-evrf1] vpn-target 1:1
    [*PE2-evpn-instance-evrf1] quit
    [*PE2] bridge-domain 10
    [*PE2-bd10] evpn binding vpn-instance evrf1
    [*PE2-bd10] quit
    [*PE2] commit

  6. Configure an EVPN source address on each PE.

    # Configure PE1.

    [~PE1] evpn source-address 1.1.1.1
    [*PE1] commit

    # Configure PE2.

    [~PE2] evpn source-address 3.3.3.3
    [*PE2] commit

  7. Configure Layer 2 Ethernet sub-interfaces connecting the PEs to the CEs.

    # Configure PE1.

    [~PE1] interface GigabitEthernet 1/0/0
    [*PE1-Gigabitethernet1/0/0] undo shutdown
    [*PE1-Gigabitethernet1/0/0] quit
    [*PE1] interface GigabitEthernet 1/0/0.1 mode l2
    [*PE1-GigabitEthernet 1/0/0.1] encapsulation dot1q vid 10
    [*PE1-GigabitEthernet 1/0/0.1] rewrite pop single
    [*PE1-GigabitEthernet 1/0/0.1] bridge-domain 10
    [*PE1-GigabitEthernet 1/0/0.1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] interface GigabitEthernet 1/0/0
    [*PE2-Gigabitethernet1/0/0] undo shutdown
    [*PE2-Gigabitethernet1/0/0] quit
    [*PE2] interface GigabitEthernet 1/0/0.1 mode l2
    [*PE2-GigabitEthernet 1/0/0.1] encapsulation dot1q vid 10
    [*PE2-GigabitEthernet 1/0/0.1] rewrite pop single
    [*PE2-GigabitEthernet 1/0/0.1] bridge-domain 10
    [*PE2-GigabitEthernet 1/0/0.1] quit
    [*PE2] commit

  8. Configure and apply a tunnel policy to enable EVPN service recursion to the SR-MPLS BE tunnel.

    # Configure PE1.

    [~PE1] tunnel-policy srbe
    [*PE1-tunnel-policy-srbe] tunnel select-seq sr-lsp load-balance-number 1 
    [*PE1-tunnel-policy-srbe] quit
    [*PE1] evpn vpn-instance evrf1 bd-mode
    [*PE1-evpn-instance-evrf1] tnl-policy srbe
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy srbe
    [*PE2-tunnel-policy-srbe] tunnel select-seq sr-lsp load-balance-number 1 
    [*PE2-tunnel-policy-srbe] quit
    [*PE2] evpn vpn-instance evrf1 bd-mode
    [*PE2-evpn-instance-evrf1] tnl-policy srbe
    [*PE2-evpn-instance-evrf1] quit
    [*PE2] commit

  9. Establish a BGP EVPN peer relationship between the PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.3 as-number 100
    [*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
    [*PE1-bgp] l2vpn-family evpn
    [*PE1-bgp-af-evpn] peer 3.3.3.3 enable
    [*PE1-bgp-af-evpn] quit
    [*PE1-bgp] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.1 as-number 100
    [*PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
    [*PE2-bgp] l2vpn-family evpn
    [*PE2-bgp-af-evpn] peer 1.1.1.1 enable
    [*PE2-bgp-af-evpn] quit
    [*PE2-bgp] quit
    [*PE2] commit

    After completing the configurations, run the display bgp evpn peer command to check whether the BGP peer relationship has been established between the PEs. If the Established state is displayed in the command output, the BGP peer relationship has been successfully established. The following example uses the command output on PE1.

    [~PE1] display bgp evpn peer
    
     BGP local router ID : 10.1.1.1            
     Local AS number : 100                                                                    
     Total number of peers : 1                 Peers in established state : 1                           
    
      Peer                             V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv        
      3.3.3.3                          4         100       43       44     0 00:34:03 Established        1 
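
    To verify the peer state programmatically, the peer table can be parsed field by field. A sketch assuming the nine-column row format shown above (illustrative helper only):

```python
def evpn_peer_states(output: str) -> dict:
    """Extract a {peer: state} map from 'display bgp evpn peer' style
    output. Peer rows have nine whitespace-separated fields; the State
    column is the eighth."""
    states = {}
    for line in output.splitlines():
        parts = line.split()
        # Only data rows start with a dotted-quad peer address.
        if len(parts) == 9 and parts[0].count(".") == 3:
            states[parts[0]] = parts[7]
    return states

sample = (
    "  Peer      V    AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv\n"
    "  3.3.3.3   4   100       43       44     0 00:34:03 Established        1"
)
assert evpn_peer_states(sample) == {"3.3.3.3": "Established"}
```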

  10. Configure the CEs to communicate with the PEs.

    # Configure CE1.

    [~CE1] interface GigabitEthernet 1/0/0.1
    [*CE1-GigabitEthernet1/0/0.1] vlan-type dot1q 10
    [*CE1-GigabitEthernet1/0/0.1] ip address 172.16.1.1 24
    [*CE1-GigabitEthernet1/0/0.1] quit
    [*CE1] commit

    # Configure CE2.

    [~CE2] interface GigabitEthernet 1/0/0.1
    [*CE2-GigabitEthernet1/0/0.1] vlan-type dot1q 10
    [*CE2-GigabitEthernet1/0/0.1] ip address 172.16.1.2 24
    [*CE2-GigabitEthernet1/0/0.1] quit
    [*CE2] commit

  11. Verify the configuration.

    Run the display bgp evpn all routing-table command on each PE. The command output shows EVPN routes sent from the remote PE. The following example uses the command output on PE1.

    [~PE1] display bgp evpn all routing-table
    
     Local AS number : 100                                                  
    
     BGP Local router ID is 10.1.1.1                                        
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,                                                    
                   h - history,  i - internal, s - suppressed, S - Stale    
                   Origin : i - IGP, e - EGP, ? - incomplete                
    
    
     EVPN address family:                                                   
     Number of Mac Routes: 2                                                
     Route Distinguisher: 100:1                                             
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop   
     *>    0:48:00e0-fc21-0302:0:0.0.0.0                          0.0.0.0   
     Route Distinguisher: 200:1                                             
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop   
     *>i   0:48:00e0-fc61-0300:0:0.0.0.0                          3.3.3.3   
    
    
     EVPN-Instance evrf1:                                                   
     Number of Mac Routes: 2                                                
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop  
     *>    0:48:00e0-fc21-0302:0:0.0.0.0                          0.0.0.0   
     *>i   0:48:00e0-fc61-0300:0:0.0.0.0                          3.3.3.3   
    
     EVPN address family:                                                   
     Number of Inclusive Multicast Routes: 2                                
     Route Distinguisher: 100:1                                             
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop   
     *>    0:32:1.1.1.1                                           127.0.0.1 
     Route Distinguisher: 200:1                                             
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop   
     *>i   0:32:3.3.3.3                                           3.3.3.3   
    
    
     EVPN-Instance evrf1:                                                   
     Number of Inclusive Multicast Routes: 2                                
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop   
     *>    0:32:1.1.1.1                                           127.0.0.1 
     *>i   0:32:3.3.3.3                                           3.3.3.3

    Run the display bgp evpn all routing-table mac-route 0:48:00e0-fc61-0300:0:0.0.0.0 command on PE1 to check details about the specified MAC route.

    [~PE1] display bgp evpn all routing-table mac-route 0:48:00e0-fc61-0300:0:0.0.0.0  
    
     BGP local router ID : 10.1.1.1                                         
     Local AS number : 100                                                  
     Total routes of Route Distinguisher(200:1): 1                          
     BGP routing table entry information of 0:48:00e0-fc61-0300:0:0.0.0.0:  
     Label information (Received/Applied): 48090/NULL                       
     From: 3.3.3.3 (10.2.1.2)                                               
     Route Duration: 0d00h03m20s                                            
     Relay IP Nexthop: 10.1.1.2                                             
     Relay IP Out-Interface: GigabitEthernet2/0/0                                  
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                              
     Original nexthop: 3.3.3.3                                              
     Qos information : 0x0                                                  
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>                                              
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20   
     Route Type: 2 (MAC Advertisement Route)                                
     Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc61-0300/48, IP Address/Len: 0.0.0.0/0, ESI:0000.0000.0000.0000.0000   
     Not advertised to any peer yet                                         
    
    
    
     EVPN-Instance evrf1:                                                   
     Number of Mac Routes: 1                                                
     BGP routing table entry information of 0:48:00e0-fc61-0300:0:0.0.0.0:  
     Route Distinguisher: 200:1                                             
     Remote-Cross route                                                     
     Label information (Received/Applied): 48090/NULL                       
     From: 3.3.3.3 (10.2.1.2)                                               
     Route Duration: 0d00h03m21s                                            
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                              
     Original nexthop: 3.3.3.3                                              
     Qos information : 0x0                                                  
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>                                              
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20        
     Route Type: 2 (MAC Advertisement Route)                                
     Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc61-0300/48, IP Address/Len: 0.0.0.0/0, ESI:0000.0000.0000.0000.0000      
     Not advertised to any peer yet
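
    The route key passed to the command above follows the EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr pattern printed in the routing-table headers. A small helper to compose such a key (illustrative only, not a device command):

```python
def mac_route_key(mac: str, eth_tag: int = 0,
                  ip: str = "0.0.0.0", ip_len: int = 0) -> str:
    """Build an EVPN MAC route key in the
    EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr form used by
    'display bgp evpn all routing-table mac-route'."""
    return f"{eth_tag}:48:{mac}:{ip_len}:{ip}"  # MAC length is always 48 bits

# Key of the MAC route advertised by PE2 in this example.
assert mac_route_key("00e0-fc61-0300") == "0:48:00e0-fc61-0300:0:0.0.0.0"
```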

    Run the display bgp evpn all routing-table inclusive-route 0:32:3.3.3.3 command on PE1 to check details about the specified inclusive multicast route.

    [~PE1] display bgp evpn all routing-table inclusive-route 0:32:3.3.3.3
    
     BGP local router ID : 10.1.1.1       
     Local AS number : 100                
     Total routes of Route Distinguisher(200:1): 1                                       
     BGP routing table entry information of 0:32:3.3.3.3:                                
     Label information (Received/Applied): 48123/NULL                                    
     From: 3.3.3.3 (3.3.3.3)    
     Route Duration: 0d01h33m44s
     Relay IP Nexthop: 10.1.1.2           
     Relay IP Out-Interface: GigabitEthernet2/0/0
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                                           
     Original nexthop: 3.3.3.3
     Qos information : 0x0
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20       
     PMSI: Flags 0, Ingress Replication, Label 0:0:0(48123), Tunnel Identifier:3.3.3.3   
     Route Type: 3 (Inclusive Multicast Route)                                           
     Ethernet Tag ID: 0, Originator IP:3.3.3.3/32                                        
     Not advertised to any peer yet 
    
    
    
     EVPN-Instance evrf1:       
     Number of Inclusive Multicast Routes: 1                                             
     BGP routing table entry information of 0:32:3.3.3.3:                                
     Route Distinguisher: 200:1 
     Remote-Cross route         
     Label information (Received/Applied): 48123/NULL                                    
     From: 3.3.3.3 (3.3.3.3)    
     Route Duration: 0d01h33m44s
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                                           
     Original nexthop: 3.3.3.3  
     Qos information : 0x0      
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20
     PMSI: Flags 0, Ingress Replication, Label 0:0:0(48123), Tunnel Identifier:3.3.3.3   
     Route Type: 3 (Inclusive Multicast Route)                                           
     Ethernet Tag ID: 0, Originator IP:3.3.3.3/32                                        
     Not advertised to any peer yet

    Run the ping command on the CEs. The command output shows that the CEs connected to the same EVPN instance can ping each other. For example:

    [~CE1] ping 172.16.1.2                                     
      PING 172.16.1.2: 56  data bytes, press CTRL_C to break                             
        Reply from 172.16.1.2: bytes=56 Sequence=1 ttl=255 time=7 ms                     
        Reply from 172.16.1.2: bytes=56 Sequence=2 ttl=255 time=10 ms                    
        Reply from 172.16.1.2: bytes=56 Sequence=3 ttl=255 time=6 ms                     
        Reply from 172.16.1.2: bytes=56 Sequence=4 ttl=255 time=2 ms                     
        Reply from 172.16.1.2: bytes=56 Sequence=5 ttl=255 time=5 ms                     
    
      --- 172.16.1.2 ping statistics ---  
        5 packet(s) transmitted 
        5 packet(s) received    
        0.00% packet loss       
        round-trip min/avg/max = 2/6/10 ms

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 100:1
     tnl-policy srbe
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.1
    #
    mpls
    #
    bridge-domain 10
     evpn binding vpn-instance evrf1
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.1111.00
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153700   
    #
    bgp 100
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.3 enable
     #
     l2vpn-family evpn
      policy vpn-target
      peer 3.3.3.3 enable
    #
    tunnel-policy srbe
     tunnel select-seq sr-lsp load-balance-number 1
    #
    evpn source-address 1.1.1.1
    #
    return
  • P configuration file

    #
    sysname P
    #
    mpls lsr-id 2.2.2.2
    #
    mpls
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.2222.00
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153710 
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 200:1
     tnl-policy srbe
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    mpls lsr-id 3.3.3.3
    #
    mpls
    #
    bridge-domain 10
     evpn binding vpn-instance evrf1
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.3333.00
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #               
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153720   
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
     #
     l2vpn-family evpn
      policy vpn-target
      peer 1.1.1.1 enable
    #
    tunnel-policy srbe
     tunnel select-seq sr-lsp load-balance-number 1
    #
    evpn source-address 3.3.3.3
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     ip address 172.16.1.1 255.255.255.0
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     ip address 172.16.1.2 255.255.255.0
    #
    return

Example for Configuring EVPN VPLS over SR-MPLS BE (Common EVPN Instance)

This section provides an example for configuring an SR-MPLS BE tunnel to carry EVPN VPLS services.

Networking Requirements

To allow different sites to communicate over the backbone network shown in Figure 1-2658, configure EVPN to achieve Layer 2 service transmission. If the sites belong to the same subnet, create an EVPN instance on each PE to store EVPN routes and implement Layer 2 forwarding based on MAC address matching. In this example, an SR-MPLS BE tunnel needs to be used to transmit services between the PEs.

Figure 1-2658 EVPN VPLS over SR-MPLS BE networking

Interface 1, interface 2, and sub-interface 1.1 in this example represent GE 1/0/0, GE 2/0/0, and GE 1/0/0.1, respectively.


Precautions

During the configuration process, note the following:

  • You are advised to use the local loopback address of each PE as the source address of that PE.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IP addresses for interfaces.

  2. Configure an IGP to enable PE1, PE2, and the P to communicate with each other.

  3. Configure an SR-MPLS BE tunnel on the backbone network.

  4. Configure an EVPN instance on each PE.

  5. Configure an EVPN source address on each PE.

  6. Configure Layer 2 Ethernet sub-interfaces connecting the PEs to CEs.

  7. Configure and apply a tunnel policy to enable EVPN service recursion to the SR-MPLS BE tunnel.

  8. Establish a BGP EVPN peer relationship between the PEs.

  9. Configure the CEs to communicate with the PEs.

Data Preparation

To complete the configuration, you need the following data:

  • EVPN instance name: evrf1

  • RDs (100:1 and 200:1) and RT (1:1) of the EVPN instance evrf1 on PE1 and PE2

Procedure

  1. Configure addresses for interfaces connecting the PEs and the P according to Figure 1-2658.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.1 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure the P.

    <HUAWEI> system-view
    [~HUAWEI] sysname P
    [*HUAWEI] commit
    [~P] interface loopback 1
    [*P-LoopBack1] ip address 2.2.2.2 32
    [*P-LoopBack1] quit
    [*P] interface gigabitethernet1/0/0
    [*P-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] ip address 10.2.1.1 24
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.3 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  2. Configure an IGP to enable PE1, PE2, and the P to communicate with each other. IS-IS is used as an example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] network-entity 00.1111.1111.1111.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface GigabitEthernet 2/0/0
    [*PE1-GigabitEthernet2/0/0] isis enable 1
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure the P.

    [~P] isis 1
    [*P-isis-1] is-level level-2
    [*P-isis-1] network-entity 00.1111.1111.2222.00
    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis enable 1
    [*P-LoopBack1] quit
    [*P] interface GigabitEthernet 1/0/0
    [*P-GigabitEthernet1/0/0] isis enable 1
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface GigabitEthernet 2/0/0
    [*P-GigabitEthernet2/0/0] isis enable 1
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] network-entity 00.1111.1111.3333.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface GigabitEthernet 2/0/0
    [*PE2-GigabitEthernet2/0/0] isis enable 1
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    After completing the configurations, run the display isis peer command to check whether IS-IS neighbor relationships have been established between PE1 and the P and between PE2 and the P. If Up is displayed in the State field of the command output, the neighbor relationships have been successfully established. You can then run the display ip routing-table command to check that the PEs have learned the routes to each other's loopback 1 interface.

    The following example uses the command output on PE1.

    [~PE1] display isis peer
                              Peer information for ISIS(1)
    
      System Id     Interface          Circuit Id        State HoldTime Type     PRI
    --------------------------------------------------------------------------------
    1111.1111.2222  GE2/0/0            1111.1111.2222.01  Up   8s       L2       64 
    
    Total Peer(s): 1
    [~PE1] display ip routing-table
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : _public_
             Destinations : 11       Routes : 11        
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface
    
            1.1.1.1/32  Direct  0    0             D   127.0.0.1       LoopBack1
            2.2.2.2/32  ISIS-L2 15   10            D   10.1.1.2        GigabitEthernet2/0/0
            3.3.3.3/32  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
           10.1.1.0/24  Direct  0    0             D   10.1.1.1        GigabitEthernet2/0/0
           10.1.1.1/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0
         10.1.1.255/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0
           10.2.1.0/24  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0
          127.0.0.1/32  Direct  0    0             D   127.0.0.1       InLoopBack0
    127.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0

  3. Configure basic MPLS functions on the backbone network.

    Because MPLS is automatically enabled on interfaces on which IS-IS has been enabled, no separate MPLS configuration is required on these interfaces.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.1
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure the P.

    [~P] mpls lsr-id 2.2.2.2
    [*P] mpls
    [*P-mpls] commit
    [~P-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.3
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

  4. Configure an SR-MPLS BE tunnel on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid absolute 153700
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure the P.

    [~P] segment-routing
    [*P-segment-routing] quit
    [*P] isis 1
    [*P-isis-1] cost-style wide
    [*P-isis-1] segment-routing mpls
    [*P-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis prefix-sid absolute 153710
    [*P-LoopBack1] quit
    [*P] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid absolute 153720
    [*PE2-LoopBack1] quit
    [*PE2] commit

    # After completing the configurations, run the display tunnel-info all command on each PE. The command output shows that SR LSPs have been established. The following example uses the command output on PE1.

    [~PE1] display tunnel-info all
    Tunnel ID            Type                Destination                             Status              
    ---------------------------------------------------------------------------------------- 
    0x000000002900000004 srbe-lsp            2.2.2.2                                 UP             
    0x000000002900000005 srbe-lsp            3.3.3.3                                 UP 

    # Run the ping command on PE1 to check the SR LSP connectivity. For example:

    [~PE1] ping lsp segment-routing ip 3.3.3.3 32 version draft2                                                                         
      LSP PING FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.3/32 : 100  data bytes, press CTRL_C to break      
        Reply from 3.3.3.3: bytes=100 Sequence=1 time=6 ms                               
        Reply from 3.3.3.3: bytes=100 Sequence=2 time=3 ms                                 
        Reply from 3.3.3.3: bytes=100 Sequence=3 time=3 ms                                      
        Reply from 3.3.3.3: bytes=100 Sequence=4 time=3 ms                                        
        Reply from 3.3.3.3: bytes=100 Sequence=5 time=3 ms                                             
    
      --- FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.3/32 ping statistics ---                 
        5 packet(s) transmitted                                                              
        5 packet(s) received   
        0.00% packet loss 
        round-trip min/avg/max = 3/3/6 ms 
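In step 4, each loopback interface is assigned an absolute prefix SID that must fall within the SRGB (153616 to 153800 in this example). The absolute SID encodes an index (its offset from the SRGB base), and a router derives the forwarding label toward a destination from the advertising node's SRGB base plus that index. A minimal Python sketch of this arithmetic (the helper names are illustrative, not device commands):

```python
# Sketch: mapping absolute prefix SIDs to MPLS labels via the SRGB.
# Values mirror this example (SRGB 153616-153800, SIDs 153700/153710/153720).

SRGB_BASE, SRGB_END = 153616, 153800

def sid_index(absolute_sid, base=SRGB_BASE, end=SRGB_END):
    """Return the index an absolute prefix SID encodes.

    An absolute SID is valid only if it lies inside the node's SRGB;
    the index is the offset from the SRGB base.
    """
    if not base <= absolute_sid <= end:
        raise ValueError(f"SID {absolute_sid} outside SRGB [{base}, {end}]")
    return absolute_sid - base

def outgoing_label(index, neighbor_srgb_base):
    """Label toward a neighbor = that neighbor's SRGB base + SID index."""
    return neighbor_srgb_base + index

# PE2's loopback SID 153720 encodes index 104. Because every node in
# this example uses the same SRGB base, the label is unchanged hop by hop.
idx = sid_index(153720)
print(idx, outgoing_label(idx, SRGB_BASE))  # 104 153720
```

Because all three nodes here share one SRGB, the absolute SID and the derived label coincide; with differing SRGBs, only the index would stay constant across nodes.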

  5. Configure an EVPN instance on each PE.

    # Configure PE1.

    [~PE1] evpn vpn-instance evrf1
    [*PE1-evpn-instance-evrf1] route-distinguisher 100:1
    [*PE1-evpn-instance-evrf1] vpn-target 1:1
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] evpn vpn-instance evrf1
    [*PE2-evpn-instance-evrf1] route-distinguisher 200:1
    [*PE2-evpn-instance-evrf1] vpn-target 1:1
    [*PE2-evpn-instance-evrf1] quit
    [*PE2] commit

  6. Configure an EVPN source address on each PE.

    # Configure PE1.

    [~PE1] evpn source-address 1.1.1.1
    [*PE1] commit

    # Configure PE2.

    [~PE2] evpn source-address 3.3.3.3
    [*PE2] commit

  7. Configure Layer 2 Ethernet sub-interfaces connecting the PEs to the CEs.

    # Configure PE1.

    [~PE1] interface GigabitEthernet 1/0/0
    [*PE1-GigabitEthernet1/0/0] undo shutdown
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface GigabitEthernet 1/0/0.1
    [*PE1-GigabitEthernet1/0/0.1] vlan-type dot1q 10
    [*PE1-GigabitEthernet1/0/0.1] evpn binding vpn-instance evrf1
    [*PE1-GigabitEthernet1/0/0.1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] interface GigabitEthernet 1/0/0
    [*PE2-GigabitEthernet1/0/0] undo shutdown
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface GigabitEthernet 1/0/0.1
    [*PE2-GigabitEthernet1/0/0.1] vlan-type dot1q 10
    [*PE2-GigabitEthernet1/0/0.1] evpn binding vpn-instance evrf1
    [*PE2-GigabitEthernet1/0/0.1] quit
    [*PE2] commit

  8. Configure and apply a tunnel policy to enable EVPN service recursion to the SR-MPLS BE tunnel.

    # Configure PE1.

    [~PE1] tunnel-policy srbe
    [*PE1-tunnel-policy-srbe] tunnel select-seq sr-lsp load-balance-number 1 
    [*PE1-tunnel-policy-srbe] quit
    [*PE1] evpn vpn-instance evrf1
    [*PE1-evpn-instance-evrf1] tnl-policy srbe
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy srbe
    [*PE2-tunnel-policy-srbe] tunnel select-seq sr-lsp load-balance-number 1 
    [*PE2-tunnel-policy-srbe] quit
    [*PE2] evpn vpn-instance evrf1
    [*PE2-evpn-instance-evrf1] tnl-policy srbe
    [*PE2-evpn-instance-evrf1] quit
    [*PE2] commit
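The tunnel policy above uses select-seq ordering: tunnels are chosen by type in the configured priority order, and at most load-balance-number tunnels of the first matching type carry the traffic. A small Python sketch of that selection logic (the data model is hypothetical):

```python
# Sketch of select-seq tunnel selection: walk the configured type
# priority list and return up to load_balance_number UP tunnels of the
# first type that has any candidates. Data model is hypothetical.

def select_tunnels(tunnels, type_order, load_balance_number):
    """tunnels: list of (tunnel_type, state) pairs known to the node."""
    for ttype in type_order:
        candidates = [t for t in tunnels if t[0] == ttype and t[1] == "UP"]
        if candidates:
            return candidates[:load_balance_number]
    return []

# "tunnel select-seq sr-lsp load-balance-number 1": prefer SR LSPs and
# use a single tunnel even if several are available.
tunnels = [("sr-lsp", "UP"), ("ldp-lsp", "UP"), ("sr-lsp", "UP")]
print(select_tunnels(tunnels, ["sr-lsp"], 1))  # [('sr-lsp', 'UP')]
```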

  9. Establish a BGP EVPN peer relationship between the PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.3 as-number 100
    [*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
    [*PE1-bgp] l2vpn-family evpn
    [*PE1-bgp-af-evpn] peer 3.3.3.3 enable
    [*PE1-bgp-af-evpn] quit
    [*PE1-bgp] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.1 as-number 100
    [*PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
    [*PE2-bgp] l2vpn-family evpn
    [*PE2-bgp-af-evpn] peer 1.1.1.1 enable
    [*PE2-bgp-af-evpn] quit
    [*PE2-bgp] quit
    [*PE2] commit

    After completing the configurations, run the display bgp evpn peer command to check whether the BGP peer relationship has been established between the PEs. If the Established state is displayed in the command output, the BGP peer relationship has been successfully established. The following example uses the command output on PE1.

    [~PE1] display bgp evpn peer
    
     BGP local router ID : 10.1.1.1                                                           
     Local AS number : 100                                                                    
     Total number of peers : 1                 Peers in established state : 1                           
    
      Peer                             V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv        
      3.3.3.3                          4         100       43       44     0 00:34:03 Established        1 

  10. Configure the CEs to communicate with the PEs.

    # Configure CE1.

    [~CE1] interface GigabitEthernet 1/0/0.1
    [*CE1-GigabitEthernet1/0/0.1] vlan-type dot1q 10
    [*CE1-GigabitEthernet1/0/0.1] ip address 172.16.1.1 24
    [*CE1-GigabitEthernet1/0/0.1] quit
    [*CE1] commit

    # Configure CE2.

    [~CE2] interface GigabitEthernet 1/0/0.1
    [*CE2-GigabitEthernet1/0/0.1] vlan-type dot1q 10
    [*CE2-GigabitEthernet1/0/0.1] ip address 172.16.1.2 24
    [*CE2-GigabitEthernet1/0/0.1] quit
    [*CE2] commit

  11. Verify the configuration.

    Run the display bgp evpn all routing-table command on each PE. The command output shows EVPN routes sent from the remote PE. The following example uses the command output on PE1.

    [~PE1] display bgp evpn all routing-table
    
     Local AS number : 100     
    
     BGP Local router ID is 10.1.1.1       
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,               
                   h - history,  i - internal, s - suppressed, S - Stale  
                   Origin : i - IGP, e - EGP, ? - incomplete        
    
    
     EVPN address family:      
     Number of Mac Routes: 2   
     Route Distinguisher: 100:1            
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop 
     *>    0:48:00e0-fc21-0302:0:0.0.0.0                          0.0.0.0 
     Route Distinguisher: 200:1            
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop 
     *>i   0:48:00e0-fc61-0300:0:0.0.0.0                          3.3.3.3 
    
    
     EVPN-Instance evrf1:      
     Number of Mac Routes: 2   
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop   
     *>    0:48:00e0-fc21-0302:0:0.0.0.0                          0.0.0.0 
     *>i   0:48:00e0-fc61-0300:0:0.0.0.0                          3.3.3.3 
    
     EVPN address family:      
     Number of Inclusive Multicast Routes: 2                        
     Route Distinguisher: 100:1            
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop 
     *>    0:32:1.1.1.1                                           127.0.0.1                        
     Route Distinguisher: 200:1            
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop 
     *>i   0:32:3.3.3.3                                           3.3.3.3 
    
    
     EVPN-Instance evrf1:      
     Number of Inclusive Multicast Routes: 2                        
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop 
     *>    0:32:1.1.1.1                                           127.0.0.1                        
     *>i   0:32:3.3.3.3                                           3.3.3.3 

    Run the display bgp evpn all routing-table mac-route 0:48:00e0-fc61-0300:0:0.0.0.0 command on PE1 to check details about the specified MAC route.

    [~PE1] display bgp evpn all routing-table mac-route 0:48:00e0-fc61-0300:0:0.0.0.0 
    
     BGP local router ID : 10.1.1.1        
     Local AS number : 100     
     Total routes of Route Distinguisher(200:1): 1                  
     BGP routing table entry information of 0:48:00e0-fc61-0300:0:0.0.0.0:                         
     Label information (Received/Applied): 48123/NULL               
     From: 3.3.3.3 (10.2.1.2)  
     Route Duration: 0d00h01m32s           
     Relay IP Nexthop: 10.1.1.2            
     Relay IP Out-Interface: GigabitEthernet2/0/0 
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                      
     Original nexthop: 3.3.3.3 
     Qos information : 0x0     
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0> 
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20    
     Route Type: 2 (MAC Advertisement Route)                        
     Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc61-0300/48, IP Address/Len: 0.0.0.0/0, ESI:0000.0000.0000.0000.0000    
     Not advertised to any peer yet        
    
    
    
     EVPN-Instance evrf1:  
     Number of Mac Routes: 1   
     BGP routing table entry information of 0:48:00e0-fc61-0300:0:0.0.0.0:                         
     Route Distinguisher: 200:1            
     Remote-Cross route        
     Label information (Received/Applied): 48123/NULL               
     From: 3.3.3.3 (10.2.1.2)  
     Route Duration: 0d00h01m31s           
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                      
     Original nexthop: 3.3.3.3 
     Qos information : 0x0     
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20    
     Route Type: 2 (MAC Advertisement Route)                        
     Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc61-0300/48, IP Address/Len: 0.0.0.0/0, ESI:0000.0000.0000.0000.0000      
     Not advertised to any peer yet 

    Run the display bgp evpn all routing-table inclusive-route 0:32:3.3.3.3 command on PE1 to check details about the specified inclusive multicast route.

    [~PE1] display bgp evpn all routing-table inclusive-route 0:32:3.3.3.3
    
     BGP local router ID : 10.1.1.1        
     Local AS number : 100     
     Total routes of Route Distinguisher(200:1): 1                  
     BGP routing table entry information of 0:32:3.3.3.3:           
     Label information (Received/Applied): 48124/NULL               
     From: 3.3.3.3 (10.2.1.2)  
     Route Duration: 0d00h02m21s           
     Relay IP Nexthop: 10.1.1.2            
     Relay IP Out-Interface: GigabitEthernet2/0/0 
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                      
     Original nexthop: 3.3.3.3 
     Qos information : 0x0     
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20 
     PMSI: Flags 0, Ingress Replication, Label 0:0:0(48124), Tunnel Identifier:3.3.3.3 
     Route Type: 3 (Inclusive Multicast Route)                      
     Ethernet Tag ID: 0, Originator IP:3.3.3.3/32                   
     Not advertised to any peer yet        
    
    
    
     EVPN-Instance evrf1:      
     Number of Inclusive Multicast Routes: 1                        
     BGP routing table entry information of 0:32:3.3.3.3:           
     Route Distinguisher: 200:1            
     Remote-Cross route        
     Label information (Received/Applied): 48124/NULL               
     From: 3.3.3.3 (10.2.1.2)  
     Route Duration: 0d00h02m21s           
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                      
     Original nexthop: 3.3.3.3 
     Qos information : 0x0     
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20   
     PMSI: Flags 0, Ingress Replication, Label 0:0:0(48124), Tunnel Identifier:3.3.3.3
     Route Type: 3 (Inclusive Multicast Route)                      
     Ethernet Tag ID: 0, Originator IP:3.3.3.3/32                   
     Not advertised to any peer yet

    Run the ping command on the CEs. The command output shows that CEs attached to the same EVPN instance can ping each other. For example:

    [~CE1] ping 172.16.1.2                                     
      PING 172.16.1.2: 56  data bytes, press CTRL_C to break                                   
        Reply from 172.16.1.2: bytes=56 Sequence=1 ttl=255 time=7 ms 
        Reply from 172.16.1.2: bytes=56 Sequence=2 ttl=255 time=10 ms 
        Reply from 172.16.1.2: bytes=56 Sequence=3 ttl=255 time=6 ms                           
        Reply from 172.16.1.2: bytes=56 Sequence=4 ttl=255 time=2 ms                           
        Reply from 172.16.1.2: bytes=56 Sequence=5 ttl=255 time=5 ms                           
    
      --- 172.16.1.2 ping statistics ---              
        5 packet(s) transmitted 
        5 packet(s) received    
        0.00% packet loss       
        round-trip min/avg/max = 2/6/10 ms

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    evpn vpn-instance evrf1
     route-distinguisher 100:1
     tnl-policy srbe
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.1
    #
    mpls
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.1111.00
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     evpn binding vpn-instance evrf1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153700   
    #
    bgp 100
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.3 enable
     #
     l2vpn-family evpn
      policy vpn-target
      peer 3.3.3.3 enable
    #
    tunnel-policy srbe
     tunnel select-seq sr-lsp load-balance-number 1
    #
    evpn source-address 1.1.1.1
    #
    return
  • P configuration file

    #
    sysname P
    #
    mpls lsr-id 2.2.2.2
    #
    mpls
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.2222.00
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153710 
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    evpn vpn-instance evrf1
     route-distinguisher 200:1
     tnl-policy srbe
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    mpls lsr-id 3.3.3.3
    #
    mpls
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.3333.00
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     evpn binding vpn-instance evrf1
    #               
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153720   
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
     #
     l2vpn-family evpn
      policy vpn-target
      peer 1.1.1.1 enable
    #
    tunnel-policy srbe
     tunnel select-seq sr-lsp load-balance-number 1
    #
    evpn source-address 3.3.3.3
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     ip address 172.16.1.1 255.255.255.0
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     ip address 172.16.1.2 255.255.255.0
    #
    return

Example for Configuring Non-Labeled Public BGP Routes to Recurse to an SR-MPLS BE Tunnel

Non-labeled public BGP routes are configured to recurse to an SR-MPLS BE tunnel, so that public network BGP traffic can be transmitted along the SR-MPLS BE tunnel.

Networking Requirements

If Internet users access the Internet through a carrier network that performs hop-by-hop IP forwarding, the core devices on the forwarding path must learn a large number of Internet routes. This imposes a heavy load on the core devices and degrades their performance. To address these problems, a user access device can be configured to recurse non-labeled public network BGP or static routes to a Segment Routing (SR) tunnel, so that user packets travel through the SR tunnel to access the Internet. This recursion relieves the core carrier devices of the routing burden and prevents the performance and service transmission problems that the burden would otherwise cause.

In Figure 1-2659, non-labeled public BGP routes are configured to recurse to an SR-MPLS BE tunnel.

Figure 1-2659 Non-labeled public BGP route recursion to an SR-MPLS BE tunnel

Interfaces 1 and 2 in this example represent GE 1/0/0 and GE 2/0/0, respectively.


Precautions

During the configuration process, note the following:

When you establish a BGP peer relationship in which the specified peer IP address is a loopback interface address or a sub-interface address, run the peer connect-interface command on both ends to ensure that the TCP connection between them is set up correctly.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the backbone network for the PEs to communicate.

  2. Enable MPLS on the backbone network, configure SR, and establish SR LSPs.

  3. Establish an IBGP peer relationship between the PEs for them to exchange routing information.

  4. Enable PEs to recurse non-labeled public BGP routes to the SR-MPLS BE tunnel.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of the PEs and P

  • SRGB ranges on the PEs and P

Procedure

  1. Configure IP addresses for interfaces.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 172.16.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure the P.

    <HUAWEI> system-view
    [~HUAWEI] sysname P
    [*HUAWEI] commit
    [~P] interface loopback 1
    [*P-LoopBack1] ip address 2.2.2.9 32
    [*P-LoopBack1] quit
    [*P] interface gigabitethernet1/0/0
    [*P-GigabitEthernet1/0/0] ip address 172.16.1.2 24
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] ip address 172.17.1.2 24
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 172.17.1.1 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

  2. Configure an IGP on the backbone network for the PEs to communicate. IS-IS is used as an example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure the P.

    [~P] isis 1
    [*P-isis-1] is-level level-1
    [*P-isis-1] network-entity 10.0000.0000.0002.00
    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis enable 1
    [*P-LoopBack1] quit
    [*P] interface gigabitethernet1/0/0
    [*P-GigabitEthernet1/0/0] isis enable 1
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] isis enable 1
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

  3. Configure basic MPLS functions on the backbone network.

    Because MPLS is automatically enabled on interfaces on which IS-IS has been enabled, no separate MPLS configuration is required on these interfaces.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure the P.

    [~P] mpls lsr-id 2.2.2.9
    [*P] mpls
    [*P-mpls] commit
    [~P-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

  4. Configure Segment Routing on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] tunnel-prefer segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 160000 161000

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid index 10
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure the P.

    [~P] segment-routing
    [*P-segment-routing] tunnel-prefer segment-routing
    [*P-segment-routing] quit
    [*P] isis 1
    [*P-isis-1] cost-style wide
    [*P-isis-1] segment-routing mpls
    [*P-isis-1] segment-routing global-block 160000 161000

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis prefix-sid index 20
    [*P-LoopBack1] quit
    [*P] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] tunnel-prefer segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 160000 161000

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid index 30
    [*PE2-LoopBack1] quit
    [*PE2] commit
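Unlike the absolute form used in the previous example, isis prefix-sid index advertises only an offset, and each router computes the label locally as its own SRGB base plus the index. With the SRGB 160000 to 161000 used here, the indexes 10, 20, and 30 yield the following labels (a Python sketch with illustrative names):

```python
# Sketch: with "isis prefix-sid index", each node advertises an index
# and a router derives the label as its local SRGB base + index.
# Values mirror this example (SRGB 160000-161000, indexes 10/20/30).

SRGB_BASE = 160000

def label_for(index, srgb_base=SRGB_BASE):
    return srgb_base + index

node_sid_index = {"PE1": 10, "P": 20, "PE2": 30}
labels = {node: label_for(idx) for node, idx in node_sid_index.items()}
print(labels)  # {'PE1': 160010, 'P': 160020, 'PE2': 160030}
```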

  5. Establish an IBGP peer relationship between the PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] commit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] commit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp peer command on the PEs to check whether the BGP peer relationship has been established. If the Established state is displayed in the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        2        6     0     00:00:12   Established   0

  6. Enable PEs to recurse non-labeled public BGP routes to the SR-MPLS BE tunnel.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-lsp load-balance-number 1
    [*PE1-tunnel-policy-p1] quit
    [*PE1] tunnel-selector s1 permit node 10
    [*PE1-tunnel-selector] apply tunnel-policy p1
    [*PE1-tunnel-selector] quit
    [*PE1] bgp 100
    [*PE1-bgp] unicast-route recursive-lookup tunnel tunnel-selector s1
    [*PE1-bgp] commit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq sr-lsp load-balance-number 1
    [*PE2-tunnel-policy-p1] quit
    [*PE2] tunnel-selector s1 permit node 10
    [*PE2-tunnel-selector] apply tunnel-policy p1
    [*PE2-tunnel-selector] quit
    [*PE2] bgp 100
    [*PE2-bgp] unicast-route recursive-lookup tunnel tunnel-selector s1
    [*PE2-bgp] commit
    [~PE2-bgp] quit
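Conceptually, the unicast-route recursive-lookup tunnel command changes how a non-labeled BGP route resolves its next hop: if the tunnel selector permits the next hop and a tunnel to it exists under the applied tunnel policy, the route recurses to that tunnel; otherwise ordinary IP recursion applies. A Python sketch of this decision (the data structures are hypothetical):

```python
# Sketch of non-labeled BGP route recursion to a tunnel: a route whose
# next hop the tunnel selector permits recurses to an SR LSP chosen by
# the applied tunnel policy; otherwise ordinary IP recursion is used.
# Data structures are hypothetical.

def resolve_route(next_hop, tunnel_selector_permits, sr_lsp_table):
    if tunnel_selector_permits(next_hop) and next_hop in sr_lsp_table:
        return ("tunnel", sr_lsp_table[next_hop])
    return ("ip", next_hop)

# Selector s1 has a permit node with no if-match clause, so it permits
# all next hops; only next hops with an SR LSP actually recurse.
permits_all = lambda nh: True
sr_lsps = {"3.3.3.9": "srbe-lsp to 3.3.3.9"}
print(resolve_route("3.3.3.9", permits_all, sr_lsps))  # tunnel recursion
print(resolve_route("4.4.4.9", permits_all, sr_lsps))  # plain IP recursion
```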

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    tunnel-selector s1 permit node 10
     apply tunnel-policy p1
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
     tunnel-prefer segment-routing
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     segment-routing mpls
     segment-routing global-block 160000 161000
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.16.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 10
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      unicast-route recursive-lookup tunnel tunnel-selector s1
      peer 3.3.3.9 enable
    #
    tunnel-policy p1
     tunnel select-seq sr-lsp load-balance-number 1
    #
    return
  • P configuration file

    #
    sysname P
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
     tunnel-prefer segment-routing
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     segment-routing mpls
     segment-routing global-block 160000 161000
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.16.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.17.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 20
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    tunnel-selector s1 permit node 10
     apply tunnel-policy p1
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    segment-routing 
     tunnel-prefer segment-routing
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     segment-routing mpls
     segment-routing global-block 160000 161000
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.17.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 30
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      unicast-route recursive-lookup tunnel tunnel-selector s1
      peer 1.1.1.9 enable
    #
    tunnel-policy p1
     tunnel select-seq sr-lsp load-balance-number 1
    #
    return

Example for Configuring IS-IS SR to Communicate with LDP

This section provides an example for configuring IS-IS SR to communicate with LDP so that devices in the SR domain can communicate with devices in the LDP domain using MPLS forwarding techniques.

Networking Requirements

As shown in Figure 1-2660, an SR domain is created between PE1 and P, and an LDP domain lies between P and PE2. PE1 and PE2 need to communicate with each other.

Figure 1-2660 Communication between SR and LDP

Interfaces 1 and 2 in this example represent GE 1/0/0 and GE 2/0/0, respectively.


Precautions

When configuring IS-IS SR to communicate with LDP, note that a device in the SR domain must be able to map LDP prefix information to SR SIDs.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the backbone network to ensure that PEs can interwork with each other.

  2. Enable MPLS on the backbone network. Configure segment routing to establish an SR LSP from PE1 to the P. Configure LDP to establish an LDP LSP from the P to PE2.

  3. Configure the mapping server function on the P to map LDP prefix information to SR SIDs.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of the PEs and P

  • SRGB ranges of PE1 and the P

Procedure

  1. Configure IP addresses for interfaces.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 172.16.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure the P.

    <HUAWEI> system-view
    [~HUAWEI] sysname P
    [*HUAWEI] commit
    [~P] interface loopback 1
    [*P-LoopBack1] ip address 2.2.2.9 32
    [*P-LoopBack1] quit
    [*P] interface gigabitethernet1/0/0
    [*P-GigabitEthernet1/0/0] ip address 172.16.1.2 24
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] ip address 172.17.1.1 24
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 172.17.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

  2. Configure an IGP on the MPLS backbone network to implement connectivity between the PEs.

    IS-IS is used as the IGP in this example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure the P.

    [~P] isis 1
    [*P-isis-1] is-level level-1
    [*P-isis-1] cost-style wide
    [*P-isis-1] network-entity 10.0000.0000.0002.00
    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis enable 1
    [*P-LoopBack1] quit
    [*P] interface gigabitethernet1/0/0
    [*P-GigabitEthernet1/0/0] isis enable 1
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] isis enable 1
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

  3. Configure the basic MPLS functions on the MPLS backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure the P.

    [~P] mpls lsr-id 2.2.2.9
    [*P] mpls
    [*P-mpls] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] mpls
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] mpls
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

  4. Configure segment routing between PE1 and the P on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] tunnel-prefer segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 160000 161000

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid index 10
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure the P.

    [~P] segment-routing
    [*P-segment-routing] tunnel-prefer segment-routing
    [*P-segment-routing] quit
    [*P] isis 1
    [*P-isis-1] segment-routing mpls
    [*P-isis-1] segment-routing global-block 160000 161000

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis prefix-sid index 20
    [*P-LoopBack1] quit
    [*P] commit

  5. Establish an LDP LSP between PE2 and the P.

    # Configure the P.

    [~P] mpls ldp
    [*P-mpls-ldp] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] mpls ldp
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    [~PE2] mpls ldp
    [*PE2-mpls-ldp] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] mpls ldp
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

  6. Configure the mapping server function on the P, and configure the P to allow SR and LDP to communicate with each other.

    # Configure the P.

    [~P] segment-routing
    [~P-segment-routing] mapping-server prefix-sid-mapping 3.3.3.9 32 22
    [*P-segment-routing] quit
    [*P] isis 1
    [*P-isis-1] segment-routing mapping-server send
    [*P-isis-1] quit
    [*P] mpls
    [*P-mpls] lsp-trigger segment-routing-interworking best-effort host
    [*P-mpls] commit
    [~P-mpls] quit

  7. Verify the configuration.

    Run the display segment-routing prefix mpls forwarding command on an SR device to check information about the label forwarding table for segment routing.

    # In the following, the command output on the P is used.

    [~P] display segment-routing prefix mpls forwarding
                             Segment Routing Prefix MPLS Forwarding Information
                       --------------------------------------------------------------
                       Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit
    
    Prefix          Label      OutLabel   Interface       NextHop        Role   MPLSMtu   Mtu   State 
    -----------------------------------------------------------------------------------------------------
    3.3.3.9/32      160022     NULL       Mapping LDP     ---            E      ---       ---   Active 
    
    Total information(s): 1

    The command output shows that the forwarding entry for the route to 3.3.3.9/32 exists and that its outbound interface is Mapping LDP, which indicates that the P has successfully stitched the SR LSP to the LDP LSP.
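The stitching behavior can be illustrated as follows: the mapping server entry gives 3.3.3.9/32 the SR label 160000 + 22 = 160022 (matching the Label column above), and the P swaps this incoming SR label for the LDP label it learned toward PE2. A minimal Python sketch, assuming a hypothetical LDP label value of 1024:

```python
# Sketch of the stitching lookup on the P (assumed logic, not device code):
# an incoming SR label derived from a mapping-server entry is swapped for
# the LDP label of the same prefix.

SRGB_BASE = 160000

def stitch(sr_in_label, mappings, ldp_labels):
    """mappings: prefix -> SID index; ldp_labels: prefix -> LDP out label."""
    for prefix, index in mappings.items():
        if SRGB_BASE + index == sr_in_label:
            return prefix, ldp_labels[prefix]
    raise LookupError(f"no SR-to-LDP mapping for label {sr_in_label}")

# "mapping-server prefix-sid-mapping 3.3.3.9 32 22" yields SR label 160022;
# the LDP label (1024) is an arbitrary placeholder for illustration.
prefix, out_label = stitch(160022, {"3.3.3.9/32": 22}, {"3.3.3.9/32": 1024})
```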

    # Run a ping between the PEs to check connectivity. For example, PE1 pings PE2 at 3.3.3.9.

    [~PE1] ping lsp segment-routing ip 3.3.3.9 32 version draft2 remote 3.3.3.9
     LSP PING FEC: Nil FEC : 100  data bytes, press CTRL_C to break
        Reply from 3.3.3.9: bytes=100 Sequence=1 time=72 ms
        Reply from 3.3.3.9: bytes=100 Sequence=2 time=34 ms
        Reply from 3.3.3.9: bytes=100 Sequence=3 time=50 ms
        Reply from 3.3.3.9: bytes=100 Sequence=4 time=50 ms
        Reply from 3.3.3.9: bytes=100 Sequence=5 time=34 ms
      --- FEC: Nil FEC ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 34/48/72 ms  

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    mpls lsr-id 1.1.1.9
    #
    mpls
    #
    segment-routing
     tunnel-prefer segment-routing
    #
    isis 1
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     segment-routing mpls
     segment-routing global-block 160000 161000
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 172.16.1.1 255.255.255.0
     isis enable 1
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 10
    #
    return
    
  • P configuration file

    #
    sysname P
    #
    mpls lsr-id 2.2.2.9
    #
    mpls
     lsp-trigger segment-routing-interworking best-effort host
    #
    mpls ldp
    #
    segment-routing
     tunnel-prefer segment-routing
     mapping-server prefix-sid-mapping 3.3.3.9 32 22 
    #
    isis 1
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     segment-routing mpls
     segment-routing global-block 160000 161000
     segment-routing mapping-server send 
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 172.16.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 172.17.1.1 255.255.255.0
     isis enable 1
     mpls
     mpls ldp
    #
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1
     isis prefix-sid index 20
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    mpls lsr-id 3.3.3.9
    #
    mpls
    #
    mpls ldp
    #
    isis 1
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 172.17.1.2 255.255.255.0
     isis enable 1
     mpls
     mpls ldp
    #
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1
    #
    return

Example for Configuring IS-IS Anycast FRR

IS-IS anycast FRR can be configured to enhance the reliability of an SR network.

Networking Requirements

As shown in Figure 1-2661, CE1 can be reached through either PE2 or PE3. To enable PE2 and PE3 to protect each other, configure IS-IS anycast FRR. This improves network reliability.

Figure 1-2661 IS-IS anycast FRR protection networking

Interfaces 1 and 2 in this example represent GE 1/0/0 and GE 2/0/0, respectively.


Configuration Roadmap

The configuration roadmap is as follows:

  1. Enable IS-IS on the backbone network to ensure that PEs interwork with each other.

  2. Enable MPLS on the backbone network, configure SR, and establish SR LSPs.

  3. Enable TI-LFA FRR on PE1 and configure SR microloop avoidance with a delayed route switchback.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of the PEs and P

  • SRGB ranges on the PEs

Procedure

  1. Assign an IP address to each interface.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 172.18.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip address 172.16.1.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 2.2.2.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 172.16.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure PE3.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE3
    [*HUAWEI] commit
    [~PE3] interface loopback 0
    [*PE3-LoopBack0] ip address 4.4.4.9 32
    [*PE3-LoopBack0] quit
    [*PE3] interface loopback 1
    [*PE3-LoopBack1] ip address 2.2.2.9 32
    [*PE3-LoopBack1] quit
    [*PE3] interface gigabitethernet1/0/0
    [*PE3-GigabitEthernet1/0/0] ip address 172.18.1.2 24
    [*PE3-GigabitEthernet1/0/0] quit
    [*PE3] commit

  2. Configure an IGP on the backbone network to enable the PEs to communicate. IS-IS is used as an example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] isis enable 1
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0002.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure PE3.

    [~PE3] isis 1
    [*PE3-isis-1] is-level level-1
    [*PE3-isis-1] network-entity 10.0000.0000.0004.00
    [*PE3-isis-1] quit
    [*PE3] interface loopback 1
    [*PE3-LoopBack1] isis enable 1
    [*PE3-LoopBack1] quit
    [*PE3] interface gigabitethernet1/0/0
    [*PE3-GigabitEthernet1/0/0] isis enable 1
    [*PE3-GigabitEthernet1/0/0] quit
    [*PE3] commit

  3. Configure basic MPLS functions on the backbone network.

    Because MPLS is automatically enabled on interfaces where IS-IS has been enabled, you do not need to configure MPLS on these interfaces.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 2.2.2.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure PE3.

    [~PE3] mpls lsr-id 4.4.4.9
    [*PE3] mpls
    [*PE3-mpls] commit
    [~PE3-mpls] quit

  4. Configure SR, enable TI-LFA FRR, and configure microloop avoidance in traffic switchback scenarios on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] tunnel-prefer segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 160000 161000

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-isis-1] frr
    [*PE1-isis-1-frr] loop-free-alternate level-1
    [*PE1-isis-1-frr] ti-lfa level-1
    [*PE1-isis-1-frr] quit
    [*PE1-isis-1] avoid-microloop segment-routing
    [*PE1-isis-1] avoid-microloop segment-routing rib-update-delay 6000
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid index 10
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] tunnel-prefer segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 160000 161000

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid index 20
    [*PE2-LoopBack1] quit
    [*PE2] commit

    # Configure PE3.

    [~PE3] segment-routing
    [*PE3-segment-routing] tunnel-prefer segment-routing
    [*PE3-segment-routing] quit
    [*PE3] isis 1
    [*PE3-isis-1] cost-style wide
    [*PE3-isis-1] segment-routing mpls
    [*PE3-isis-1] segment-routing global-block 160000 161000

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*PE3-isis-1] quit
    [*PE3] interface loopback 1
    [*PE3-LoopBack1] isis prefix-sid index 20
    [*PE3-LoopBack1] quit
    [*PE3] commit

  5. Verify the configuration.

    Run the display segment-routing prefix mpls forwarding verbose command on PE1 to check the SR label forwarding table. The command output shows FRR backup entry information.
    [~PE1] display segment-routing prefix mpls forwarding ip-prefix 2.2.2.9 32 verbose
    
                       Segment Routing Prefix MPLS Forwarding Information
                 --------------------------------------------------------------
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State          
    -----------------------------------------------------------------------------------------------------------------
    2.2.2.9/32         160020     3          GE1/0/0           172.16.1.2       I&T   ---       1500    Active         
    Protocol : ISIS          SubProtocol : Level-1       Process ID : 1         
    Cost     : 10            Weight      : 0             UpdateTime : 2018-12-11 06:46:33.920       
    BFD State: --            Favor       : Y
    Label Stack (Top -> Bottom): { 3 }
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State          
    -----------------------------------------------------------------------------------------------------------------
    2.2.2.9/32         160020     3          GE2/0/0           172.18.1.2       I&T   ---       1500    Active          
    Protocol : ISIS          SubProtocol : Level-1       Process ID : 1         
    Cost     : 10            Weight      : 0             UpdateTime : 2018-12-11 06:47:21.478       
    BFD State: --            Favor       : Y
    Label Stack (Top -> Bottom): { 3 }
    
    Total information(s): 2
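Both entries are Active because PE2 and PE3 advertise the same anycast prefix (2.2.2.9/32) with the same SID index, so PE1 installs two equal-cost next hops and each protects the other if one fails. The following Python sketch shows how a flow might be hashed onto one of the two next hops (the hash function is an assumption for illustration; real devices use their own ECMP hash):

```python
# Sketch: with an anycast SID, PE1 holds two equal-cost next hops for
# 2.2.2.9/32 and hashes each flow onto one of them.

def ecmp_next_hop(flow_key, next_hops):
    """Pick one of the equal-cost next hops for a flow (illustrative hash)."""
    return next_hops[hash(flow_key) % len(next_hops)]

# The two Active entries on PE1 from the command output above.
next_hops = ["172.16.1.2", "172.18.1.2"]
flow = ("10.0.0.1", "2.2.2.9", 6, 1024, 80)  # hypothetical 5-tuple
nh = ecmp_next_hop(flow, next_hops)
```

Because the hash is computed per flow, packets of one flow stay on one path, which avoids reordering while still spreading different flows across both PEs.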

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
     tunnel-prefer segment-routing
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     segment-routing mpls
     segment-routing global-block 160000 161000
     avoid-microloop segment-routing
     avoid-microloop segment-routing rib-update-delay 6000
     frr
      loop-free-alternate level-1
      ti-lfa level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.16.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 10
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
     tunnel-prefer segment-routing
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     segment-routing mpls
     segment-routing global-block 160000 161000
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.16.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 20
    #
    return
  • PE3 configuration file

    #
    sysname PE3
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
     tunnel-prefer segment-routing
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     segment-routing mpls
     segment-routing global-block 160000 161000
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack0
     ip address 4.4.4.9 255.255.255.255
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 20
    #
    return

Example for Configuring SBFD to Monitor SR-MPLS BE Tunnels

This section provides an example for configuring SBFD to monitor SR-MPLS BE tunnels, which improves network reliability.

Networking Requirements

As shown in Figure 1-2662, SR-MPLS BE tunnels are established between PEs on the public network. To improve network reliability, configure SBFD. SBFD can be used to monitor the SR-MPLS BE tunnels. If the primary tunnel fails, applications such as VPN FRR are instructed to quickly switch traffic, minimizing the impact on services.

Figure 1-2662 SBFD for SR-MPLS BE tunnel

Interfaces 1 and 2 in this example represent GE 1/0/0 and GE 2/0/0, respectively.


Configuration Roadmap

The configuration roadmap is as follows:

  1. Enable IS-IS on the backbone network to ensure that PEs interwork with each other.

  2. Configure MPLS and Segment Routing on the backbone network to establish SR LSPs. Enable topology independent-loop free alternate (TI-LFA) FRR.

  3. Configure SBFD to establish sessions between PEs to monitor SR-MPLS BE tunnels.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of the PEs and P

  • SRGB ranges on the PEs and P

Procedure

  1. Assign an IP address to each interface.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 172.18.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip address 172.16.1.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 172.16.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 172.17.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 172.19.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 172.17.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 172.18.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 172.19.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP on the backbone network to enable the PEs to communicate. IS-IS is used as an example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] isis enable 1
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] isis enable 1
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions on the backbone network.

    Because MPLS is automatically enabled on interfaces where IS-IS has been enabled, you do not need to configure MPLS on these interfaces.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Configure SR on the backbone network and enable TI-LFA FRR.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 160000 161000

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-isis-1] frr
    [*PE1-isis-1-frr] loop-free-alternate level-1
    [*PE1-isis-1-frr] ti-lfa level-1
    [*PE1-isis-1-frr] quit
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid index 10
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] segment-routing global-block 160000 161000

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P1-isis-1] frr
    [*P1-isis-1-frr] loop-free-alternate level-1
    [*P1-isis-1-frr] ti-lfa level-1
    [*P1-isis-1-frr] quit
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis prefix-sid index 20
    [*P1-LoopBack1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 160000 161000

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE2-isis-1] frr
    [*PE2-isis-1-frr] loop-free-alternate level-1
    [*PE2-isis-1-frr] ti-lfa level-1
    [*PE2-isis-1-frr] quit
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid index 30
    [*PE2-LoopBack1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] segment-routing global-block 160000 161000

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P2-isis-1] frr
    [*P2-isis-1-frr] loop-free-alternate level-1
    [*P2-isis-1-frr] ti-lfa level-1
    [*P2-isis-1-frr] quit
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis prefix-sid index 40
    [*P2-LoopBack1] quit
    [*P2] commit

    # After completing the configuration, run the display tunnel-info all command on a PE to verify that the SR LSPs have been established. The following example uses the command output on PE1.

    [~PE1] display tunnel-info all
    Tunnel ID            Type                Destination                             Status
    ----------------------------------------------------------------------------------------
    0x000000002900000003 srbe-lsp            4.4.4.9                                 UP  
    0x000000002900000004 srbe-lsp            2.2.2.9                                 UP  
    0x000000002900000005 srbe-lsp            3.3.3.9                                 UP 

    # Run the ping lsp command on PE1 to check SR LSP connectivity.

    [~PE1] ping lsp segment-routing ip 3.3.3.9 32 version draft2
      LSP PING FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.9/32 : 100  data bytes, press CTRL_C to break
        Reply from 3.3.3.9: bytes=100 Sequence=1 time=12 ms
        Reply from 3.3.3.9: bytes=100 Sequence=2 time=5 ms
        Reply from 3.3.3.9: bytes=100 Sequence=3 time=5 ms
        Reply from 3.3.3.9: bytes=100 Sequence=4 time=5 ms
        Reply from 3.3.3.9: bytes=100 Sequence=5 time=5 ms
    
      --- FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.9/32 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 5/6/12 ms
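The statistics line can be re-derived from the five reply times. The following Python sketch is for illustration only (it is not a device feature); note that the device displays the average as an integer:

```python
# Sketch: derive the round-trip statistics shown in the ping output above
# from the per-reply times. Illustrative only, not device behavior.
reply_times_ms = [12, 5, 5, 5, 5]  # Sequence 1..5 from the ping output

transmitted = received = len(reply_times_ms)
loss_pct = 100.0 * (transmitted - received) / transmitted
rt_min = min(reply_times_ms)
rt_avg = sum(reply_times_ms) // len(reply_times_ms)  # displayed as an integer
rt_max = max(reply_times_ms)

print(f"{transmitted} packet(s) transmitted, {received} packet(s) received")
print(f"{loss_pct:.2f}% packet loss")
print(f"round-trip min/avg/max = {rt_min}/{rt_avg}/{rt_max} ms")  # 5/6/12 ms
```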

  5. Configure SBFD on PEs.

    # Configure PE1.

    [~PE1] bfd
    [*PE1-bfd] quit
    [*PE1] sbfd
    [*PE1-sbfd] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] seamless-bfd enable mode tunnel
    [*PE1-segment-routing] commit
    [~PE1-segment-routing] quit

    # Configure PE2.

    [~PE2] bfd
    [*PE2-bfd] quit
    [*PE2] sbfd
    [*PE2-sbfd] reflector discriminator 3.3.3.9
    [*PE2-sbfd] commit
    [~PE2-sbfd] quit

  6. Verify the configuration.

    Run the display segment-routing seamless-bfd tunnel session prefix ip-address command on a PE. The command output shows information about SBFD sessions that monitor SR tunnels.

    In the following example, the command output on PE1 is used.

    [~PE1] display segment-routing seamless-bfd tunnel session prefix 3.3.3.9 32
    Seamless BFD Information for SR Tunnel
    Total Tunnel Number: 1
    -------------------------------------------------------------------
    Prefix               Discriminator                  State          
    -------------------------------------------------------------------
    3.3.3.9/32           16385                          Up             
    -------------------------------------------------------------------
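An SBFD discriminator is a 32-bit value. On PE2, the reflector discriminator is entered in dotted-decimal form (3.3.3.9, matching its MPLS LSR ID). The following Python sketch, provided for illustration only, shows the unsigned 32-bit integer that this dotted notation corresponds to:

```python
# Sketch: convert a dotted-decimal SBFD discriminator to its 32-bit integer
# value. Illustrative only; the device accepts the dotted form directly.
import socket
import struct

def dotted_to_u32(dotted: str) -> int:
    """Return the unsigned 32-bit value of a dotted-decimal discriminator."""
    return struct.unpack("!I", socket.inet_aton(dotted))[0]

print(dotted_to_u32("3.3.3.9"))  # 50529033
```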

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    bfd
    #
    sbfd
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
     seamless-bfd enable mode tunnel
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     segment-routing mpls
     segment-routing global-block 160000 161000
     frr
      loop-free-alternate level-1
      ti-lfa level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.16.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 10
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     segment-routing mpls
     segment-routing global-block 160000 161000
     frr
      loop-free-alternate level-1
      ti-lfa level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.16.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.17.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 20
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    bfd
    #
    sbfd
     reflector discriminator 3.3.3.9
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     segment-routing mpls
     segment-routing global-block 160000 161000
     frr
      loop-free-alternate level-1
      ti-lfa level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.19.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.17.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 30
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     segment-routing mpls
     segment-routing global-block 160000 161000
     frr
      loop-free-alternate level-1
      ti-lfa level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.18.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.19.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 40
    #
    return

Example for Configuring TI-LFA FRR for an SR-MPLS BE Tunnel

Topology-Independent Loop-Free Alternate (TI-LFA) FRR can be configured to enhance the reliability of an SR network.

Networking Requirements

On the network shown in Figure 1-2663, IS-IS is enabled, and the SR-MPLS BE function is configured. The cost of the link between Device C and Device D is 100, and the cost of other links is 10. The optimal path from Device A to Device F is Device A -> Device B -> Device E -> Device F. TI-LFA FRR can be configured on Device B to provide local protection, enabling traffic to be quickly switched to the backup path (Device A -> Device B -> Device C -> Device D -> Device E -> Device F) when the link between Device B and Device E fails.

Figure 1-2663 Configuring TI-LFA FRR for an SR-MPLS BE tunnel

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Precautions

In this example, TI-LFA FRR is configured on Device B to protect the link between Device B and Device E. On a live network, you are advised to configure TI-LFA FRR on all nodes in the SR domain.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the entire network to implement interworking between devices.

  2. Enable MPLS on the entire network, configure SR, and establish an SR-MPLS BE tunnel.

  3. Enable TI-LFA FRR and anti-microloop on Device B.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR ID of each device

  • SRGB range of each device

Procedure

  1. Configure interface IP addresses.

    # Configure Device A.

    <HUAWEI> system-view
    [~HUAWEI] sysname DeviceA
    [*HUAWEI] commit
    [~DeviceA] interface loopback 1
    [*DeviceA-LoopBack1] ip address 1.1.1.9 32
    [*DeviceA-LoopBack1] quit
    [*DeviceA] interface gigabitethernet1/0/0
    [*DeviceA-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*DeviceA-GigabitEthernet1/0/0] quit
    [*DeviceA] commit

    Repeat this step for the other devices. For configuration details, see Configuration Files in this section.

  2. Configure an IGP to implement interworking. IS-IS is used as an example.

    # Configure Device A.

    [~DeviceA] isis 1
    [*DeviceA-isis-1] is-level level-1
    [*DeviceA-isis-1] network-entity 10.0000.0000.0001.00
    [*DeviceA-isis-1] cost-style wide
    [*DeviceA-isis-1] quit
    [*DeviceA] interface loopback 1
    [*DeviceA-LoopBack1] isis enable 1
    [*DeviceA-LoopBack1] quit
    [*DeviceA] interface gigabitethernet1/0/0
    [*DeviceA-GigabitEthernet1/0/0] isis enable 1
    [*DeviceA-GigabitEthernet1/0/0] quit
    [*DeviceA] commit

    Repeat this step for the other devices. For configuration details, see Configuration Files in this section.

    Set the cost of the link between Device C and Device D to 100 so that this link is not on the optimal path and functions only as part of the backup path.

    # Configure Device C.

    [~DeviceC] interface gigabitethernet2/0/0
    [~DeviceC-GigabitEthernet2/0/0] isis cost 100
    [*DeviceC-GigabitEthernet2/0/0] quit
    [*DeviceC] commit

    # Configure Device D.

    [~DeviceD] interface gigabitethernet1/0/0
    [~DeviceD-GigabitEthernet1/0/0] isis cost 100
    [*DeviceD-GigabitEthernet1/0/0] quit
    [*DeviceD] commit

  3. Configure basic MPLS functions on the backbone network.

    MPLS is automatically enabled on interfaces where IS-IS has been enabled, so you do not need to configure MPLS on these interfaces.

    # Configure Device A.

    [~DeviceA] mpls lsr-id 1.1.1.9
    [*DeviceA] mpls
    [*DeviceA-mpls] commit
    [~DeviceA-mpls] quit

    # Configure Device B.

    [~DeviceB] mpls lsr-id 2.2.2.9
    [*DeviceB] mpls
    [*DeviceB-mpls] commit
    [~DeviceB-mpls] quit

    # Configure Device C.

    [~DeviceC] mpls lsr-id 3.3.3.9
    [*DeviceC] mpls
    [*DeviceC-mpls] commit
    [~DeviceC-mpls] quit

    # Configure Device D.

    [~DeviceD] mpls lsr-id 4.4.4.9
    [*DeviceD] mpls
    [*DeviceD-mpls] commit
    [~DeviceD-mpls] quit

    # Configure Device E.

    [~DeviceE] mpls lsr-id 5.5.5.9
    [*DeviceE] mpls
    [*DeviceE-mpls] commit
    [~DeviceE-mpls] quit

    # Configure Device F.

    [~DeviceF] mpls lsr-id 6.6.6.9
    [*DeviceF] mpls
    [*DeviceF-mpls] commit
    [~DeviceF-mpls] quit

  4. Configure SR on the backbone network and establish an SR-MPLS BE tunnel.

    # Configure Device A.

    [~DeviceA] segment-routing
    [*DeviceA-segment-routing] quit
    [*DeviceA] isis 1
    [*DeviceA-isis-1] segment-routing mpls
    [*DeviceA-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*DeviceA-isis-1] quit
    [*DeviceA] interface loopback 1
    [*DeviceA-LoopBack1] isis prefix-sid index 10
    [*DeviceA-LoopBack1] quit
    [*DeviceA] commit

    # Configure Device B.

    [~DeviceB] segment-routing
    [*DeviceB-segment-routing] quit
    [*DeviceB] isis 1
    [*DeviceB-isis-1] segment-routing mpls
    [*DeviceB-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*DeviceB-isis-1] quit
    [*DeviceB] interface loopback 1
    [*DeviceB-LoopBack1] isis prefix-sid index 20
    [*DeviceB-LoopBack1] quit
    [*DeviceB] commit

    # Configure Device C.

    [~DeviceC] segment-routing
    [*DeviceC-segment-routing] quit
    [*DeviceC] isis 1
    [*DeviceC-isis-1] segment-routing mpls
    [*DeviceC-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*DeviceC-isis-1] quit
    [*DeviceC] interface loopback 1
    [*DeviceC-LoopBack1] isis prefix-sid index 30
    [*DeviceC-LoopBack1] quit
    [*DeviceC] commit

    # Configure Device D.

    [~DeviceD] segment-routing
    [*DeviceD-segment-routing] quit
    [*DeviceD] isis 1
    [*DeviceD-isis-1] segment-routing mpls
    [*DeviceD-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*DeviceD-isis-1] quit
    [*DeviceD] interface loopback 1
    [*DeviceD-LoopBack1] isis prefix-sid index 40
    [*DeviceD-LoopBack1] quit
    [*DeviceD] commit

    # Configure Device E.

    [~DeviceE] segment-routing
    [*DeviceE-segment-routing] quit
    [*DeviceE] isis 1
    [*DeviceE-isis-1] segment-routing mpls
    [*DeviceE-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*DeviceE-isis-1] quit
    [*DeviceE] interface loopback 1
    [*DeviceE-LoopBack1] isis prefix-sid index 50
    [*DeviceE-LoopBack1] quit
    [*DeviceE] commit

    # Configure Device F.

    [~DeviceF] segment-routing
    [*DeviceF-segment-routing] quit
    [*DeviceF] isis 1
    [*DeviceF-isis-1] segment-routing mpls
    [*DeviceF-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*DeviceF-isis-1] quit
    [*DeviceF] interface loopback 1
    [*DeviceF-LoopBack1] isis prefix-sid index 60
    [*DeviceF-LoopBack1] quit
    [*DeviceF] commit

    # After completing the configurations, run the display segment-routing prefix mpls forwarding command on each device. The command output shows that SR-MPLS BE label forwarding paths have been established. The following example uses the command output on Device A.

    [~DeviceA] display segment-routing prefix mpls forwarding
                       Segment Routing Prefix MPLS Forwarding Information
                 --------------------------------------------------------------
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State          
    -----------------------------------------------------------------------------------------------------------------
    1.1.1.9/32         16010      NULL       Loop1             127.0.0.1        E     ---       1500    Active          
    2.2.2.9/32         16020      3          GE1/0/0           10.1.1.2         I&T   ---       1500    Active          
    3.3.3.9/32         16030      16030      GE1/0/0           10.1.1.2         I&T   ---       1500    Active          
    4.4.4.9/32         16040      16040      GE1/0/0           10.1.1.2         I&T   ---       1500    Active          
    5.5.5.9/32         16050      16050      GE1/0/0           10.1.1.2         I&T   ---       1500    Active          
    6.6.6.9/32         16060      16060      GE1/0/0           10.1.1.2         I&T   ---       1500    Active          
    
    Total information(s): 6
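The labels in the preceding output can be cross-checked by hand: each incoming label equals the SRGB base plus the prefix SID index configured on the advertising device. The following Python sketch (illustrative only, not a device feature) reproduces the Label column using this example's SRGB and index values:

```python
# Sketch: with an SRGB of 16000-23999 on every device, the incoming label
# for each prefix SID is SRGB base + SID index. The indexes below come from
# this example's "isis prefix-sid index" configuration.
SRGB_BASE = 16000
prefix_sid_index = {
    "1.1.1.9/32": 10,  # Device A
    "2.2.2.9/32": 20,  # Device B
    "3.3.3.9/32": 30,  # Device C
    "4.4.4.9/32": 40,  # Device D
    "5.5.5.9/32": 50,  # Device E
    "6.6.6.9/32": 60,  # Device F
}

labels = {prefix: SRGB_BASE + idx for prefix, idx in prefix_sid_index.items()}
for prefix, label in labels.items():
    print(prefix, label)  # matches the Label column, e.g. 1.1.1.9/32 -> 16010
```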

  5. Configure TI-LFA FRR.

    # Configure Device B.

    [~DeviceB] isis 1
    [~DeviceB-isis-1] avoid-microloop frr-protected
    [*DeviceB-isis-1] avoid-microloop frr-protected rib-update-delay 5000
    [*DeviceB-isis-1] avoid-microloop segment-routing
    [*DeviceB-isis-1] avoid-microloop segment-routing rib-update-delay 10000
    [*DeviceB-isis-1] frr
    [*DeviceB-isis-1-frr] loop-free-alternate level-1
    [*DeviceB-isis-1-frr] ti-lfa level-1
    [*DeviceB-isis-1-frr] quit
    [*DeviceB-isis-1] quit
    [*DeviceB] commit

    After completing the configurations, run the display isis route [ level-1 | level-2 ] [ process-id ] [ verbose ] command on Device B. The command output shows IS-IS TI-LFA FRR backup entries.

    [~DeviceB] display isis route level-1 verbose
                             Route information for ISIS(1)
                             -----------------------------
    
                            ISIS(1) Level-1 Forwarding Table
                            --------------------------------
    
    
     IPV4 Dest  : 1.1.1.9/32         Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : A/-/-/-
     Priority   : Medium             Age       : 04:35:30
     NextHop    :                    Interface :               ExitIndex :
        10.1.1.1                           GE1/0/0                    0x0000000e
     Prefix-sid : 16010              Weight    : 0             Flags     : -/N/-/-/-/-/A/-
     SR NextHop :                    Interface :               OutLabel  :
        10.1.1.1                           GE1/0/0                    3
    
     IPV4 Dest  : 2.2.2.9/32         Int. Cost : 0             Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : D/-/L/-
     Priority   : -                  Age       : 04:35:30
     NextHop    :                    Interface :               ExitIndex :
        Direct                             Loop1                      0x00000000
     Prefix-sid : 16020              Weight    : 0             Flags     : -/N/-/-/-/-/A/L
     SR NextHop :                    Interface :               OutLabel  :
        Direct                             Loop1                      -
    
     IPV4 Dest  : 3.3.3.9/32         Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : A/-/-/-
     Priority   : Medium             Age       : 04:09:15
     NextHop    :                    Interface :               ExitIndex :
        10.2.1.2                           GE2/0/0                     0x0000000a
     TI-LFA:        
     Interface  : GE3/0/0                                                               
     NextHop    : 10.5.1.2           LsIndex    : 0x00000002   ProtectType: L
     Backup Label Stack (Top -> Bottom): {16040, 48141}
     Prefix-sid : 16030              Weight    : 0             Flags     : -/N/-/-/-/-/A/-
     SR NextHop :                    Interface :               OutLabel  :
        10.2.1.2                           GE2/0/0                    3
     TI-LFA:        
     Interface  : GE3/0/0                                                               
     NextHop    : 10.5.1.2           LsIndex    : 0x00000002   ProtectType: L
     Backup Label Stack (Top -> Bottom): {16040, 48141}
    
     IPV4 Dest  : 4.4.4.9/32         Int. Cost : 20            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : A/-/-/-
     Priority   : Medium             Age       : 04:09:15
     NextHop    :                    Interface :               ExitIndex :
        10.5.1.2                           GE3/0/0                    0x00000007
     TI-LFA:        
     Interface  : GE2/0/0                                                               
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: N
     Backup Label Stack (Top -> Bottom): {48142}
     Prefix-sid : 16040              Weight    : 0             Flags     : -/N/-/-/-/-/A/-
     SR NextHop :                    Interface :               OutLabel  :
        10.5.1.2                           GE3/0/0                    16040
     TI-LFA:        
     Interface  : GE2/0/0                                                               
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: N
     Backup Label Stack (Top -> Bottom): {48142}
    
     IPV4 Dest  : 5.5.5.9/32         Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : A/-/-/-
     Priority   : Medium             Age       : 04:09:15
     NextHop    :                    Interface :               ExitIndex :
        10.5.1.2                           GE3/0/0                    0x00000007
     TI-LFA:        
     Interface  : GE2/0/0                                                               
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {48142}
     Prefix-sid : 16050              Weight    : 0             Flags     : -/N/-/-/-/-/A/-
     SR NextHop :                    Interface :               OutLabel  :
        10.5.1.2                           GE3/0/0                    3
     TI-LFA:        
     Interface  : GE2/0/0                                                               
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {48142}
    
     IPV4 Dest  : 6.6.6.9/32         Int. Cost : 20            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : A/-/-/-
     Priority   : Medium             Age       : 04:09:15
     NextHop    :                    Interface :               ExitIndex :
        10.5.1.2                           GE3/0/0                    0x00000007
     TI-LFA:        
     Interface  : GE2/0/0                                                               
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {48142}
     Prefix-sid : 16060              Weight    : 0             Flags     : -/N/-/-/-/-/A/-
     SR NextHop :                    Interface :               OutLabel  :
        10.5.1.2                           GE3/0/0                    16060
     TI-LFA:        
     Interface  : GE2/0/0                                                               
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {48142}
    
     IPV4 Dest  : 10.1.1.0/24        Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 2             Flags     : D/-/L/-
     Priority   : -                  Age       : 04:35:30
     NextHop    :                    Interface :               ExitIndex :
        Direct                             GE1/0/0                    0x00000000
    
     IPV4 Dest  : 10.2.1.0/24        Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 2             Flags     : D/-/L/-
     Priority   : -                  Age       : 04:35:30
     NextHop    :                    Interface :               ExitIndex :
        Direct                             GE2/0/0                    0x00000000
    
     IPV4 Dest  : 10.3.1.0/24        Int. Cost : 110           Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 2             Flags     : A/-/-/-
     Priority   : Low                Age       : 04:09:15
     NextHop    :                    Interface :               ExitIndex :
        10.2.1.2                           GE2/0/0                    0x0000000a
     TI-LFA:        
     Interface  : GE3/0/0                                                               
     NextHop    : 10.5.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {}
    
     IPV4 Dest  : 10.4.1.0/24        Int. Cost : 20            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 2             Flags     : A/-/-/-
     Priority   : Low                Age       : 04:09:15
     NextHop    :                    Interface :               ExitIndex :
        10.5.1.2                           GE3/0/0                    0x00000007
     TI-LFA:        
     Interface  : GE2/0/0                                                               
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {48142}
    
     IPV4 Dest  : 10.5.1.0/24        Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 2             Flags     : D/-/L/-
     Priority   : -                  Age       : 04:09:37
     NextHop    :                    Interface :               ExitIndex :
        Direct                             GE3/0/0                    0x00000000
    
     IPV4 Dest  : 10.6.1.0/24        Int. Cost : 20            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 2             Flags     : A/-/-/-
     Priority   : Low                Age       : 04:09:15
     NextHop    :                    Interface :               ExitIndex :
        10.5.1.2                           GE3/0/0                    0x00000007
     TI-LFA:        
     Interface  : GE2/0/0                                                               
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {48142}
         Flags: D-Direct, A-Added to URT, L-Advertised in LSPs, S-IGP Shortcut, 
                U-Up/Down Bit Set, LP-Local Prefix-Sid
         Protect Type: L-Link Protect, N-Node Protect
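When verifying TI-LFA on many nodes, the backup label stacks in this kind of output can also be extracted programmatically. The helper below is a hypothetical illustration, not a device feature; it relies only on the literal "Backup Label Stack (Top -> Bottom): {...}" lines shown above:

```python
# Sketch: parse TI-LFA backup label stacks out of "display isis route ...
# verbose" output. Hypothetical helper for automated checks; illustrative only.
import re

def backup_label_stacks(display_output: str) -> list:
    """Return each backup label stack as a list of labels, top label first."""
    stacks = []
    pattern = r"Backup Label Stack \(Top -> Bottom\): \{([^}]*)\}"
    for match in re.finditer(pattern, display_output):
        body = match.group(1).strip()
        stacks.append([int(label) for label in body.split(",")] if body else [])
    return stacks

sample = """
 Backup Label Stack (Top -> Bottom): {16040, 48141}
 Backup Label Stack (Top -> Bottom): {48142}
 Backup Label Stack (Top -> Bottom): {}
"""
print(backup_label_stacks(sample))  # [[16040, 48141], [48142], []]
```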

  6. Verify the configuration.

    # Run a tracert command on Device A to check the connectivity of the SR-MPLS BE tunnel to Device F. For example:

    [~DeviceA] tracert lsp segment-routing ip 6.6.6.9 32 version draft2
      LSP Trace Route FEC: SEGMENT ROUTING IPV4 PREFIX 6.6.6.9/32 , press CTRL_C to break.
      TTL    Replier            Time    Type      Downstream
      0                                 Ingress   10.1.1.2/[16060 ]
      1      10.1.1.2           291 ms  Transit   10.5.1.2/[16060 ]
      2      10.5.1.2           10 ms   Transit   10.6.1.2/[3 ]
      3      6.6.6.9            11 ms   Egress

    # Run the shutdown command on GE 3/0/0 of Device B to simulate a link fault between Device B and Device E.

    [~DeviceB] interface gigabitethernet3/0/0
    [~DeviceB-GigabitEthernet3/0/0] shutdown
    [*DeviceB-GigabitEthernet3/0/0] quit
    [*DeviceB] commit

    # Run the tracert command on Device A again to check the connectivity of the SR-MPLS BE tunnel. For example:

    [~DeviceA] tracert lsp segment-routing ip 6.6.6.9 32 version draft2
      LSP Trace Route FEC: SEGMENT ROUTING IPV4 PREFIX 6.6.6.9/32 , press CTRL_C to break.
      TTL    Replier            Time    Type      Downstream
      0                                 Ingress   10.1.1.2/[16060 ]
      1      10.1.1.2           3 ms    Transit   10.2.1.2/[16060 ]
      2      10.2.1.2           46 ms   Transit   10.3.1.2/[16060 ]
      3      10.3.1.2           33 ms   Transit   10.4.1.2/[16060 ]
      4      10.4.1.2           48 ms   Transit   10.6.1.2/[3 ]
      5      6.6.6.9            4 ms    Egress

    The preceding command output shows that the SR-MPLS BE tunnel has been switched to the TI-LFA FRR backup path.
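The switchover can be cross-checked against the link costs used in this example. The following Python sketch (for illustration only) runs a shortest-path computation over this example's topology, with all links at cost 10 except the Device C to Device D link at cost 100, and confirms both the primary path and the path taken after the Device B to Device E link fails:

```python
# Sketch: recompute the A->F paths from this example's link costs to confirm
# the tracert results. Illustrative only; uses a plain Dijkstra search.
import heapq

LINKS = [("A", "B", 10), ("B", "C", 10), ("C", "D", 100),
         ("D", "E", 10), ("B", "E", 10), ("E", "F", 10)]

def shortest_path(links, src, dst):
    graph = {}
    for u, v, cost in links:
        graph.setdefault(u, []).append((v, cost))
        graph.setdefault(v, []).append((u, cost))
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return None

print(shortest_path(LINKS, "A", "F"))   # (30, ['A', 'B', 'E', 'F'])
failed = [l for l in LINKS if {l[0], l[1]} != {"B", "E"}]  # B-E link down
print(shortest_path(failed, "A", "F"))  # (140, ['A', 'B', 'C', 'D', 'E', 'F'])
```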

Configuration Files

  • Device A configuration file

    #
    sysname DeviceA
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.1.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 10
    #
    return
  • Device B configuration file

    #
    sysname DeviceB
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     avoid-microloop frr-protected
     avoid-microloop frr-protected rib-update-delay 5000
     segment-routing mpls
     segment-routing global-block 16000 23999
     avoid-microloop segment-routing
     avoid-microloop segment-routing rib-update-delay 10000
     frr
      loop-free-alternate level-1
      ti-lfa level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.1.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.2.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.5.1.1 255.255.255.0
     isis enable 1 
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 20
    #
    return
  • Device C configuration file

    #
    sysname DeviceC
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.2.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.3.1.1 255.255.255.0
     isis enable 1 
     isis cost 100
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 30
    #
    return
  • Device D configuration file

    #
    sysname DeviceD
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.3.1.2 255.255.255.0
     isis enable 1  
     isis cost 100
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.4.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 40
    #
    return
  • Device E configuration file

    #
    sysname DeviceE
    #
    mpls lsr-id 5.5.5.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0005.00
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.6.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.4.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.5.1.2 255.255.255.0
     isis enable 1 
    #               
    interface LoopBack1
     ip address 5.5.5.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 50
    #
    return
  • Device F configuration file

    #
    sysname DeviceF
    #
    mpls lsr-id 6.6.6.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0006.00
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.6.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 6.6.6.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 60
    #
    return

Example for Configuring OSPF SR-MPLS BE over GRE

This section provides an example for configuring OSPF SR-MPLS BE over GRE, which involves enabling OSPF on each device, specifying network segments in different areas, and configuring GRE.

Networking Requirements

On the network shown in Figure 1-2664, enable OSPF and configure SR-MPLS BE. In addition, set the cost of the GRE tunnel between DeviceA and DeviceB to 100 and that of the other links to 10. The optimal path from DeviceA to DeviceD is DeviceA → DeviceD. Configure TI-LFA FRR on DeviceA to implement local protection. In this way, if the link between DeviceA and DeviceD fails, traffic can be rapidly switched to the backup path DeviceA → DeviceB → DeviceC → DeviceD. The traffic between DeviceA and DeviceB is forwarded over a GRE tunnel, which is bound to the link DeviceA → DeviceE → DeviceF → DeviceB.

Figure 1-2664 Configuring OSPF SR-MPLS BE over GRE

Interfaces 1 and 2 in this example represent GE 1/0/0 and GE 2/0/0, respectively.


Configuration Notes

None

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure OSPF on the entire network to implement interworking between devices.
  2. Configure SR and establish an SR-MPLS BE tunnel.
  3. Establish a GRE tunnel between DeviceA and DeviceB, and advertise the tunnel's network segment in area 1 (the underlay links between DeviceA and DeviceB reside in area 0).
  4. Set costs for links to ensure that the link DeviceA → DeviceD is preferentially used for traffic forwarding.
  5. Configure TI-LFA FRR.

Data Preparation

To complete the configuration, you need the following data:

  • DeviceA: router ID 1.1.1.1, OSPF process ID 1, area 0's network segment 10.2.1.0/24, and area 1's network segments 10.1.1.0/24 and 10.7.1.0/24
  • DeviceB: router ID 2.2.2.2, OSPF process ID 1, area 0's network segment 10.4.1.0/24, and area 1's network segments 10.6.1.0/24 and 10.7.1.0/24
  • DeviceC: router ID 3.3.3.3, OSPF process ID 1, and area 1's network segments 10.6.1.0/24 and 10.5.1.0/24
  • DeviceD: router ID 4.4.4.4, OSPF process ID 1, and area 1's network segments 10.1.1.0/24 and 10.5.1.0/24
  • DeviceE: router ID 5.5.5.5, OSPF process ID 1, and area 0's network segments 10.2.1.0/24 and 10.3.1.0/24
  • DeviceF: router ID 6.6.6.6, OSPF process ID 1, and area 0's network segments 10.3.1.0/24 and 10.4.1.0/24

Procedure

  1. Configure interface IP addresses.

    # Configure DeviceA.

    <HUAWEI> system-view
    [~HUAWEI] sysname DeviceA
    [*HUAWEI] commit
    [~DeviceA] interface loopback 0
    [*DeviceA-LoopBack0] ip address 2.2.2.9 32
    [*DeviceA-LoopBack0] quit
    [*DeviceA] interface gigabitethernet1/0/0
    [*DeviceA-GigabitEthernet1/0/0] ip address 10.2.1.1 24
    [*DeviceA-GigabitEthernet1/0/0] quit
    [*DeviceA] commit
    [~DeviceA] interface gigabitethernet2/0/0
    [~DeviceA-GigabitEthernet2/0/0] ip address 10.1.1.2 255.255.255.0
    [*DeviceA-GigabitEthernet2/0/0] quit
    [*DeviceA] commit

    The configurations of other devices are similar to the configuration of DeviceA. For configuration details, see the configuration files.

  2. Configure OSPF to implement interworking.

    # Configure DeviceA.

    [~DeviceA] ospf 1
    [*DeviceA-ospf-1] router-id 1.1.1.1
    [*DeviceA-ospf-1] area 0.0.0.0
    [*DeviceA-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
    [*DeviceA-ospf-1-area-0.0.0.0] area 0.0.0.1
    [*DeviceA-ospf-1-area-0.0.0.1] network 10.1.1.0 0.0.0.255 
    [*DeviceA-ospf-1-area-0.0.0.1] network 10.7.1.0 0.0.0.255 
    [*DeviceA-ospf-1-area-0.0.0.1] quit
    [*DeviceA-ospf-1] quit
    [*DeviceA] commit

    The configurations of other devices are similar to the configuration of DeviceA. For configuration details, see the configuration files.
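The network commands in the preceding step use OSPF wildcard masks, which are the bitwise inverse of the subnet mask. The following minimal Python sketch (function name illustrative) shows the conversion; the /24 case matches the 0.0.0.255 masks used in this example.

```python
def wildcard_mask(prefix_len: int) -> str:
    """Return the OSPF wildcard mask (inverted subnet mask) for a prefix length."""
    inverse = (1 << (32 - prefix_len)) - 1
    return ".".join(str((inverse >> shift) & 0xFF) for shift in (24, 16, 8, 0))

assert wildcard_mask(24) == "0.0.0.255"  # the /24 segments in this example
assert wildcard_mask(32) == "0.0.0.0"    # a host route, such as a loopback
```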

  3. Configure SR and establish an SR-MPLS BE tunnel.

    # Configure DeviceA.

    [~DeviceA] segment-routing 
    [*DeviceA-segment-routing] quit
    [*DeviceA] ospf 1
    [*DeviceA-ospf-1] segment-routing mpls
    [*DeviceA-ospf-1] segment-routing global-block 16000 16999 
    [*DeviceA-ospf-1] quit 
    [*DeviceA] interface LoopBack0
    [*DeviceA-LoopBack0] ospf enable 1 area 0.0.0.1
    [*DeviceA-LoopBack0] ospf prefix-sid index 100
    [*DeviceA-LoopBack0] quit
    [*DeviceA] commit

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    The configurations of DeviceB, DeviceC, and DeviceD are similar to the configuration of DeviceA. For configuration details, see the configuration files.

    # After completing the configuration, run the display segment-routing prefix mpls forwarding command on each device. The command output shows that SR-MPLS BE LSPs have been established. DeviceA is used as an example.

    [~DeviceA] display segment-routing prefix mpls forwarding
                     Segment Routing Prefix MPLS Forwarding Information
                 --------------------------------------------------------------
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit
     
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State          
    ------------------------------------------------------------------------------------------------------------
    2.2.2.9/32        16100      NULL       Loop0             127.0.0.1        E     ---       1500    Active          
    5.5.5.9/32        16200      19200      GE2/0/0          10.1.1.1         I&T   ---       1500    Active          
    6.6.6.9/32        16300      19300      GE2/0/0          10.1.1.1         I&T   ---       1500    Active          
    1.1.1.9/32        16400      3          GE2/0/0          10.1.1.1         I&T   ---       1500    Active          
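The labels in this output follow directly from SR-MPLS label derivation: each node's incoming label for a prefix SID is its own SRGB base plus the advertised SID index, and the out label is computed against the next hop's SRGB base. A minimal Python sketch using the SRGB bases from this example (DeviceA 16000, DeviceD 19000; function name illustrative):

```python
def prefix_sid_label(srgb_base: int, sid_index: int) -> int:
    """Derive the MPLS label for a prefix SID from a node's SRGB base."""
    return srgb_base + sid_index

# DeviceA's local labels (SRGB base 16000):
assert prefix_sid_label(16000, 100) == 16100  # 2.2.2.9/32
assert prefix_sid_label(16000, 400) == 16400  # 1.1.1.9/32
# Out labels toward next hop DeviceD (SRGB base 19000):
assert prefix_sid_label(19000, 200) == 19200  # 5.5.5.9/32
assert prefix_sid_label(19000, 300) == 19300  # 6.6.6.9/32
```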

  4. Establish a GRE tunnel.

    # Configure DeviceA.

    [~DeviceA] interface Tunnel1
    [*DeviceA-Tunnel1] ip address 10.7.1.1 24
    [*DeviceA-Tunnel1] tunnel-protocol gre
    [*DeviceA-Tunnel1] source 10.2.1.1
    [*DeviceA-Tunnel1] destination 10.4.1.2
    [*DeviceA-Tunnel1] quit
    [*DeviceA] interface gigabitethernet1/0/0
    [*DeviceA-GigabitEthernet1/0/0] binding tunnel gre
    [*DeviceA-GigabitEthernet1/0/0] quit
    [*DeviceA] commit

    # Configure DeviceB.

    [~DeviceB] interface Tunnel1
    [*DeviceB-Tunnel1] ip address 10.7.1.2 24
    [*DeviceB-Tunnel1] tunnel-protocol gre
    [*DeviceB-Tunnel1] source 10.4.1.2
    [*DeviceB-Tunnel1] destination 10.2.1.1
    [*DeviceB-Tunnel1] quit
    [*DeviceB] interface gigabitethernet1/0/0
    [*DeviceB-GigabitEthernet1/0/0] binding tunnel gre
    [*DeviceB-GigabitEthernet1/0/0] quit
    [*DeviceB] commit
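As a rough sketch of why GRE affects the usable packet size: GRE encapsulation adds an outer IPv4 header (20 bytes) plus a base GRE header (4 bytes), so the payload that fits within the physical interface's MTU shrinks accordingly. The numbers below assume no optional GRE key or checksum fields.

```python
OUTER_IPV4_HEADER = 20  # bytes added by the outer delivery IPv4 header
GRE_BASE_HEADER = 4     # bytes added by a GRE header without optional fields

def gre_payload_mtu(physical_mtu: int) -> int:
    """Largest inner packet that fits in one outer packet after GRE encapsulation."""
    return physical_mtu - OUTER_IPV4_HEADER - GRE_BASE_HEADER

assert gre_payload_mtu(1500) == 1476
```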

  5. Set costs for links to ensure that the link DeviceA → DeviceD is preferentially used for traffic forwarding.

    # Configure DeviceA.

    [~DeviceA] interface gigabitethernet2/0/0
    [~DeviceA-GigabitEthernet2/0/0] ospf cost 10
    [*DeviceA-GigabitEthernet2/0/0] quit
    [*DeviceA] commit
    [~DeviceA] interface Tunnel1
    [~DeviceA-Tunnel1] ospf cost 100
    [*DeviceA-Tunnel1] quit
    [*DeviceA] commit

    The configuration of DeviceB is similar to that of DeviceA. For configuration details, see the configuration file.

    # Configure DeviceC.

    [~DeviceC] interface gigabitethernet1/0/0
    [*DeviceC-GigabitEthernet1/0/0] ospf cost 10
    [*DeviceC-GigabitEthernet1/0/0] quit
    [*DeviceC] commit
    [~DeviceC] interface gigabitethernet2/0/0
    [~DeviceC-GigabitEthernet2/0/0] ospf cost 10
    [*DeviceC-GigabitEthernet2/0/0] quit
    [*DeviceC] commit

    The configuration of DeviceD is similar to that of DeviceC. For configuration details, see the configuration file.
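With these costs, SPF prefers the direct DeviceA → DeviceD link (cost 10) over the path through the GRE tunnel (100 + 10 + 10 = 120), which is exactly why the tunnel path serves only as the backup. The following Python sketch of shortest-path selection uses the logical topology and costs from this example (node names abbreviated to A through D; function names illustrative):

```python
import heapq

# Link costs from this example: the GRE tunnel A-B costs 100, other links cost 10.
COSTS = {("A", "D"): 10, ("A", "B"): 100, ("B", "C"): 10, ("C", "D"): 10}

def shortest_cost(src, dst, down=()):
    """Dijkstra over the example topology; 'down' lists failed links to skip."""
    adj = {}
    for (u, v), c in COSTS.items():
        if (u, v) in down or (v, u) in down:
            continue
        adj.setdefault(u, {})[v] = c
        adj.setdefault(v, {})[u] = c
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        for nbr, c in adj.get(node, {}).items():
            if d + c < dist.get(nbr, float("inf")):
                dist[nbr] = d + c
                heapq.heappush(heap, (d + c, nbr))
    return None

assert shortest_cost("A", "D") == 10                      # primary: direct link
assert shortest_cost("A", "D", down=[("A", "D")]) == 120  # backup: via GRE tunnel
```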

  6. Configure TI-LFA FRR.

    # Configure DeviceA.

    [~DeviceA] ospf 1
    [~DeviceA-ospf-1] avoid-microloop frr-protected
    [*DeviceA-ospf-1] avoid-microloop frr-protected rib-update-delay 5000
    [*DeviceA-ospf-1] avoid-microloop segment-routing
    [*DeviceA-ospf-1] avoid-microloop segment-routing rib-update-delay 10000
    [*DeviceA-ospf-1] frr
    [*DeviceA-ospf-1-frr] loop-free-alternate
    [*DeviceA-ospf-1-frr] ti-lfa enable
    [*DeviceA-ospf-1-frr] quit
    [*DeviceA-ospf-1] quit
    [*DeviceA] commit

    After completing the configuration, run the display ospf routing command on DeviceA to check route information.

    [~DeviceA] display ospf 1 routing 1.1.1.9
            OSPF Process 1 with Router ID 1.1.1.1
     
     Destination    : 1.1.1.9/32
     AdverRouter    : 4.4.4.4                  Area                : 0.0.0.1
     Cost           : 10                       Type                : Stub
     NextHop        : 10.1.1.1                 Interface           : GE2/0/0
     Priority       : Medium                   Age                 : 00h28m04s
     Backup NextHop : 10.7.1.2                 Backup Interface    : Tun1
     Backup Type    : TI-LFA LINK              
     BakLabelStack  : -   

    The command output shows that, through FRR computation, DeviceA has generated a backup link whose outbound interface is a GRE tunnel interface.

    # Run the display ospf segment-routing routing command on DeviceA to check route information.

    [~DeviceA] display ospf 1 segment-routing routing 1.1.1.9
            OSPF Process 1 with Router ID 1.1.1.1
     
     Destination      : 1.1.1.9/32              
     AdverRouter      : 4.4.4.4                  Area             : 0.0.0.1
     In-Label         : 16400                    Out-Label        : 3                         
     Type             : Stub                     Age              : 00h01m15s
     Prefix-sid       : 400                      Flags            : -|N|-|-|-|-|-|-
     SR-Flags         : -|-|-|-|-|-|-|-           
     NextHop          : 10.1.1.1                 Interface        : GE2/0/0                  
     Backup NextHop   : 10.7.1.2                 Backup Interface : Tun1                      
     Backup Type      : TI-LFA LINK              
     BakLabelStack    : -                                                                        
     BakOutLabel      : 17400

    The command output shows that, through FRR computation, DeviceA has generated a backup link whose outbound interface is a GRE tunnel interface. The backup out label 17400 is DeviceB's SRGB base (17000) plus the prefix SID index (400) of 1.1.1.9/32.

  7. Configure SR-MPLS BE over GRE.

    # Configure DeviceA.

    [~DeviceA] ospf 1
    [*DeviceA-ospf-1] segment-routing mpls over gre
    [*DeviceA-ospf-1] quit
    [*DeviceA] commit

    The configuration of DeviceB is similar to that of DeviceA. For configuration details, see the configuration file.

  8. Verify the configuration.

    # Run the tracert command on DeviceA to check the connectivity of the SR-MPLS BE tunnel to DeviceD. For example:

    [~DeviceA] tracert lsp segment-routing ip 1.1.1.9 32 version draft2
     LSP Trace Route FEC: SEGMENT ROUTING IPV4 PREFIX 1.1.1.9/32 , press CTRL_C to break.
      TTL    Replier            Time    Type      Downstream
      0                                 Ingress   10.1.1.1/[3 ]
      1      1.1.1.9           9 ms    Egress  

    # Run the shutdown command on GE 2/0/0 of DeviceA to simulate a link failure between DeviceA and DeviceD.

    [~DeviceA] interface gigabitethernet2/0/0
    [~DeviceA-GigabitEthernet2/0/0] shutdown
    [*DeviceA-GigabitEthernet2/0/0] quit
    [*DeviceA] commit

    # Run the tracert command on DeviceA again to check the connectivity of the SR-MPLS BE tunnel. For example:

    [~DeviceA] tracert lsp segment-routing ip 1.1.1.9 32 version draft2
     LSP Trace Route FEC: SEGMENT ROUTING IPV4 PREFIX 1.1.1.9/32 , press CTRL_C to break.
      TTL    Replier            Time    Type      Downstream
      0                                 Ingress   10.7.1.2/[17400 ]
      1      10.7.1.2           125 ms  Transit   10.6.1.2/[18400 ]
      2      10.6.1.2           130 ms  Transit   10.5.1.1/[3 ]
      3      1.1.1.9            24 ms   Egress  

    The preceding command output shows that the SR-MPLS BE tunnel has been switched to the TI-LFA FRR backup path, which uses a GRE tunnel interface as the outbound interface.
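In the tracert output, the downstream label 3 at the last transit hop is not an ordinary label: it is the reserved MPLS implicit-null label, which instructs the penultimate hop to pop the label stack rather than swap the label (penultimate hop popping). The reserved label values defined by the MPLS label stack encoding specification can be summarized as follows:

```python
# Reserved MPLS label values (per the MPLS label stack encoding standard).
RESERVED_MPLS_LABELS = {
    0: "IPv4 explicit null",
    1: "router alert",
    2: "IPv6 explicit null",
    3: "implicit null (signals penultimate hop popping)",
}

# Label 3 in the tracert output therefore means the next hop pops the label.
assert "implicit null" in RESERVED_MPLS_LABELS[3]
```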

Configuration Files

  • DeviceA configuration file

    #
    sysname DeviceA
    #
    segment-routing
    #
    ospf 1
     router-id 1.1.1.1
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 16000 16999
     segment-routing mpls over gre
     avoid-microloop frr-protected
     avoid-microloop frr-protected rib-update-delay 5000
     avoid-microloop segment-routing
     avoid-microloop segment-routing rib-update-delay 10000
     frr
      loop-free-alternate
      ti-lfa enable
     area 0.0.0.0
      network 10.2.1.0 0.0.0.255
     area 0.0.0.1
      network 10.1.1.0 0.0.0.255
      network 10.7.1.0 0.0.0.255
    #
    interface Tunnel1
     ip address 10.7.1.1 255.255.255.0
     tunnel-protocol gre
     source 10.2.1.1
     destination 10.4.1.2
     ospf cost 100
    #
    interface LoopBack0
     ip address 2.2.2.9 255.255.255.255
     ospf enable 1 area 0.0.0.1
     ospf prefix-sid index 100
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
     binding tunnel gre
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     ospf cost 10
    #
    return
  • DeviceB configuration file

    #
    sysname DeviceB
    #
    segment-routing
    #
    ospf 1
     router-id 2.2.2.2
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 17000 17999
     segment-routing mpls over gre
     area 0.0.0.0
      network 10.4.1.0 0.0.0.255
     area 0.0.0.1
      network 10.6.1.0 0.0.0.255
      network 10.7.1.0 0.0.0.255
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.4.1.2 255.255.255.0
     ospf cost 10
     binding tunnel gre
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.6.1.1 255.255.255.0
     ospf cost 10
    #
    interface Tunnel1
     ip address 10.7.1.2 255.255.255.0
     tunnel-protocol gre
     source 10.4.1.2
     destination 10.2.1.1
     ospf cost 100
    #
    interface LoopBack0
     ip address 5.5.5.9 255.255.255.255
     ospf enable 1 area 0.0.0.1
     ospf prefix-sid index 200
    #
    return
  • DeviceC configuration file

    #
    sysname DeviceC
    #               
    segment-routing
    #
    ospf 1 router-id 3.3.3.3
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 18000 18999
     area 0.0.0.1
      network 10.6.1.0 0.0.0.255
      network 10.5.1.0 0.0.0.255
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.6.1.2 255.255.255.0
     ospf cost 10
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.5.1.2 255.255.255.0
     ospf cost 10
    #
    interface LoopBack0
     ip address 6.6.6.9 255.255.255.255
     ospf enable 1 area 0.0.0.1
     ospf prefix-sid index 300
    #
    return
  • DeviceD configuration file

    #
    sysname DeviceD
    #
    segment-routing
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     ospf cost 10
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.5.1.1 255.255.255.0
     ospf cost 10
    #
    interface LoopBack0
     ip address 1.1.1.9 255.255.255.255
     ospf enable 1 area 0.0.0.1
     ospf prefix-sid index 400
    #
    ospf 1 router-id 4.4.4.4
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 19000 19999
     area 0.0.0.1
      network 10.1.1.0 0.0.0.255
      network 10.5.1.0 0.0.0.255
    #
    return
  • DeviceE configuration file

    #
    sysname DeviceE
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.3.1.1 255.255.255.0
    #
    ospf 1 router-id 5.5.5.5
     opaque-capability enable
     area 0.0.0.0
      network 10.2.1.0 0.0.0.255
      network 10.3.1.0 0.0.0.255
    #
    return
  • DeviceF configuration file

    #
    sysname DeviceF
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.3.1.2 255.255.255.0
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.4.1.1 255.255.255.0
    #
    ospf 1 router-id 6.6.6.6
     opaque-capability enable
     area 0.0.0.0
      network 10.3.1.0 0.0.0.255
      network 10.4.1.0 0.0.0.255
    #
    return

Example for Configuring IS-IS SR-MPLS BE over GRE

This section provides an example for configuring IS-IS SR-MPLS BE over GRE, which involves enabling IS-IS on each device, specifying network segments in different processes, configuring GRE tunnels, and enabling IS-IS SR-MPLS BE over GRE.

Networking Requirements

On the network shown in Figure 1-2665, enable IS-IS and configure SR-MPLS BE. In addition, set the cost of the GRE tunnel between DeviceA and DeviceB to 100 and that of the other links to 10. The optimal path from DeviceA to DeviceD is DeviceA → DeviceD. Configure TI-LFA FRR on DeviceA to implement local protection. In this way, if the link between DeviceA and DeviceD fails, traffic can be rapidly switched to the backup path DeviceA → DeviceB → DeviceC → DeviceD. The traffic between DeviceA and DeviceB is forwarded over a GRE tunnel, which is bound to the link DeviceA → DeviceE → DeviceF → DeviceB.

Figure 1-2665 Configuring IS-IS SR-MPLS BE over GRE

Interfaces 1 and 2 in this example represent GE1/0/0 and GE2/0/0, respectively.


Configuration Notes

None

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the entire network to implement interworking between devices.
  2. Configure SR and establish an SR-MPLS BE tunnel.
  3. Establish a GRE tunnel between DeviceA and DeviceB, and enable IS-IS process 1 on the tunnel interface (the underlay links between DeviceA and DeviceB run IS-IS process 2).
  4. Set costs for links to ensure that the link DeviceA → DeviceD is preferentially used for traffic forwarding.
  5. Configure TI-LFA FRR.
  6. Configure IS-IS SR-MPLS BE over GRE.

Data Preparation

To complete the configuration, you need the following data:

  • DeviceA: subnets 10.1.1.0/24 and 10.7.1.0/24 for IS-IS process 1 and subnet 10.2.1.0/24 for IS-IS process 2
  • DeviceB: subnets 10.6.1.0/24 and 10.7.1.0/24 for IS-IS process 1 and subnet 10.4.1.0/24 for IS-IS process 2
  • DeviceC: subnets 10.6.1.0/24 and 10.5.1.0/24 for IS-IS process 1
  • DeviceD: subnets 10.1.1.0/24 and 10.5.1.0/24 for IS-IS process 1
  • DeviceE: subnets 10.2.1.0/24 and 10.3.1.0/24 for IS-IS process 2
  • DeviceF: subnets 10.3.1.0/24 and 10.4.1.0/24 for IS-IS process 2

Procedure

  1. Configure IP addresses for interfaces.

    # Configure DeviceA.

    <HUAWEI> system-view
    [~HUAWEI] sysname DeviceA
    [*HUAWEI] commit
    [~DeviceA] interface loopback 0
    [*DeviceA-LoopBack0] ip address 2.2.2.9 32
    [*DeviceA-LoopBack0] quit
    [*DeviceA] interface gigabitethernet1/0/0
    [*DeviceA-GigabitEthernet1/0/0] ip address 10.2.1.1 24
    [*DeviceA-GigabitEthernet1/0/0] quit
    [*DeviceA] commit
    [~DeviceA] interface gigabitethernet2/0/0
    [~DeviceA-GigabitEthernet2/0/0] ip address 10.1.1.2 255.255.255.0
    [*DeviceA-GigabitEthernet2/0/0] quit
    [*DeviceA] commit

    The configurations of other devices are similar to the configuration of DeviceA. For configuration details, see the configuration files.

  2. Configure IS-IS to implement interworking.

    # Configure DeviceA.

    [~DeviceA] isis 1 
    [*DeviceA-isis-1] network-entity 10.0000.0000.0001.00
    [*DeviceA-isis-1] quit
    [*DeviceA] isis 2 
    [*DeviceA-isis-2] network-entity 11.0000.0000.0021.00
    [*DeviceA-isis-2] quit
    [*DeviceA] interface gigabitethernet1/0/0
    [*DeviceA-GigabitEthernet1/0/0] isis enable 2
    [*DeviceA-GigabitEthernet1/0/0] quit
    [*DeviceA] commit
    [~DeviceA] interface gigabitethernet2/0/0
    [~DeviceA-GigabitEthernet2/0/0] isis enable 1
    [*DeviceA-GigabitEthernet2/0/0] quit
    [*DeviceA] commit

    The configurations of other devices are similar to the configuration of DeviceA. For configuration details, see the configuration files.
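The network-entity values configured above follow the standard NET layout: an area address, a 6-byte system ID, and a one-byte selector (NSEL) that must be 00. The following illustrative parser (function name assumed) handles the dotted format used in this example:

```python
def parse_net(net: str):
    """Split a dotted IS-IS NET into (area address, system ID); NSEL must be 00."""
    parts = net.split(".")
    nsel = parts[-1]
    system_id = ".".join(parts[-4:-1])  # 6-byte system ID (three 16-bit groups)
    area = ".".join(parts[:-4])
    assert nsel == "00", "NET selector byte must be 00"
    return area, system_id

# The two NETs configured on DeviceA: area 10 for process 1, area 11 for process 2.
assert parse_net("10.0000.0000.0001.00") == ("10", "0000.0000.0001")
assert parse_net("11.0000.0000.0021.00") == ("11", "0000.0000.0021")
```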

  3. Configure SR and establish an SR-MPLS BE tunnel.

    # Configure DeviceA.

    [~DeviceA] segment-routing 
    [*DeviceA-segment-routing] quit
    [*DeviceA] isis 1
    [*DeviceA-isis-1] cost-style wide
    [*DeviceA-isis-1] segment-routing mpls
    [*DeviceA-isis-1] segment-routing global-block 16000 16999 
    [*DeviceA-isis-1] quit 
    [*DeviceA] interface LoopBack0
    [*DeviceA-LoopBack0] isis enable 1
    [*DeviceA-LoopBack0] isis prefix-sid index 100
    [*DeviceA-LoopBack0] quit
    [*DeviceA] commit

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    The configurations of DeviceB, DeviceC, and DeviceD are similar to the configuration of DeviceA. For configuration details, see the configuration files.

    # After completing the configuration, run the display segment-routing prefix mpls forwarding command on each device. The command output shows that SR-MPLS BE LSPs have been established. DeviceA is used as an example.

    [~DeviceA] display segment-routing prefix mpls forwarding
                     Segment Routing Prefix MPLS Forwarding Information
                 --------------------------------------------------------------
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit
     
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State          
    ------------------------------------------------------------------------------------------------------------
    2.2.2.9/32        16100      NULL       Loop0             127.0.0.1        E     ---       1500    Active          
    5.5.5.9/32        16200      19200      GE2/0/0          10.1.1.1         I&T   ---       1500    Active          
    6.6.6.9/32        16300      19300      GE2/0/0          10.1.1.1         I&T   ---       1500    Active          
    1.1.1.9/32        16400      3          GE2/0/0          10.1.1.1         I&T   ---       1500    Active          

  4. Establish a GRE tunnel.

    # Configure DeviceA.

    [~DeviceA] interface Tunnel1
    [*DeviceA-Tunnel1] ip address 10.7.1.1 24
    [*DeviceA-Tunnel1] tunnel-protocol gre
    [*DeviceA-Tunnel1] source 10.2.1.1
    [*DeviceA-Tunnel1] destination 10.4.1.2
    [*DeviceA-Tunnel1] quit
    [*DeviceA] interface gigabitethernet1/0/0
    [*DeviceA-GigabitEthernet1/0/0] binding tunnel gre
    [*DeviceA-GigabitEthernet1/0/0] quit
    [*DeviceA] commit

    # Configure DeviceB.

    [~DeviceB] interface Tunnel1
    [*DeviceB-Tunnel1] ip address 10.7.1.2 24
    [*DeviceB-Tunnel1] tunnel-protocol gre
    [*DeviceB-Tunnel1] source 10.4.1.2
    [*DeviceB-Tunnel1] destination 10.2.1.1
    [*DeviceB-Tunnel1] quit
    [*DeviceB] interface gigabitethernet1/0/0
    [*DeviceB-GigabitEthernet1/0/0] binding tunnel gre
    [*DeviceB-GigabitEthernet1/0/0] quit
    [*DeviceB] commit

  5. Set costs for links to ensure that the link DeviceA → DeviceD is preferentially used for traffic forwarding.

    # Configure DeviceA.

    [~DeviceA] interface Tunnel1
    [~DeviceA-Tunnel1] isis cost 100
    [*DeviceA-Tunnel1] quit
    [*DeviceA] commit

    The configuration of DeviceB is similar to that of DeviceA. For configuration details, see the configuration file.

  6. Configure TI-LFA FRR.

    # Configure DeviceA.

    [~DeviceA] isis 1
    [*DeviceA-isis-1] frr
    [*DeviceA-isis-1-frr] loop-free-alternate
    [*DeviceA-isis-1-frr] ti-lfa
    [*DeviceA-isis-1-frr] quit
    [*DeviceA-isis-1] quit
    [*DeviceA] commit

  7. Configure SR-MPLS BE over GRE.

    # Configure DeviceA.

    [~DeviceA] isis 1
    [*DeviceA-isis-1] segment-routing mpls over gre
    [*DeviceA-isis-1] quit
    [*DeviceA] commit

    The configuration of DeviceB is similar to that of DeviceA. For configuration details, see the configuration file.

    After completing the configuration, run the display isis route command on DeviceA to check route information.

    [~DeviceA] display isis 1 route 1.1.1.9 verbose
    
                            Route information for ISIS(1)
                             -----------------------------
    
                            ISIS(1) Level-1 Forwarding Table
                            --------------------------------
    
    
     IPV4 Dest  : 1.1.1.9/32         Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : A/-/L/-
     Priority   : Medium             Age       : 00:25:36
     NextHop    :                    Interface :               ExitIndex :
        10.1.1.1                           GE2/0/0                   0x00000009
     TI-LFA:
     Interface  : Tun1
     NextHop    : 10.7.1.2            LsIndex    : --          ProtectType: L
     Backup Label Stack (Top -> Bottom): {}
    
    
     Prefix-sid : 16400              Weight    : 0             Flags     : -/N/-/-/-/-/A/L
     SR NextHop :                    Interface :               OutLabel  :
        10.1.1.1                           GE2/0/0                   3
     TI-LFA:
     Interface  : Tun1
     NextHop    : 10.7.1.2            LsIndex    : --          ProtectType: L
     Backup Label Stack (Top -> Bottom): {}
    
         Flags: D-Direct, A-Added to URT, L-Advertised in LSPs, S-IGP Shortcut,
                U-Up/Down Bit Set, LP-Local Prefix-Sid
         Protect Type: L-Link Protect, N-Node Protect
    
    
                            ISIS(1) Level-2 Forwarding Table
                            --------------------------------
    
    
     IPV4 Dest  : 1.1.1.9/32         Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 3             Flags     : -/-/-/-
     Priority   : Medium             Age       : 00:00:00
     NextHop    :                    Interface :               ExitIndex :
                                           -
    
    
         Flags: D-Direct, A-Added to URT, L-Advertised in LSPs, S-IGP Shortcut,
                U-Up/Down Bit Set, LP-Local Prefix-Sid
         Protect Type: L-Link Protect, N-Node Protect

    The command output shows that, through FRR computation, DeviceA has generated a backup link whose outbound interface is a GRE tunnel interface.

  8. Verify the configuration.

    # Run the tracert command on DeviceA to check the connectivity of the SR-MPLS BE tunnel to DeviceD. For example:

    [~DeviceA] tracert lsp segment-routing ip 1.1.1.9 32 version draft2
     LSP Trace Route FEC: SEGMENT ROUTING IPV4 PREFIX 1.1.1.9/32 , press CTRL_C to break.
      TTL    Replier            Time    Type      Downstream
      0                                 Ingress   10.1.1.1/[3 ]
      1      1.1.1.9           9 ms    Egress  

    # Run the shutdown command on GE2/0/0 of DeviceA to simulate a link failure between DeviceA and DeviceD.

    [~DeviceA] interface gigabitethernet2/0/0
    [~DeviceA-GigabitEthernet2/0/0] shutdown
    [*DeviceA-GigabitEthernet2/0/0] quit
    [*DeviceA] commit

    # Run the tracert command on DeviceA again to check the connectivity of the SR-MPLS BE tunnel. For example:

    [~DeviceA] tracert lsp segment-routing ip 1.1.1.9 32 version draft2
     LSP Trace Route FEC: SEGMENT ROUTING IPV4 PREFIX 1.1.1.9/32 , press CTRL_C to break.
      TTL    Replier            Time    Type      Downstream
      0                                 Ingress   10.7.1.2/[17400 ]
      1      10.7.1.2           125 ms  Transit   10.6.1.2/[18400 ]
      2      10.6.1.2           130 ms  Transit   10.5.1.1/[3 ]
      3      1.1.1.9            24 ms   Egress  

    The preceding command output shows that the SR-MPLS BE tunnel has been switched to the TI-LFA FRR backup path, which uses a GRE tunnel interface as the outbound interface.

Configuration Files

  • DeviceA configuration file

    #
    sysname DeviceA
    # 
    segment-routing 
    # 
    isis 1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     segment-routing mpls
     segment-routing global-block 16000 16999
     segment-routing mpls over gre
     frr
      loop-free-alternate level-1
      loop-free-alternate level-2
      ti-lfa level-1
      ti-lfa level-2
    # 
    isis 2
     network-entity 11.0000.0000.0021.00
    #
    interface Tunnel1 
     ip address 10.7.1.1 255.255.255.0 
     tunnel-protocol gre 
     source 10.2.1.1 
     destination 10.4.1.2 
     isis enable 1
     isis cost 100 
    # 
    interface LoopBack0 
     ip address 2.2.2.9 255.255.255.255 
     isis enable 1
     isis prefix-sid index 100 
    # 
    interface GigabitEthernet1/0/0
     undo shutdown 
     ip address 10.2.1.1 255.255.255.0 
     isis enable 2 
     binding tunnel gre 
    #
    interface GigabitEthernet2/0/0
     undo shutdown 
     ip address 10.1.1.2 255.255.255.0 
     isis enable 1
    #
    return
  • DeviceB configuration file

    #
    sysname DeviceB
    # 
    segment-routing 
    # 
    isis 1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     segment-routing mpls
     segment-routing global-block 17000 17999
     segment-routing mpls over gre
    # 
    isis 2
     network-entity 11.0000.0000.0022.00
    #
    interface GigabitEthernet1/0/0 
     undo shutdown 
     ip address 10.4.1.2 255.255.255.0 
     isis enable 2
     binding tunnel gre 
    # 
    interface GigabitEthernet2/0/0 
     undo shutdown 
     ip address 10.6.1.1 255.255.255.0 
     isis enable 1
    # 
    interface Tunnel1 
     ip address 10.7.1.2 255.255.255.0 
     tunnel-protocol gre 
     source 10.4.1.2 
     destination 10.2.1.1 
     isis enable 1
     isis cost 100 
    # 
    interface LoopBack0 
     ip address 5.5.5.9 255.255.255.255 
     isis enable 1 
     isis prefix-sid index 200 
    # 
    return
  • DeviceC configuration file

    #
    sysname DeviceC
    #                
    segment-routing 
    # 
    isis 1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     segment-routing mpls
     segment-routing global-block 18000 18999 
    # 
    interface GigabitEthernet1/0/0 
     undo shutdown 
     ip address 10.6.1.2 255.255.255.0 
     isis enable 1
    # 
    interface GigabitEthernet2/0/0 
     undo shutdown 
     ip address 10.5.1.2 255.255.255.0 
     isis enable 1
    # 
    interface LoopBack0 
     ip address 6.6.6.9 255.255.255.255 
     isis enable 1
     isis prefix-sid index 300 
    # 
    return
  • DeviceD configuration file

    #
    sysname DeviceD
    # 
    segment-routing 
    # 
    isis 1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     segment-routing mpls
     segment-routing global-block 19000 19999 
    #
    interface GigabitEthernet1/0/0 
     undo shutdown 
     ip address 10.1.1.1 255.255.255.0 
     isis enable 1 
    # 
    interface GigabitEthernet2/0/0 
     undo shutdown 
     ip address 10.5.1.1 255.255.255.0 
     isis enable 1 
    # 
    interface LoopBack0 
     ip address 1.1.1.9 255.255.255.255 
     isis enable 1 
     isis prefix-sid index 400 
    # 
    return
  • DeviceE configuration file

    #
    sysname DeviceE
    #
    isis 2
     network-entity 11.0000.0000.0005.00
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
     isis enable 2
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.3.1.1 255.255.255.0
     isis enable 2
    #
    return
  • DeviceF configuration file

    #
    sysname DeviceF
    #
    isis 2
     network-entity 11.0000.0000.0006.00
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.3.1.2 255.255.255.0
     isis enable 2
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.4.1.1 255.255.255.0
     isis enable 2
    #
    return

Configuration Examples for SR-MPLS TE

This section provides configuration examples for SR-MPLS TE.

Example for Configuring L3VPN over SR-MPLS TE

This section provides an example for configuring L3VPN over an SR-MPLS TE tunnel to ensure secure communication between users of the same VPN.

Networking Requirements

On the network shown in Figure 1-2666:
  • CE1 and CE2 belong to vpna.

  • The VPN target used by vpna is 111:1.

To ensure secure communication between VPN users, configure L3VPN over an SR-MPLS TE tunnel.

Figure 1-2666 Configuring L3VPN over an SR-MPLS TE tunnel

Interfaces 1 and 2 in this example represent GE1/0/0 and GE2/0/0, respectively.


Precautions

When you configure L3VPN over an SR-MPLS TE tunnel, note the following:

After a PE interface connected to a CE is bound to a VPN instance, Layer 3 features, such as the IP address and routing protocol, on this interface are automatically deleted. These features can be reconfigured if required.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the backbone network to ensure PE communication.

  2. On the backbone network, enable MPLS, configure segment routing (SR), establish an SR-MPLS TE tunnel, specify the tunnel IP address, tunnel protocol, and destination IP address, and use explicit paths for path computation.

  3. On each PE, configure a VPN instance, enable the IPv4 address family, and bind each PE interface that connects to a CE to the corresponding VPN instance.

  4. Configure MP-IBGP between PEs to exchange VPN routing information.

  5. Configure EBGP between CEs and PEs to exchange VPN routing information.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of the PEs and P

  • VPN target and RD of vpna

  • SRGB range on the PEs and P

Procedure

  1. Configure interface IP addresses.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip address 172.16.1.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 172.16.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 172.17.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 172.17.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  2. Configure an IGP on the MPLS backbone network to allow the PEs and P to communicate. IS-IS is used in this example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] isis enable 1
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-2
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] isis enable 1
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  3. Configure basic MPLS functions and enable MPLS TE on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] mpls te
    [*PE1-mpls] quit
    [*PE1] commit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] mpls te
    [*P1-mpls] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] mpls te
    [*PE2-mpls] quit
    [*PE2] commit

  4. On the backbone network, configure SR, establish an SR-MPLS TE tunnel, specify the tunnel IP address, tunnel protocol, and destination IP address, and use explicit paths for path computation.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-2
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 16000 20000
    [*PE1-isis-1] quit

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid absolute 16100
    [*PE1-LoopBack1] quit
    [*PE1] commit
    [~PE1] explicit-path pe2
    [*PE1-explicit-path-pe2] next sid label 16200 type prefix
    [*PE1-explicit-path-pe2] next sid label 16300 type prefix
    [*PE1-explicit-path-pe2] quit
    [*PE1] interface tunnel1
    [*PE1-Tunnel1] ip address unnumbered interface LoopBack1
    [*PE1-Tunnel1] tunnel-protocol mpls te
    [*PE1-Tunnel1] destination 3.3.3.9
    [*PE1-Tunnel1] mpls te tunnel-id 1
    [*PE1-Tunnel1] mpls te signal-protocol segment-routing
    [*PE1-Tunnel1] mpls te path explicit-path pe2
    [*PE1-Tunnel1] commit
    [~PE1-Tunnel1] quit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] traffic-eng level-2
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] segment-routing global-block 16000 20000
    [*P1-isis-1] quit

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P1] interface loopback 1
    [*P1-LoopBack1] isis prefix-sid absolute 16200
    [*P1-LoopBack1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-2
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 16000 20000
    [*PE2-isis-1] quit

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid absolute 16300
    [*PE2-LoopBack1] quit
    [*PE2] commit
    [~PE2] explicit-path pe1
    [*PE2-explicit-path-pe1] next sid label 16200 type prefix
    [*PE2-explicit-path-pe1] next sid label 16100 type prefix
    [*PE2-explicit-path-pe1] quit
    [*PE2] interface tunnel1
    [*PE2-Tunnel1] ip address unnumbered interface LoopBack1
    [*PE2-Tunnel1] tunnel-protocol mpls te
    [*PE2-Tunnel1] destination 1.1.1.9
    [*PE2-Tunnel1] mpls te tunnel-id 1
    [*PE2-Tunnel1] mpls te signal-protocol segment-routing
    [*PE2-Tunnel1] mpls te path explicit-path pe1
    [*PE2-Tunnel1] commit
    [~PE2-Tunnel1] quit

    # After completing the configurations, run the display tunnel-info all command on each PE. The command output shows that the SR-MPLS TE tunnel has been established. The command output on PE1 is used as an example.

    [~PE1] display tunnel-info all
    Tunnel ID            Type                Destination                             Status
    ----------------------------------------------------------------------------------------
    0x000000000300004001 sr-te               3.3.3.9                                 UP  

    # Run the ping command on PE1 to check the SR LSP connectivity. For example:

    [~PE1] ping lsp segment-routing te Tunnel 1
      LSP PING FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 : 100  data bytes, press CTRL_C to break
        Reply from 3.3.3.9: bytes=100 Sequence=1 time=7 ms
        Reply from 3.3.3.9: bytes=100 Sequence=2 time=11 ms
        Reply from 3.3.3.9: bytes=100 Sequence=3 time=11 ms
        Reply from 3.3.3.9: bytes=100 Sequence=4 time=9 ms
        Reply from 3.3.3.9: bytes=100 Sequence=5 time=10 ms
    
      --- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 5/8/11 ms
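    The prefix SIDs in this step must fall within the configured SRGB: an absolute SID such as 16100 is a label inside the range 16000–20000, and an index-based SID maps to the label SRGB base + index. The following Python sketch (an illustration only, not part of the device configuration) models this relationship using the values from this example:

```python
# SRGB range from "segment-routing global-block 16000 20000" in this example.
SRGB_BASE, SRGB_END = 16000, 20000

def check_absolute(label: int) -> int:
    """An absolute prefix SID (e.g. 16100) must fall inside the SRGB."""
    if not SRGB_BASE <= label <= SRGB_END:
        raise ValueError(f"label {label} is outside SRGB [{SRGB_BASE}, {SRGB_END}]")
    return label

def index_to_label(index: int) -> int:
    """A prefix SID configured by index maps to the label SRGB base + index."""
    return check_absolute(SRGB_BASE + index)

# Absolute SIDs used in this example: PE1 = 16100, P1 = 16200, PE2 = 16300.
print([check_absolute(label) for label in (16100, 16200, 16300)])
print(index_to_label(100))  # index 100 yields label 16100
```

    Because all nodes in this example use the same SRGB, a given prefix SID resolves to the same label on every node; with different SRGBs per node, each node would compute its own label from its local base.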

  5. Establish an MP-IBGP peer relationship between PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] ipv4-family vpnv4
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable
    [*PE1-bgp-af-vpnv4] commit
    [~PE1-bgp-af-vpnv4] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [~PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv4-family vpnv4
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
    [*PE2-bgp-af-vpnv4] commit
    [~PE2-bgp-af-vpnv4] quit
    [~PE2-bgp] quit

    After completing the configurations, run the display bgp peer or display bgp vpnv4 all peer command on each PE. The command output shows that the MP-IBGP peer relationship has been set up and is in the Established state. The command output on PE1 is used as an example.

    [~PE1] display bgp peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        2        6     0     00:00:12   Established   0
    [~PE1] display bgp vpnv4 all peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100   12      18         0     00:09:38   Established   0

  6. On each PE, create a VPN instance, enable the IPv4 address family in the VPN instance, and bind the PE interface connected to a CE to the VPN instance.

    # Configure PE1.

    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip binding vpn-instance vpna
    [*PE1-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip binding vpn-instance vpna
    [*PE2-GigabitEthernet1/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Assign IP addresses to interfaces on the CEs, as shown in Figure 1-2666. For configuration details, see the configuration files.

    After completing the configurations, run the display ip vpn-instance verbose command on each PE. The command output shows the configurations of VPN instances. Each PE can successfully ping its connected CE.

    If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-name -a source-ip-address dest-ip-address command when pinging the CE connected to the remote PE. If no source IP address is specified, the ping fails.

  7. Configure a tunnel policy on each PE, and specify SR-MPLS TE as the preferred tunnel.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-te load-balance-number 1
    [*PE1-tunnel-policy-p1] quit
    [*PE1] commit
    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq sr-te load-balance-number 1
    [*PE2-tunnel-policy-p1] quit
    [*PE2] commit
    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] commit
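    The tunnel select-seq sr-te load-balance-number 1 policy above tells each PE to prefer SR-MPLS TE tunnels and to use at most one such tunnel for load balancing. A rough Python model of this selection logic (an illustrative sketch only, not the device implementation):

```python
def select_tunnels(available, seq=("sr-te",), load_balance=1):
    """Return up to `load_balance` tunnels of the first type in `seq`
    for which at least one tunnel exists."""
    for tunnel_type in seq:
        matches = [t for t in available if t["type"] == tunnel_type]
        if matches:
            return matches[:load_balance]
    return []  # no tunnel of a preferred type: recursion fails

tunnels = [{"name": "Tunnel1", "type": "sr-te"},
           {"name": "ldp-lsp", "type": "ldp"}]
print(select_tunnels(tunnels))  # picks the SR-TE tunnel
```

    If the select-seq listed several types (for example sr-te followed by ldp), the first type with an available tunnel would win, which is why listing only sr-te pins the VPN traffic to the SR-MPLS TE tunnel.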

  8. Set up EBGP peer relationships between the PEs and CEs.

    # Configure CE1.

    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ip address 10.11.1.1 32
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.1.1.2 as-number 100
    [*CE1-bgp] network 10.11.1.1 32
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see the configuration file.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] ipv4-family vpn-instance vpna
    [*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
    [*PE1-bgp-vpna] commit
    [~PE1-bgp-vpna] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see the configuration file.

    After completing the configurations, run the display bgp vpnv4 vpn-instance peer command on each PE. The command output shows that the BGP peer relationships have been established between the PEs and CEs and are in the Established state.

    The BGP peer relationship between PE1 and CE1 is used as an example.

    [~PE1] display bgp vpnv4 vpn-instance vpna peer
     
     BGP local router ID : 1.1.1.9
     Local AS number : 100
    
     VPN-Instance vpna, Router ID 1.1.1.9:
     Total number of peers : 1            Peers in established state : 1
    
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      10.1.1.1        4   65410  11     9          0     00:06:37   Established  1

  9. Verify the configuration.

    Run the display ip routing-table vpn-instance command on each PE. The command output shows the routes to CE loopback interfaces.

    The command output on PE1 is used as an example.

    [~PE1] display ip routing-table vpn-instance vpna
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table: vpna
             Destinations : 7        Routes : 7
    Destination/Mask    Proto  Pre  Cost     Flags NextHop         Interface
         10.1.1.0/24    Direct 0    0        D     10.1.1.2        GigabitEthernet1/0/0
         10.1.1.2/32    Direct 0    0        D     127.0.0.1       GigabitEthernet1/0/0
       10.1.1.255/32    Direct 0    0        D     127.0.0.1       GigabitEthernet1/0/0
       10.11.1.1/32    EBGP   255  0        RD    10.1.1.1        GigabitEthernet1/0/0
       10.22.2.2/32    IBGP   255  0        RD    3.3.3.9         Tunnel1
        127.0.0.0/8     Direct 0    0        D     127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct 0    0        D     127.0.0.1       InLoopBack0

    The CEs can ping each other. For example, CE1 can ping CE2 (10.22.2.2).

    [~CE1] ping -a 10.11.1.1 10.22.2.2
      PING 10.22.2.2: 56  data bytes, press CTRL_C to break
        Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
        Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=34 ms
        Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=34 ms
      --- 10.22.2.2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 34/48/72 ms  

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 100:1
      tnl-policy p1
      apply-label per-instance
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
     mpls te        
    #               
    explicit-path pe2
     next sid label 16200 type prefix index 1
     next sid label 16300 type prefix index 2
    #               
    segment-routing 
    #               
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0001.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 16000 20000
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.1.1.2 255.255.255.0
    #               
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 172.16.1.1 255.255.255.0
     isis enable 1
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 16100 
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 3.3.3.9
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
     mpls te path explicit-path pe2
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 3.3.3.9 enable
     #              
     ipv4-family vpn-instance vpna
      peer 10.1.1.1 as-number 65410
    #
    tunnel-policy p1
     tunnel select-seq sr-te load-balance-number 1
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
     mpls te        
    #               
    segment-routing 
    #               
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 16000 20000
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 172.16.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.17.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 16200 
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 200:1
      tnl-policy p1
      apply-label per-instance
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
     mpls te        
    #
    explicit-path pe1
     next sid label 16200 type prefix index 1
     next sid label 16100 type prefix index 2
    #               
    segment-routing 
    #               
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0003.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 16000 20000
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.2.1.2 255.255.255.0
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 172.17.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 16300
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 1.1.1.9
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
     mpls te path explicit-path pe1 
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 1.1.1.9 enable
     #              
     ipv4-family vpn-instance vpna
      peer 10.2.1.1 as-number 65420
    #
    tunnel-policy p1
     tunnel select-seq sr-te load-balance-number 1
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.11.1.1 255.255.255.255
    #
    bgp 65410
     peer 10.1.1.2 as-number 100
     #
     ipv4-family unicast
      undo synchronization
      network 10.11.1.1 255.255.255.255
      peer 10.1.1.2 enable
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.22.2.2 255.255.255.255
    #
    bgp 65420
     peer 10.2.1.2 as-number 100
     #
     ipv4-family unicast
      undo synchronization
      network 10.22.2.2 255.255.255.255
      peer 10.2.1.2 enable
    #
    return

Example for Configuring VPLS over SR-MPLS TE

The public network tunnel to which LDP VPLS recurses can be an SR-MPLS TE tunnel.

Networking Requirements

As shown in Figure 1-2667, CE1 and CE2 belong to the same VPLS network and access the MPLS backbone network through PE1 and PE2, respectively. OSPF is used as IGP on the MPLS backbone network.

LDP VPLS needs to be configured, and an SR-MPLS TE tunnel needs to be established between PE1 and PE2 to carry the VPLS service.

Figure 1-2667 Configuring LDP VPLS to recurse to an SR-MPLS TE tunnel

In this example, interface1, interface2, subinterface1.1, and subinterface2.1 represent GE1/0/0, GE2/0/0, GE1/0/0.1, and GE2/0/0.1, respectively.



Precautions

During the configuration, note the following:

  • PEs belonging to the same VPLS network must have the same VSI ID.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure a routing protocol on the PEs and P of the backbone network for connectivity between these devices and enable MPLS on them.

  2. Set up an SR-MPLS TE tunnel and configure the related tunnel policy. For details about how to establish an SR-MPLS TE tunnel, see the NetEngine 8100 X, NetEngine 8000 X and NetEngine 8000E X Configuration Guide - Segment Routing.

  3. Enable MPLS L2VPN on PEs.

  4. Create a VSI on each PE, specify LDP as the signaling protocol, and bind an AC interface to the VSI.

  5. Configure the VSI to use the SR-MPLS TE tunnel.

Data Preparation

To complete the configuration, you need the following data:

  • OSPF area where SR-MPLS TE is enabled

  • VSI name and VSI ID

  • IP addresses of peers and tunnel policies

  • Name of each interface bound to the VSI

Procedure

  1. Configure IP addresses for interfaces on the backbone network.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.10.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure the P.

    <HUAWEI> system-view
    [~HUAWEI] sysname P
    [*HUAWEI] commit
    [~P] interface loopback 1
    [*P-LoopBack1] ip address 2.2.2.9 32
    [*P-LoopBack1] quit
    [*P] interface gigabitethernet1/0/0
    [*P-GigabitEthernet1/0/0] ip address 10.10.1.2 24
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] ip address 10.20.1.1 24
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.20.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

  2. Enable MPLS and MPLS TE.

    Enable MPLS and MPLS TE in the system view of each node along the tunnel.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] mpls te
    [*PE1-mpls] quit
    [*PE1] commit

    # Configure the P.

    [~P] mpls lsr-id 2.2.2.9
    [*P] mpls
    [*P-mpls] mpls te
    [*P-mpls] quit
    [*P] commit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] mpls te
    [*PE2-mpls] quit
    [*PE2] commit

  3. Configure OSPF TE on the backbone network.

    # Configure PE1.

    [~PE1] ospf 1
    [*PE1-ospf-1] opaque-capability enable
    [*PE1-ospf-1] area 0.0.0.0
    [*PE1-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
    [*PE1-ospf-1-area-0.0.0.0] network 10.10.1.0 0.0.0.255
    [*PE1-ospf-1-area-0.0.0.0] mpls-te enable
    [*PE1-ospf-1-area-0.0.0.0] quit
    [*PE1-ospf-1] quit
    [*PE1] commit

    # Configure the P.

    [~P] ospf 1
    [*P-ospf-1] opaque-capability enable
    [*P-ospf-1] area 0.0.0.0
    [*P-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
    [*P-ospf-1-area-0.0.0.0] network 10.10.1.0 0.0.0.255
    [*P-ospf-1-area-0.0.0.0] network 10.20.1.0 0.0.0.255
    [*P-ospf-1-area-0.0.0.0] mpls-te enable
    [*P-ospf-1-area-0.0.0.0] quit
    [*P-ospf-1] quit
    [*P] commit

    # Configure PE2.

    [~PE2] ospf 1
    [*PE2-ospf-1] opaque-capability enable
    [*PE2-ospf-1] area 0.0.0.0
    [*PE2-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
    [*PE2-ospf-1-area-0.0.0.0] network 10.20.1.0 0.0.0.255
    [*PE2-ospf-1-area-0.0.0.0] mpls-te enable
    [*PE2-ospf-1-area-0.0.0.0] quit
    [*PE2-ospf-1] quit
    [*PE2] commit

  4. Configure SR on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] ospf 1
    [*PE1-ospf-1] segment-routing mpls
    [*PE1-ospf-1] segment-routing global-block 16000 47999

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-ospf-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] ospf prefix-sid absolute 16100
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure the P.

    [~P] segment-routing
    [*P-segment-routing] quit
    [*P] ospf 1
    [*P-ospf-1] segment-routing mpls
    [*P-ospf-1] segment-routing global-block 16000 47999

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P-ospf-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] ospf prefix-sid absolute 16300
    [*P-LoopBack1] quit
    [*P] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] ospf 1
    [*PE2-ospf-1] segment-routing mpls
    [*PE2-ospf-1] segment-routing global-block 16000 47999

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE2-ospf-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] ospf prefix-sid absolute 16200
    [*PE2-LoopBack1] quit
    [*PE2] commit

  5. Configure an explicit path.

    # Configure PE1.

    [~PE1] explicit-path path2pe2
    [*PE1-explicit-path-path2pe2] next sid label 16300 type prefix
    [*PE1-explicit-path-path2pe2] next sid label 16200 type prefix
    [*PE1-explicit-path-path2pe2] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] explicit-path path2pe1
    [*PE2-explicit-path-path2pe1] next sid label 16300 type prefix
    [*PE2-explicit-path-path2pe1] next sid label 16100 type prefix
    [*PE2-explicit-path-path2pe1] quit
    [*PE2] commit
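    The explicit path lists prefix SIDs in forwarding order: for path2pe2, the ingress PE1 pushes the segment list as an MPLS label stack (16300 for the P at the top, then 16200 for PE2), and each node that owns the top SID pops it and forwards the packet toward the next segment. The following Python sketch (illustration only) simulates this hop-by-hop processing with the SIDs from this example:

```python
# Prefix SIDs from this example (node -> SID owned by that node).
sid_of = {"P": 16300, "PE2": 16200}

def forward(stack, path):
    """Simulate SR-MPLS forwarding: each node pops the top label
    if that label is its own prefix SID, then forwards the rest."""
    hops = []
    for node in path:
        if stack and stack[0] == sid_of.get(node):
            stack = stack[1:]  # this node owns the top SID: pop it
        hops.append((node, list(stack)))
    return hops

# PE1 pushes the segment list [16300, 16200] and sends toward the P.
print(forward([16300, 16200], ["P", "PE2"]))
```

    The stack is empty when the packet reaches PE2, the tunnel egress, which is why the last SID in the explicit path must identify the tunnel destination.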

  6. Configure a tunnel interface on each device.

    # Create tunnel interfaces on PEs and specify MPLS TE as the tunnel protocol and SR as the signaling protocol.

    # Configure PE1.

    [~PE1] interface Tunnel 10
    [*PE1-Tunnel10] ip address unnumbered interface loopback1
    [*PE1-Tunnel10] tunnel-protocol mpls te
    [*PE1-Tunnel10] destination 3.3.3.9
    [*PE1-Tunnel10] mpls te signal-protocol segment-routing
    [*PE1-Tunnel10] mpls te tunnel-id 100
    [*PE1-Tunnel10] mpls te path explicit-path path2pe2
    [*PE1-Tunnel10] mpls te reserved-for-binding
    [*PE1-Tunnel10] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] interface Tunnel 10
    [*PE2-Tunnel10] ip address unnumbered interface loopback1
    [*PE2-Tunnel10] tunnel-protocol mpls te
    [*PE2-Tunnel10] destination 1.1.1.9
    [*PE2-Tunnel10] mpls te signal-protocol segment-routing
    [*PE2-Tunnel10] mpls te tunnel-id 100
    [*PE2-Tunnel10] mpls te path explicit-path path2pe1
    [*PE2-Tunnel10] mpls te reserved-for-binding
    [*PE2-Tunnel10] quit
    [*PE2] commit

    Run the display tunnel-info all command in the system view on each PE. The command output shows an SR-MPLS TE tunnel whose destination address is the MPLS LSR ID of the peer PE. The command output on PE1 is used as an example.

    [~PE1] display tunnel-info all
    Tunnel ID                     Type                Destination         Status
    -----------------------------------------------------------------------------
    0x000000000300000001          sr-te               3.3.3.9             UP

  7. Configure remote LDP sessions.

    Set up a remote peer session between PE1 and PE2.

    # Configure PE1.

    [~PE1] mpls ldp
    [*PE1-mpls-ldp] quit
    [*PE1] mpls ldp remote-peer 3.3.3.9
    [*PE1-mpls-ldp-remote-3.3.3.9] remote-ip 3.3.3.9
    [*PE1-mpls-ldp-remote-3.3.3.9] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] mpls ldp
    [*PE2-mpls-ldp] quit
    [*PE2] mpls ldp remote-peer 1.1.1.9
    [*PE2-mpls-ldp-remote-1.1.1.9] remote-ip 1.1.1.9
    [*PE2-mpls-ldp-remote-1.1.1.9] quit
    [*PE2] commit

  8. Configure a tunnel policy.

    # Configure PE1.

    [~PE1] tunnel-policy policy1
    [*PE1-tunnel-policy-policy1] tunnel binding destination 3.3.3.9 te Tunnel10
    [*PE1-tunnel-policy-policy1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy policy1
    [*PE2-tunnel-policy-policy1] tunnel binding destination 1.1.1.9 te Tunnel10
    [*PE2-tunnel-policy-policy1] quit
    [*PE2] commit
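    Unlike the select-seq policy used in the L3VPN example, a binding policy pins all traffic toward a specific destination (the peer's LSR ID) to a named TE tunnel; destinations without a binding fall back to default tunnel selection. Conceptually (an illustrative sketch only, with a hypothetical default label):

```python
# Bindings from policy1 on PE1: peer LSR ID -> TE tunnel name.
bindings = {"3.3.3.9": "Tunnel10"}

def select_tunnel(destination: str, default: str = "default-selection") -> str:
    """Return the tunnel carrying traffic toward the given destination."""
    return bindings.get(destination, default)

print(select_tunnel("3.3.3.9"))   # bound to Tunnel10
print(select_tunnel("10.9.9.9"))  # no binding: default selection applies
```

    Binding requires the mpls te reserved-for-binding command configured on the tunnel interface in the previous step, which reserves the tunnel exclusively for bound services.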

  9. Enable MPLS L2VPN on each PE.

    # Configure PE1.

    [~PE1] mpls l2vpn
    [*PE1-l2vpn] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] mpls l2vpn
    [*PE2-l2vpn] quit
    [*PE2] commit

  10. Create a VSI on each PE and configure a tunnel policy for the VSI.

    # Configure PE1.

    [~PE1] vsi a2
    [*PE1-vsi-a2] pwsignal ldp
    [*PE1-vsi-a2-ldp] vsi-id 2
    [*PE1-vsi-a2-ldp] peer 3.3.3.9 tnl-policy policy1
    [*PE1-vsi-a2-ldp] quit
    [*PE1-vsi-a2] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] vsi a2
    [*PE2-vsi-a2] pwsignal ldp
    [*PE2-vsi-a2-ldp] vsi-id 2
    [*PE2-vsi-a2-ldp] peer 1.1.1.9 tnl-policy policy1
    [*PE2-vsi-a2-ldp] quit
    [*PE2-vsi-a2] quit
    [*PE2] commit

  11. Bind an interface to the VSI on each PE.

    # Configure PE1.

    [~PE1] interface gigabitethernet2/0/0.1
    [*PE1-GigabitEthernet2/0/0.1] vlan-type dot1q 10
    [*PE1-GigabitEthernet2/0/0.1] l2 binding vsi a2
    [*PE1-GigabitEthernet2/0/0.1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] interface gigabitethernet2/0/0.1
    [*PE2-GigabitEthernet2/0/0.1] vlan-type dot1q 10
    [*PE2-GigabitEthernet2/0/0.1] l2 binding vsi a2
    [*PE2-GigabitEthernet2/0/0.1] quit
    [*PE2] commit

    # Configure CE1.

    [~CE1] interface gigabitethernet1/0/0.1
    [*CE1-GigabitEthernet1/0/0.1] vlan-type dot1q 10
    [*CE1-GigabitEthernet1/0/0.1] ip address 10.1.1.1 255.255.255.0
    [*CE1-GigabitEthernet1/0/0.1] quit
    [*CE1] commit

    # Configure CE2.

    [~CE2] interface gigabitethernet1/0/0.1
    [*CE2-GigabitEthernet1/0/0.1] vlan-type dot1q 10
    [*CE2-GigabitEthernet1/0/0.1] ip address 10.1.1.2 255.255.255.0
    [*CE2-GigabitEthernet1/0/0.1] quit
    [*CE2] commit

  12. Verify the configuration.

    After completing the configurations, run the display vsi name a2 verbose command on PE1. The command output shows that the VSI named a2 has set up a PW to PE2 and the VSI status is up.

    [~PE1] display vsi name a2 verbose
     ***VSI Name               : a2
        Work Mode              : normal
        Administrator VSI      : no
        Isolate Spoken         : disable
        VSI Index              : 1
        PW Signaling           : ldp
        Member Discovery Style : --
        Bridge-domain Mode     : disable
        PW MAC Learn Style     : unqualify
        Encapsulation Type     : vlan
        MTU                    : 1500
        Diffserv Mode          : uniform
        Service Class          : --
        Color                  : --
        DomainId               : 255
        Domain Name            :
        Ignore AcState         : disable
        P2P VSI                : disable
        Multicast Fast Switch  : disable
        Create Time            : 1 days, 8 hours, 46 minutes, 34 seconds
        VSI State              : up
        Resource Status        : --
    
        VSI ID                 : 2
       *Peer Router ID         : 3.3.3.9
        Negotiation-vc-id      : 2
        Encapsulation Type     : vlan
        primary or secondary   : primary
        ignore-standby-state   : no
        VC Label               : 18
        Peer Type              : dynamic
        Session                : up
        Tunnel ID              : 0x000000000300000001
        Broadcast Tunnel ID    : --
        Broad BackupTunnel ID  : --
        Tunnel Policy Name     : policy1
        CKey                   : 33
        NKey                   : 1610612843
        Stp Enable             : 0
        PwIndex                : 0
        Control Word           : disable
        BFD for PW             : unavailable
    
        Interface Name         : GigabitEthernet2/0/0.1
        State                  : up
        Ac Block State         : unblocked
        Access Port            : false
        Last Up Time           : 2012/09/10 10:14:46
        Total Up Time          : 1 days, 8 hours, 41 minutes, 37 seconds
    
      **PW Information:
    
       *Peer Ip Address        : 3.3.3.9
        PW State               : up
        Local VC Label         : 18
        Remote VC Label        : 18
        Remote Control Word    : disable
        PW Type                : label
        Local  VCCV            : alert lsp-ping bfd
        Remote VCCV            : alert lsp-ping bfd
        Tunnel ID              : 0x000000000300000001
        Broadcast Tunnel ID    : --
        Broad BackupTunnel ID  : --
        Ckey                   : 33
        Nkey                   : 1610612843
        Main PW Token          : 0x0
        Slave PW Token         : 0x0
        Tnl Type               : te
        OutInterface           : Tunnel10
        Backup OutInterface    : --
        Stp Enable             : 0
        Mac Flapping           : 0
        Monitor Group Name     : --
        PW Last Up Time        : 2012/09/11 09:19:12
        PW Total Up Time       : 1 days, 6 hours, 52 minutes, 3 seconds 

    Run the display vsi pw out-interface vsi a2 command on PE1. The command output shows that the outbound interface of the MPLS TE tunnel between 1.1.1.9 and 3.3.3.9 is Tunnel10.

    [~PE1] display vsi pw out-interface vsi a2
    Total: 1
    --------------------------------------------------------------------------------
    Vsi Name                        peer            vcid       interface
    --------------------------------------------------------------------------------
    a2                              3.3.3.9         2          Tunnel10

    Ping CE2 from CE1. The ping is successful.

    [~CE1] ping 10.1.1.2
      PING 10.1.1.2: 56  data bytes, press CTRL_C to break
        Reply from 10.1.1.2: bytes=56 Sequence=1 ttl=255 time=125 ms
        Reply from 10.1.1.2: bytes=56 Sequence=2 ttl=255 time=125 ms
        Reply from 10.1.1.2: bytes=56 Sequence=3 ttl=255 time=94 ms
        Reply from 10.1.1.2: bytes=56 Sequence=4 ttl=255 time=125 ms
        Reply from 10.1.1.2: bytes=56 Sequence=5 ttl=255 time=125 ms
      --- 10.1.1.2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 94/118/125 ms

Configuration Files

  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     ip address 10.1.1.1 255.255.255.0
    #
    return
  • PE1 configuration file

    #
    sysname PE1
    #
    mpls lsr-id 1.1.1.9
    #
    mpls
     mpls te
    #
    mpls l2vpn
    #
    vsi a2
     pwsignal ldp
      vsi-id 2
      peer 3.3.3.9 tnl-policy policy1
    #
    explicit-path path2pe2
     next sid label 16300 type prefix index 1
     next sid label 16200 type prefix index 2
    #
    mpls ldp
     #
     ipv4-family
    #
    mpls ldp remote-peer 3.3.3.9
     remote-ip 3.3.3.9
    #
    segment-routing
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.10.1.1 255.255.255.0
    #
    interface GigabitEthernet2/0/0
     undo shutdown
    #
    interface GigabitEthernet2/0/0.1
     vlan-type dot1q 10
     l2 binding vsi a2
    #
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     ospf prefix-sid absolute 16100
    #
    interface Tunnel10
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 3.3.3.9
     mpls te signal-protocol segment-routing
     mpls te reserved-for-binding
     mpls te tunnel-id 100
     mpls te path explicit-path path2pe2
    #
    ospf 1
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 16000 47999
     area 0.0.0.0
      network 1.1.1.9 0.0.0.0
      network 10.10.1.0 0.0.0.255
      mpls-te enable
    #
    tunnel-policy policy1
     tunnel binding destination 3.3.3.9 te Tunnel10
    #
    return
  • P configuration file

    #
    sysname P
    #
    mpls lsr-id 2.2.2.9
    #
    mpls
     mpls te
    #
    segment-routing
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.10.1.2 255.255.255.0
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.20.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     ospf prefix-sid absolute 16300
    #
    ospf 1
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 16000 47999
     area 0.0.0.0
      network 2.2.2.9 0.0.0.0
      network 10.10.1.0 0.0.0.255
      network 10.20.1.0 0.0.0.255
      mpls-te enable
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    mpls lsr-id 3.3.3.9
    #
    mpls
     mpls te
    #
    mpls l2vpn
    #
    vsi a2
     pwsignal ldp
      vsi-id 2      
      peer 1.1.1.9 tnl-policy policy1 
    #
    explicit-path path2pe1
     next sid label 16300 type prefix index 1
     next sid label 16100 type prefix index 2
    #
    mpls ldp
     #
     ipv4-family
    #
    mpls ldp remote-peer 1.1.1.9
     remote-ip 1.1.1.9
    #
    segment-routing
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.20.1.2 255.255.255.0
    #
    interface GigabitEthernet2/0/0
     undo shutdown
    #
    interface GigabitEthernet2/0/0.1
     vlan-type dot1q 10
     l2 binding vsi a2
    #
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     ospf prefix-sid absolute 16200
    #
    interface Tunnel10
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 1.1.1.9
     mpls te signal-protocol segment-routing
     mpls te reserved-for-binding
     mpls te tunnel-id 100
     mpls te path explicit-path path2pe1
    #
    ospf 1
     opaque-capability enable
     segment-routing mpls
     segment-routing global-block 16000 47999
     area 0.0.0.0
      network 3.3.3.9 0.0.0.0
      network 10.20.1.0 0.0.0.255
      mpls-te enable
    #
    tunnel-policy policy1
     tunnel binding destination 1.1.1.9 te Tunnel10
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     ip address 10.1.1.2 255.255.255.0
    #
    return

Example for Configuring BD EVPN IRB over SR-MPLS TE

This section provides an example for configuring BD EVPN IRB over SR-MPLS TE to transmit services.

Networking Requirements

On the network shown in Figure 1-2668, to allow different sites to communicate over the backbone network, configure EVPN and VPN to transmit Layer 2 and Layer 3 services. If sites belong to the same subnet, create an EVPN instance on each PE to store EVPN routes and implement Layer 2 forwarding based on MAC addresses. If sites belong to different subnets, create a VPN instance on each PE to store VPN routes. In this situation, Layer 2 traffic is terminated, and Layer 3 traffic is forwarded through a Layer 3 gateway. In this example, an SR-MPLS TE tunnel needs to be used to transmit services between the PEs.

Figure 1-2668 Configuring BD EVPN IRB over SR-MPLS TE

interface1, interface2, and sub-interface1.1 in this example represent GigabitEthernet1/0/0, GigabitEthernet2/0/0, and GigabitEthernet1/0/0.1, respectively.


Configuration Precautions

During the configuration process, note the following:

  • For the same EVPN instance, the export VPN target list of one site shares VPN targets with the import VPN target lists of the other sites. Conversely, the import VPN target list of one site shares VPN targets with the export VPN target lists of the other sites.

  • Using each PE's local loopback interface address as its EVPN source address is recommended.
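The matching rule above can be sketched with a minimal, hedged configuration fragment (the instance name evrfX and the RT values are illustrative, not taken from this example):

```
#
evpn vpn-instance evrfX bd-mode          # hypothetical instance on site A's PE
 vpn-target 100:1 export-extcommunity    # site A exports 100:1 ...
 vpn-target 200:1 import-extcommunity
#
evpn vpn-instance evrfX bd-mode          # hypothetical instance on site B's PE
 vpn-target 100:1 import-extcommunity    # ... so site B must import 100:1
 vpn-target 200:1 export-extcommunity    # and vice versa for 200:1
#
```

The procedure in this example instead uses the simpler form vpn-target 1:1, which adds the RT to both the import and export lists.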

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure an IGP to enable PE1, PE2, and the P to communicate.

  2. Configure an SR-MPLS TE tunnel on the backbone network.

  3. Configure an EVPN instance and a VPN instance on each PE.

  4. Configure a source address on each PE.

  5. Configure Layer 2 Ethernet sub-interfaces used by PEs to connect to CEs.

  6. Bind the VBDIF interface to a VPN instance on each PE.

  7. Configure and apply a tunnel policy to enable EVPN service recursion to the SR-MPLS TE tunnel.

  8. Establish a BGP EVPN peer relationship between PEs.

  9. Configure CEs and PEs to communicate.

Data Preparation

To complete the configuration, you need the following data:

  • EVPN instance named evrf1 and VPN instance named vpn1

  • RDs (100:1 and 200:1) and RT (1:1) of the EVPN instance evrf1 on PE1 and PE2, as well as RDs (100:2 and 200:2) and RT (2:2) of the VPN instance vpn1 on PE1 and PE2

Procedure

  1. Configure IP addresses for the interfaces connecting the PEs and the P, as shown in Figure 1-2668.

    # Configure PE1.

    <~HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.1 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure the P.

    <~HUAWEI> system-view
    [~HUAWEI] sysname P
    [*HUAWEI] commit
    [~P] interface loopback 1
    [*P-LoopBack1] ip address 2.2.2.2 32
    [*P-LoopBack1] quit
    [*P] interface gigabitethernet1/0/0
    [*P-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] ip address 10.2.1.1 24
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    <~HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.3 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  2. Configure an IGP to enable PE1, PE2, and the P to communicate. IS-IS is used as the IGP in this example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] network-entity 00.1111.1111.1111.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface GigabitEthernet 2/0/0
    [*PE1-GigabitEthernet2/0/0] isis enable 1
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure the P.

    [~P] isis 1
    [*P-isis-1] is-level level-2
    [*P-isis-1] network-entity 00.1111.1111.2222.00
    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis enable 1
    [*P-LoopBack1] quit
    [*P] interface GigabitEthernet 1/0/0
    [*P-GigabitEthernet1/0/0] isis enable 1
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface GigabitEthernet 2/0/0
    [*P-GigabitEthernet2/0/0] isis enable 1
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] network-entity 00.1111.1111.3333.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface GigabitEthernet 2/0/0
    [*PE2-GigabitEthernet2/0/0] isis enable 1
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    After completing the configurations, run the display isis peer command to check whether IS-IS neighbor relationships have been established between PE1 and the P and between PE2 and the P. If the State field displays Up, the neighbor relationships have been successfully established. Run the display ip routing-table command. The command output shows that the PEs have learned the route to each other's Loopback1 interface.

    The following example uses the command output on PE1.

    [~PE1] display isis peer
                              Peer information for ISIS(1)
                             
      System Id     Interface          Circuit Id        State HoldTime Type     PRI
    --------------------------------------------------------------------------------
    1111.1111.2222  GE2/0/0            1111.1111.2222.01  Up   8s       L2       64 
    
    Total Peer(s): 1
    [~PE1] display ip routing-table
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : _public_
             Destinations : 11       Routes : 11        
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface
    
            1.1.1.1/32  Direct  0    0             D   127.0.0.1       LoopBack1
            2.2.2.2/32  ISIS-L2 15   10            D   10.1.1.2        GigabitEthernet2/0/0
            3.3.3.3/32  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
           10.1.1.0/24  Direct  0    0             D   10.1.1.1        GigabitEthernet2/0/0
           10.1.1.1/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0
         10.1.1.255/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0
           10.2.1.0/24  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0
          127.0.0.1/32  Direct  0    0             D   127.0.0.1       InLoopBack0
    127.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0

  3. Configure an SR-MPLS TE tunnel on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.1
    [*PE1] mpls
    [*PE1-mpls] mpls te
    [*PE1-mpls] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-2
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid absolute 153700
    [*PE1-LoopBack1] quit
    [*PE1] explicit-path pe1tope2
    [*PE1-explicit-path-pe1tope2] next sid label 48121 type adjacency
    [*PE1-explicit-path-pe1tope2] next sid label 48120 type adjacency
    [*PE1-explicit-path-pe1tope2] quit
    [*PE1] interface tunnel1
    [*PE1-Tunnel1] ip address unnumbered interface loopback 1
    [*PE1-Tunnel1] tunnel-protocol mpls te
    [*PE1-Tunnel1] destination 3.3.3.3
    [*PE1-Tunnel1] mpls te tunnel-id 1
    [*PE1-Tunnel1] mpls te signal-protocol segment-routing
    [*PE1-Tunnel1] mpls te path explicit-path pe1tope2
    [*PE1-Tunnel1] mpls te reserved-for-binding
    [*PE1-Tunnel1] quit
    [*PE1] commit

    In the preceding steps, the next sid label command uses the adjacency labels of the PE1 -> P and P -> PE2 links, which are dynamically generated through IS-IS. Before the configuration, you can run the display segment-routing adjacency mpls forwarding command to query the label values. For example:

    [~PE1] display segment-routing adjacency mpls forwarding
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------
    48121     GE2/0/0           10.1.1.2         ISIS-V4     ---       1500      _public_      
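    The segment list in the explicit path is carried top-first in the MPLS label stack, and each node pops its own adjacency SID before forwarding the packet over that adjacency. The following Python sketch illustrates this behavior; the per-node tables mirror this example's labels and next hops but are illustrative, not a device data structure.

```python
# Minimal sketch of SR-MPLS adjacency-SID forwarding along an explicit path.
# Labels and next hops mirror this example's topology; the table structure
# itself is illustrative only.

# Per-node adjacency-label tables: label -> downstream node
ADJ_TABLES = {
    "PE1": {48121: "P"},    # PE1 -> P   (GE2/0/0, next hop 10.1.1.2)
    "P":   {48120: "PE2"},  # P   -> PE2 (GE2/0/0, next hop 10.2.1.2)
}

def forward(ingress: str, label_stack: list) -> list:
    """Walk the label stack from the ingress node, popping one
    adjacency SID per hop; returns the node path traversed."""
    node, path = ingress, [ingress]
    for label in label_stack:           # top of stack first
        node = ADJ_TABLES[node][label]  # pop the SID, cross the adjacency
        path.append(node)
    return path

# Explicit path pe1tope2: next sid label 48121, then 48120
print(forward("PE1", [48121, 48120]))  # ['PE1', 'P', 'PE2']
```

    Running the sketch with the pe1tope2 segment list yields the node sequence PE1 -> P -> PE2, matching the intended tunnel path.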

    # Configure the P.

    [~P] mpls lsr-id 2.2.2.2
    [*P] mpls
    [*P-mpls] mpls te
    [*P-mpls] quit
    [*P] segment-routing
    [*P-segment-routing] quit
    [*P] isis 1
    [*P-isis-1] cost-style wide
    [*P-isis-1] traffic-eng level-2
    [*P-isis-1] segment-routing mpls
    [*P-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis prefix-sid absolute 153710
    [*P-LoopBack1] quit
    [*P] commit

    After the configuration is complete, run the display segment-routing adjacency mpls forwarding command to query the automatically generated adjacency label. For example:

    [~P] display segment-routing adjacency mpls forwarding
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------
    48221     GE1/0/0           10.1.1.1         ISIS-V4     ---       1500      _public_ 
    48120     GE2/0/0           10.2.1.2         ISIS-V4     ---       1500      _public_      

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.3
    [*PE2] mpls
    [*PE2-mpls] mpls te
    [*PE2-mpls] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-2
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid absolute 153720
    [*PE2-LoopBack1] quit
    [*PE2] explicit-path pe2tope1
    [*PE2-explicit-path-pe2tope1] next sid label 48220 type adjacency
    [*PE2-explicit-path-pe2tope1] next sid label 48221 type adjacency
    [*PE2-explicit-path-pe2tope1] quit
    [*PE2] interface tunnel1
    [*PE2-Tunnel1] ip address unnumbered interface loopback 1
    [*PE2-Tunnel1] tunnel-protocol mpls te
    [*PE2-Tunnel1] destination 1.1.1.1
    [*PE2-Tunnel1] mpls te tunnel-id 1
    [*PE2-Tunnel1] mpls te signal-protocol segment-routing
    [*PE2-Tunnel1] mpls te path explicit-path pe2tope1
    [*PE2-Tunnel1] mpls te reserved-for-binding
    [*PE2-Tunnel1] quit
    [*PE2] commit

    In the preceding steps, the next sid label command uses the adjacency labels of the PE2 -> P and P -> PE1 links, which are dynamically generated through IS-IS. Before the configuration, you can run the display segment-routing adjacency mpls forwarding command to query the label values. For example:

    [~PE2] display segment-routing adjacency mpls forwarding
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------
    48220     GE2/0/0           10.2.1.1         ISIS-V4     ---       1500      _public_      

    After completing the configurations, run the display mpls te tunnel-interface command. The command output shows that the tunnel status is Up.

    The following example uses the command output on PE1.

    [~PE1] display mpls te tunnel-interface
        Tunnel Name       : Tunnel1
        Signalled Tunnel Name: -
        Tunnel State Desc : CR-LSP is Up
        Tunnel Attributes   :     
        Active LSP          : Primary LSP
        Traffic Switch      : - 
        Session ID          : 1
        Ingress LSR ID      : 1.1.1.1               Egress LSR ID: 3.3.3.3
        Admin State         : UP                    Oper State   : UP
        Signaling Protocol  : Segment-Routing
        FTid                : 1
        Tie-Breaking Policy : None                  Metric Type  : None
        Bfd Cap             : None                  
        Reopt               : Disabled              Reopt Freq   : -              
        Auto BW             : Disabled              Threshold    : - 
        Current Collected BW: -                     Auto BW Freq : -
        Min BW              : -                     Max BW       : -
        Offload             : Disabled              Offload Freq : - 
        Low Value           : -                     High Value   : - 
        Readjust Value      : - 
        Offload Explicit Path Name: -
        Tunnel Group        : Primary
        Interfaces Protected: -
        Excluded IP Address : -
        Referred LSP Count  : 0
        Primary Tunnel      : -                     Pri Tunn Sum : -
        Backup Tunnel       : -
        Group Status        : Up                    Oam Status   : None
        IPTN InLabel        : -                     Tunnel BFD Status : -
        BackUp LSP Type     : None                  BestEffort   : -
        Secondary HopLimit  : -
        BestEffort HopLimit  : -
        Secondary Explicit Path Name: -
        Secondary Affinity Prop/Mask: 0x0/0x0
        BestEffort Affinity Prop/Mask: -  
        IsConfigLspConstraint: -
        Hot-Standby Revertive Mode:  Revertive
        Hot-Standby Overlap-path:  Disabled
        Hot-Standby Switch State:  CLEAR
        Bit Error Detection:  Disabled
        Bit Error Detection Switch Threshold:  -
        Bit Error Detection Resume Threshold:  -
        Ip-Prefix Name    : -
        P2p-Template Name : -
        PCE Delegate      : No                    LSP Control Status : Local control
    
        Path Verification : No
        Entropy Label     : -
        Associated Tunnel Group ID: -             Associated Tunnel Group Type: -
        Auto BW Remain Time   : -                 Reopt Remain Time     : - 
        Segment-Routing Remote Label   : -
        Binding Sid       : -                     Reverse Binding Sid : - 
        FRR Attr Source   : -                     Is FRR degrade down : No
        Color             : - 
    
        Primary LSP ID      : 1.1.1.1:2
        LSP State           : UP                    LSP Type     : Primary
        Setup Priority      : 7                     Hold Priority: 7
        IncludeAll          : 0x0
        IncludeAny          : 0x0
        ExcludeAny          : 0x0
        Affinity Prop/Mask  : 0x0/0x0               Resv Style   :  SE
        SidProtectType      : - 
        Configured Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Actual Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Explicit Path Name  : pe1tope2                         Hop Limit: -
        Record Route        : -                            Record Label : Disabled
        Route Pinning       : -
        FRR Flag            : -
        IdleTime Remain     : -
        BFD Status          : -
        Soft Preemption     : -
        Reroute Flag        : -
        Pce Flag            : Normal
        Path Setup Type     : EXPLICIT
        Create Modify LSP Reason: -
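    Each absolute prefix SID configured above (153700, 153710, and 153720) must fall within the SRGB (153616 to 153800) advertised by IS-IS, and the SID's index is its offset from the SRGB base. The following Python sketch performs this sanity check; the values come from this example, but the check itself is illustrative.

```python
# Sanity-check sketch: an absolute prefix SID must fall inside the SRGB,
# and its advertised index is the offset from the SRGB base.

SRGB = (153616, 153800)  # segment-routing global-block base and end

def prefix_sid_index(absolute_sid: int, srgb=SRGB) -> int:
    """Return the index of an absolute prefix SID, or raise if it
    lies outside the SRGB."""
    base, end = srgb
    if not base <= absolute_sid <= end:
        raise ValueError(f"SID {absolute_sid} outside SRGB {srgb}")
    return absolute_sid - base

for sid in (153700, 153710, 153720):  # PE1, P, and PE2 loopback prefix SIDs
    print(sid, "-> index", prefix_sid_index(sid))
```

    For instance, PE1's prefix SID 153700 maps to index 84 (153700 - 153616) within this SRGB.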

  4. Configure an EVPN instance and a VPN instance on each PE.

    # Configure PE1.

    [~PE1] evpn vpn-instance evrf1 bd-mode
    [*PE1-evpn-instance-evrf1] route-distinguisher 100:1
    [*PE1-evpn-instance-evrf1] vpn-target 1:1
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] ip vpn-instance vpn1
    [*PE1-vpn-instance-vpn1] ipv4-family
    [*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:2
    [*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 2:2 evpn
    [*PE1-vpn-instance-vpn1-af-ipv4] quit
    [*PE1-vpn-instance-vpn1] evpn mpls routing-enable
    [*PE1-vpn-instance-vpn1] quit
    [*PE1] bridge-domain 10
    [*PE1-bd10] evpn binding vpn-instance evrf1
    [*PE1-bd10] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] evpn vpn-instance evrf1 bd-mode
    [*PE2-evpn-instance-evrf1] route-distinguisher 200:1
    [*PE2-evpn-instance-evrf1] vpn-target 1:1
    [*PE2-evpn-instance-evrf1] quit
    [*PE2] ip vpn-instance vpn1
    [*PE2-vpn-instance-vpn1] ipv4-family
    [*PE2-vpn-instance-vpn1-af-ipv4] route-distinguisher 200:2
    [*PE2-vpn-instance-vpn1-af-ipv4] vpn-target 2:2 evpn
    [*PE2-vpn-instance-vpn1-af-ipv4] quit
    [*PE2-vpn-instance-vpn1] evpn mpls routing-enable
    [*PE2-vpn-instance-vpn1] quit
    [*PE2] bridge-domain 10
    [*PE2-bd10] evpn binding vpn-instance evrf1
    [*PE2-bd10] quit
    [*PE2] commit

  5. Configure a source address on each PE.

    # Configure PE1.

    [~PE1] evpn source-address 1.1.1.1
    [*PE1] commit

    # Configure PE2.

    [~PE2] evpn source-address 3.3.3.3
    [*PE2] commit

  6. Configure Layer 2 Ethernet sub-interfaces used by PEs to connect to CEs.

    # Configure PE1.

    [~PE1] interface GigabitEthernet 1/0/0
    [*PE1-GigabitEthernet1/0/0] esi 0011.1111.1111.1111.1111
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface GigabitEthernet 1/0/0.1 mode l2
    [*PE1-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
    [*PE1-GigabitEthernet1/0/0.1] rewrite pop single
    [*PE1-GigabitEthernet1/0/0.1] bridge-domain 10
    [*PE1-GigabitEthernet1/0/0.1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] interface GigabitEthernet 1/0/0
    [*PE2-GigabitEthernet1/0/0] esi 0011.1111.1111.1111.2222
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface GigabitEthernet 1/0/0.1 mode l2
    [*PE2-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
    [*PE2-GigabitEthernet1/0/0.1] rewrite pop single
    [*PE2-GigabitEthernet1/0/0.1] bridge-domain 10
    [*PE2-GigabitEthernet1/0/0.1] quit
    [*PE2] commit

  7. Bind the VBDIF interface to a VPN instance on each PE.

    # Configure PE1.

    [~PE1] interface Vbdif10
    [*PE1-Vbdif10] ip binding vpn-instance vpn1
    [*PE1-Vbdif10] ip address 192.168.1.1 255.255.255.0
    [*PE1-Vbdif10] arp collect host enable
    [*PE1-Vbdif10] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] interface Vbdif10
    [*PE2-Vbdif10] ip binding vpn-instance vpn1
    [*PE2-Vbdif10] ip address 192.168.2.1 255.255.255.0
    [*PE2-Vbdif10] arp collect host enable
    [*PE2-Vbdif10] quit
    [*PE2] commit

  8. Configure and apply a tunnel policy to enable EVPN service recursion to the SR-MPLS TE tunnel.

    # Configure PE1.

    [~PE1] tunnel-policy srte
    [*PE1-tunnel-policy-srte] tunnel binding destination 3.3.3.3 te Tunnel1
    [*PE1-tunnel-policy-srte] quit
    [*PE1] evpn vpn-instance evrf1 bd-mode
    [*PE1-evpn-instance-evrf1] tnl-policy srte
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] ip vpn-instance vpn1
    [*PE1-vpn-instance-vpn1] tnl-policy srte evpn
    [*PE1-vpn-instance-vpn1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy srte
    [*PE2-tunnel-policy-srte] tunnel binding destination 1.1.1.1 te Tunnel1
    [*PE2-tunnel-policy-srte] quit
    [*PE2] evpn vpn-instance evrf1 bd-mode
    [*PE2-evpn-instance-evrf1] tnl-policy srte
    [*PE2-evpn-instance-evrf1] quit
    [*PE2] ip vpn-instance vpn1
    [*PE2-vpn-instance-vpn1] tnl-policy srte evpn
    [*PE2-vpn-instance-vpn1] quit
    [*PE2] commit

  9. Establish a BGP EVPN peer relationship between PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.3 as-number 100
    [*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
    [*PE1-bgp] l2vpn-family evpn
    [*PE1-bgp-af-evpn] peer 3.3.3.3 enable
    [*PE1-bgp-af-evpn] peer 3.3.3.3 advertise irb
    [*PE1-bgp-af-evpn] quit
    [*PE1-bgp] ipv4-family vpn-instance vpn1
    [*PE1-bgp-vpn1] import-route direct
    [*PE1-bgp-vpn1] advertise l2vpn evpn
    [*PE1-bgp-vpn1] quit
    [*PE1-bgp] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.1 as-number 100
    [*PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
    [*PE2-bgp] l2vpn-family evpn
    [*PE2-bgp-af-evpn] peer 1.1.1.1 enable
    [*PE2-bgp-af-evpn] peer 1.1.1.1 advertise irb
    [*PE2-bgp-af-evpn] quit
    [*PE2-bgp] ipv4-family vpn-instance vpn1
    [*PE2-bgp-vpn1] import-route direct
    [*PE2-bgp-vpn1] advertise l2vpn evpn
    [*PE2-bgp-vpn1] quit
    [*PE2-bgp] quit
    [*PE2] commit

    After completing the configurations, run the display bgp evpn peer command. The command output shows that the BGP peer relationships are in the Established state, indicating that BGP peer relationships have been successfully established between the PEs. The following example uses the command output on PE1.

    [~PE1] display bgp evpn peer
     
     BGP local router ID : 1.1.1.1
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      3.3.3.3         4         100        9        9     0 00:00:02 Established        5

  10. Configure CEs and PEs to communicate.

    # Configure CE1.

    [~CE1] interface GigabitEthernet 1/0/0.1 mode l2
    [*CE1-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
    [*CE1-GigabitEthernet1/0/0.1] rewrite pop single
    [*CE1-GigabitEthernet1/0/0.1] quit
    [*CE1] commit

    # Configure CE2.

    [~CE2] interface GigabitEthernet 1/0/0.1 mode l2
    [*CE2-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
    [*CE2-GigabitEthernet1/0/0.1] rewrite pop single
    [*CE2-GigabitEthernet1/0/0.1] quit
    [*CE2] commit

  11. Verify the configuration.

    Run the display bgp evpn all routing-table command on each PE. The command output shows EVPN routes sent from the remote PE. The following example uses the command output on PE1.

    [~PE1] display bgp evpn all routing-table
    
     Local AS number : 100
    
     BGP Local router ID is 1.1.1.1
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,
                   h - history,  i - internal, s - suppressed, S - Stale
                   Origin : i - IGP, e - EGP, ? - incomplete
    
     
     EVPN address family:
     Number of A-D Routes: 4
     Route Distinguisher: 100:1
           Network(ESI/EthTagId)                                  NextHop
     *>    0011.1111.1111.1111.1111:0                             127.0.0.1
     Route Distinguisher: 200:1
           Network(ESI/EthTagId)                                  NextHop
     *>i   0011.1111.1111.1111.2222:0                             3.3.3.3
     Route Distinguisher: 1.1.1.1:0
           Network(ESI/EthTagId)                                  NextHop
     *>    0011.1111.1111.1111.1111:4294967295                    127.0.0.1
     Route Distinguisher: 3.3.3.3:0
           Network(ESI/EthTagId)                                  NextHop
     *>i   0011.1111.1111.1111.2222:4294967295                    3.3.3.3
        
    
     EVPN-Instance evrf1:
     Number of A-D Routes: 3
           Network(ESI/EthTagId)                                  NextHop
     *>    0011.1111.1111.1111.1111:0                             127.0.0.1
     *>i   0011.1111.1111.1111.2222:0                             3.3.3.3
     *>i   0011.1111.1111.1111.2222:4294967295                    3.3.3.3
     
     EVPN address family:
     Number of Mac Routes: 2
     Route Distinguisher: 100:1
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop
     *>    0:48:00e0-fc12-7890:0:0.0.0.0                          0.0.0.0
     Route Distinguisher: 200:1
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop
     *>i   0:48:00e0-fc12-3456:0:0.0.0.0                          3.3.3.3
        
    
     EVPN-Instance evrf1:
     Number of Mac Routes: 2
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop
     *>i   0:48:00e0-fc12-3456:0:0.0.0.0                          3.3.3.3
     *>    0:48:00e0-fc12-7890:0:0.0.0.0                          0.0.0.0
     
     EVPN address family:
     Number of Inclusive Multicast Routes: 2
     Route Distinguisher: 100:1
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
     *>    0:32:1.1.1.1                                           127.0.0.1
     Route Distinguisher: 200:1
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
     *>i   0:32:3.3.3.3                                           3.3.3.3
        
    
     EVPN-Instance evrf1:
     Number of Inclusive Multicast Routes: 2
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
     *>    0:32:1.1.1.1                                           127.0.0.1
     *>i   0:32:3.3.3.3                                           3.3.3.3
     
     EVPN address family:
     Number of ES Routes: 2
     Route Distinguisher: 1.1.1.1:0
           Network(ESI)                                           NextHop
     *>    0011.1111.1111.1111.1111                               127.0.0.1
     Route Distinguisher: 3.3.3.3:0
           Network(ESI)                                           NextHop
     *>i   0011.1111.1111.1111.2222                               3.3.3.3
        
    
     EVPN-Instance evrf1:
     Number of ES Routes: 1
           Network(ESI)                                           NextHop
     *>    0011.1111.1111.1111.1111                               127.0.0.1
     
     EVPN address family:
     Number of Ip Prefix Routes: 2
     Route Distinguisher: 100:2
           Network(EthTagId/IpPrefix/IpPrefixLen)                 NextHop
     *>    0:192.168.1.0:24                                       0.0.0.0
     Route Distinguisher: 200:2
           Network(EthTagId/IpPrefix/IpPrefixLen)                 NextHop
     *>i   0:192.168.2.0:24                                       3.3.3.3

    Run the display bgp evpn all routing-table mac-route 0:48:00e0-fc12-3456:0:0.0.0.0 or display bgp evpn all routing-table prefix-route 0:192.168.2.0:24 command on PE1 to check detailed information about MAC routes or IP prefix routes, respectively. The command output shows the names of the tunnel interfaces to which the routes recurse.

    [~PE1] display bgp evpn all routing-table mac-route 0:48:00e0-fc12-3456:0:0.0.0.0
    
     BGP local router ID : 1.1.1.1
     Local AS number : 100
     Total routes of Route Distinguisher(200:1): 1
     BGP routing table entry information of 0:48:00e0-fc12-3456:0:0.0.0.0:
     Label information (Received/Applied): 54011 48182/NULL
     From: 3.3.3.3 (10.2.1.2) 
     Route Duration: 0d20h42m36s
     Relay IP Nexthop: 10.1.1.2
     Relay IP Out-Interface: GigabitEthernet2/0/0
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0
     Original nexthop: 3.3.3.3
     Qos information : 0x0
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>, Mac Mobility <flag:1 seq:0 res:0>
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20
     Route Type: 2 (MAC Advertisement Route)
     Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc12-3456/48, IP Address/Len: 0.0.0.0/0, ESI:0000.0000.0000.0000.0000
     Not advertised to any peer yet
     
        
    
     EVPN-Instance evrf1:
     Number of Mac Routes: 1
     BGP routing table entry information of 0:48:00e0-fc12-3456:0:0.0.0.0:
     Route Distinguisher: 200:1
     Remote-Cross route
     Label information (Received/Applied): 54011 48182/NULL
     From: 3.3.3.3 (10.2.1.2) 
     Route Duration: 0d20h42m36s
     Relay Tunnel Out-Interface: Tunnel1
     Original nexthop: 3.3.3.3
     Qos information : 0x0
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>, Mac Mobility <flag:1 seq:0 res:0>
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255
     Route Type: 2 (MAC Advertisement Route)
     Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc12-3456/48, IP Address/Len: 0.0.0.0/0, ESI:0000.0000.0000.0000.0000
     Not advertised to any peer yet
    [~PE1] display bgp evpn all routing-table prefix-route 0:192.168.2.0:24
    
     BGP local router ID : 1.1.1.1
     Local AS number : 100
     Total routes of Route Distinguisher(200:2): 1
     BGP routing table entry information of 0:192.168.2.0:24:
     Label information (Received/Applied): 48185/NULL
     From: 3.3.3.3 (10.2.1.2) 
     Route Duration: 0d20h38m31s
     Relay IP Nexthop: 10.1.1.2
     Relay IP Out-Interface: GigabitEthernet2/0/0
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0
     Original nexthop: 3.3.3.3
     Qos information : 0x0
     Ext-Community: RT <2 : 2>
     AS-path Nil, origin incomplete, MED 0, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20
     Route Type: 5 (Ip Prefix Route)
     Ethernet Tag ID: 0, IP Prefix/Len: 192.168.2.0/24, ESI: 0000.0000.0000.0000.0000, GW IP Address: 0.0.0.0
     Not advertised to any peer yet
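    The route keys in the preceding output encode their fields positionally, following the column headers in the display output (EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr for MAC routes, EthTagId/IpPrefix/IpPrefixLen for IP prefix routes). The following Python sketch shows how such keys split into fields; it is illustrative only and not a device tool.

```python
# Illustrative parsers for the EVPN route keys shown in the display output.
# Field layouts follow the column headers printed by the device.

def parse_mac_route(key: str) -> dict:
    """Split a MAC route key: EthTagId:MacAddrLen:MacAddr:IpAddrLen:IpAddr."""
    eth_tag, mac_len, mac, ip_len, ip = key.split(":")
    return {
        "eth_tag_id": int(eth_tag),
        "mac_len": int(mac_len),
        "mac": mac,               # e.g. 00e0-fc12-3456
        "ip_len": int(ip_len),
        "ip": ip,
    }

def parse_prefix_route(key: str) -> dict:
    """Split an IP prefix route key: EthTagId:IpPrefix:IpPrefixLen."""
    eth_tag, prefix, plen = key.split(":")
    return {"eth_tag_id": int(eth_tag), "prefix": prefix, "prefix_len": int(plen)}

# Keys taken verbatim from the verification output above.
mac = parse_mac_route("0:48:00e0-fc12-3456:0:0.0.0.0")
pfx = parse_prefix_route("0:192.168.2.0:24")
```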

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 100:1
     tnl-policy srte
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 100:2
      apply-label per-instance
      vpn-target 2:2 export-extcommunity evpn
      vpn-target 2:2 import-extcommunity evpn
      tnl-policy srte evpn
      evpn mpls routing-enable
    #
    mpls lsr-id 1.1.1.1
    #
    mpls
     mpls te
    #
    bridge-domain 10
     evpn binding vpn-instance evrf1
    #
    explicit-path pe1tope2
     next sid label 48121 type adjacency index 1
     next sid label 48120 type adjacency index 2
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.1111.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 192.168.1.1 255.255.255.0
     arp collect host enable
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     esi 0011.1111.1111.1111.1111
    #
    interface GigabitEthernet1/0/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153700 
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 3.3.3.3
     mpls te signal-protocol segment-routing
     mpls te reserved-for-binding
     mpls te tunnel-id 1
     mpls te path explicit-path pe1tope2  
    #
    bgp 100
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.3 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      policy vpn-target
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise irb
    #
    tunnel-policy srte
     tunnel binding destination 3.3.3.3 te Tunnel1
    #
    evpn source-address 1.1.1.1
    #
    return
  • P configuration file

    #
    sysname P
    #
    mpls lsr-id 2.2.2.2
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.2222.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153710 
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 200:1
     tnl-policy srte
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 200:2
      apply-label per-instance
      vpn-target 2:2 export-extcommunity evpn
      vpn-target 2:2 import-extcommunity evpn
      tnl-policy srte evpn
      evpn mpls routing-enable
    #
    mpls lsr-id 3.3.3.3
    #
    mpls
     mpls te
    #
    bridge-domain 10
     evpn binding vpn-instance evrf1
    #
    explicit-path pe2tope1
     next sid label 48220 type adjacency index 1
     next sid label 48221 type adjacency index 2
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.3333.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 192.168.2.1 255.255.255.0
     arp collect host enable
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     esi 0011.1111.1111.1111.2222
    #
    interface GigabitEthernet1/0/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #               
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153720 
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 1.1.1.1
     mpls te signal-protocol segment-routing
     mpls te reserved-for-binding
     mpls te tunnel-id 1
     mpls te path explicit-path pe2tope1  
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise irb
    #
    tunnel-policy srte
     tunnel binding destination 1.1.1.1 te Tunnel1
    #
    evpn source-address 3.3.3.3
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
    #
    return

Example for Configuring EVPN VPLS over SR-MPLS TE (EVPN Instance in BD Mode)

This section provides an example for configuring an SR-MPLS TE tunnel to carry EVPN VPLS services.

Networking Requirements

To allow different sites to communicate over the backbone network shown in Figure 1-2669, configure EVPN to implement Layer 2 service transmission. If the sites belong to the same subnet, create an EVPN instance on each PE to store EVPN routes and implement Layer 2 forwarding based on MAC address matching. In this example, an SR-MPLS TE tunnel is used to transmit services between the PEs.

Figure 1-2669 EVPN VPLS over SR-MPLS TE networking

Interface 1, interface 2, and sub-interface 1.1 in this example represent GE 1/0/0, GE 2/0/0, and GE 1/0/0.1, respectively.


Precautions

During the configuration, note the following:

  • You are advised to use the local loopback address of each PE as the source address of that PE.

  • This example uses an explicit path with specified adjacency SIDs to establish the SR-MPLS TE tunnel. Dynamically generated adjacency SIDs may change after a device restart, in which case any explicit path that references them must be reconfigured. To facilitate the use of explicit paths, you are advised to run the ipv4 adjacency command to manually configure static adjacency SIDs for such paths.
  • In this example, EVPN instances in BD mode need to be configured on the PEs. To achieve this, create a BD on each PE and bind the BD to a specific sub-interface.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IP addresses for interfaces.

  2. Configure an IGP to enable PE1, PE2, and the P to communicate with each other.

  3. Configure an SR-MPLS TE tunnel on the backbone network.

  4. Configure an EVPN instance on each PE.

  5. Configure an EVPN source address on each PE.

  6. Configure Layer 2 Ethernet sub-interfaces connecting the PEs to the CEs.

  7. Configure and apply a tunnel policy to enable EVPN service recursion to the SR-MPLS TE tunnel.

  8. Establish a BGP EVPN peer relationship between the PEs.

  9. Configure the CEs to communicate with the PEs.

Data Preparation

To complete the configuration, you need the following data:

  • EVPN instance name: evrf1

  • RDs (100:1 and 200:1) and RT (1:1) of the EVPN instance evrf1 on PE1 and PE2

Procedure

  1. Configure addresses for interfaces connecting the PEs and the P according to Figure 1-2669.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.1 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure the P.

    <HUAWEI> system-view
    [~HUAWEI] sysname P
    [*HUAWEI] commit
    [~P] interface loopback 1
    [*P-LoopBack1] ip address 2.2.2.2 32
    [*P-LoopBack1] quit
    [*P] interface gigabitethernet1/0/0
    [*P-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] ip address 10.2.1.1 24
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.3 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  2. Configure an IGP to enable PE1, PE2, and the P to communicate with each other. IS-IS is used as an example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] network-entity 00.1111.1111.1111.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface GigabitEthernet 2/0/0
    [*PE1-GigabitEthernet2/0/0] isis enable 1
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure the P.

    [~P] isis 1
    [*P-isis-1] is-level level-2
    [*P-isis-1] network-entity 00.1111.1111.2222.00
    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis enable 1
    [*P-LoopBack1] quit
    [*P] interface GigabitEthernet 1/0/0
    [*P-GigabitEthernet1/0/0] isis enable 1
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface GigabitEthernet 2/0/0
    [*P-GigabitEthernet2/0/0] isis enable 1
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] network-entity 00.1111.1111.3333.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface GigabitEthernet 2/0/0
    [*PE2-GigabitEthernet2/0/0] isis enable 1
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    After completing the configurations, run the display isis peer command to check whether IS-IS neighbor relationships have been established between PE1 and the P and between PE2 and the P. If Up is displayed in the State field of the command output, the neighbor relationships have been successfully established. You can also run the display ip routing-table command to check that the PEs have learned the route to each other's loopback 1 interface.

    The following example uses the command output on PE1.

    [~PE1] display isis peer
                              Peer information for ISIS(1)
                             
      System Id     Interface          Circuit Id        State HoldTime Type     PRI
    --------------------------------------------------------------------------------
    1111.1111.2222  GE2/0/0            1111.1111.2222.01  Up   8s       L2       64 
    
    Total Peer(s): 1
    [~PE1] display ip routing-table
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : _public_
             Destinations : 11       Routes : 11        
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface
    
            1.1.1.1/32  Direct  0    0             D   127.0.0.1       LoopBack1
            2.2.2.2/32  ISIS-L2 15   10            D   10.1.1.2        GigabitEthernet2/0/0
            3.3.3.3/32  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
           10.1.1.0/24  Direct  0    0             D   10.1.1.1        GigabitEthernet2/0/0
           10.1.1.1/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0
         10.1.1.255/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0
           10.2.1.0/24  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0
          127.0.0.1/32  Direct  0    0             D   127.0.0.1       InLoopBack0
    127.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0
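    The check above can also be scripted against the pasted CLI text. The following Python sketch scans a routing-table excerpt for the remote PE loopback learned via IS-IS; it is illustrative only (plain text matching, not a device API), using lines taken from the output above.

```python
# Illustrative check: confirm that a given prefix was learned via IS-IS
# by scanning display ip routing-table output pasted as text.

OUTPUT = """\
        1.1.1.1/32  Direct  0    0             D   127.0.0.1       LoopBack1
        2.2.2.2/32  ISIS-L2 15   10            D   10.1.1.2        GigabitEthernet2/0/0
        3.3.3.3/32  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
"""

def learned_via_isis(output: str, prefix: str) -> bool:
    """Return True if the prefix appears with an IS-IS protocol field."""
    for line in output.splitlines():
        fields = line.split()
        if fields and fields[0] == prefix and fields[1].startswith("ISIS"):
            return True
    return False

# PE1 has learned the route to PE2's loopback 1 interface via IS-IS.
assert learned_via_isis(OUTPUT, "3.3.3.3/32")
```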

  3. Configure an SR-MPLS TE tunnel on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.1
    [*PE1] mpls
    [*PE1-mpls] mpls te
    [*PE1-mpls] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-2
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid absolute 153700
    [*PE1-LoopBack1] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.1.1.1 remote-ip-addr 10.1.1.2 sid 330121
    [*PE1-segment-routing] quit
    [*PE1] explicit-path pe1tope2
    [*PE1-explicit-path-pe1tope2] next sid label 330121 type adjacency
    [*PE1-explicit-path-pe1tope2] next sid label 330120 type adjacency
    [*PE1-explicit-path-pe1tope2] quit
    [*PE1] interface tunnel1
    [*PE1-Tunnel1] ip address unnumbered interface loopback 1
    [*PE1-Tunnel1] tunnel-protocol mpls te
    [*PE1-Tunnel1] destination 3.3.3.3
    [*PE1-Tunnel1] mpls te tunnel-id 1
    [*PE1-Tunnel1] mpls te signal-protocol segment-routing
    [*PE1-Tunnel1] mpls te path explicit-path pe1tope2
    [*PE1-Tunnel1] mpls te reserved-for-binding
    [*PE1-Tunnel1] quit
    [*PE1] commit
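    As noted above, the SRGB range is device-specific. An absolute prefix SID must fall within the configured SRGB; the values below are the SRGB and prefix SIDs used in this example, and the check is a simple arithmetic sanity test (illustrative, not a device validation).

```python
# Sanity check (illustrative): an absolute prefix SID must lie inside the
# configured SRGB. Values are taken from this example's configuration.
SRGB_BEGIN, SRGB_END = 153616, 153800

def sid_in_srgb(sid: int) -> bool:
    """Return True if an absolute prefix SID lies within the SRGB."""
    return SRGB_BEGIN <= sid <= SRGB_END

# Prefix SIDs configured on PE1, the P, and PE2 in this example.
for sid in (153700, 153710, 153720):
    assert sid_in_srgb(sid)
```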

    # Configure the P.

    [~P] mpls lsr-id 2.2.2.2
    [*P] mpls
    [*P-mpls] mpls te
    [*P-mpls] quit
    [*P] segment-routing
    [*P-segment-routing] quit
    [*P] isis 1
    [*P-isis-1] cost-style wide
    [*P-isis-1] traffic-eng level-2
    [*P-isis-1] segment-routing mpls
    [*P-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis prefix-sid absolute 153710
    [*P-LoopBack1] quit
    [*P] segment-routing
    [*P-segment-routing] ipv4 adjacency local-ip-addr 10.1.1.2 remote-ip-addr 10.1.1.1 sid 330221
    [*P-segment-routing] ipv4 adjacency local-ip-addr 10.2.1.1 remote-ip-addr 10.2.1.2 sid 330120
    [*P-segment-routing] quit
    [*P] commit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.3
    [*PE2] mpls
    [*PE2-mpls] mpls te
    [*PE2-mpls] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-2
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid absolute 153720
    [*PE2-LoopBack1] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.2.1.2 remote-ip-addr 10.2.1.1 sid 330220
    [*PE2-segment-routing] quit
    [*PE2] explicit-path pe2tope1
    [*PE2-explicit-path-pe2tope1] next sid label 330220 type adjacency
    [*PE2-explicit-path-pe2tope1] next sid label 330221 type adjacency
    [*PE2-explicit-path-pe2tope1] quit
    [*PE2] interface tunnel1
    [*PE2-Tunnel1] ip address unnumbered interface loopback 1
    [*PE2-Tunnel1] tunnel-protocol mpls te
    [*PE2-Tunnel1] destination 1.1.1.1
    [*PE2-Tunnel1] mpls te tunnel-id 1
    [*PE2-Tunnel1] mpls te signal-protocol segment-routing
    [*PE2-Tunnel1] mpls te path explicit-path pe2tope1
    [*PE2-Tunnel1] mpls te reserved-for-binding
    [*PE2-Tunnel1] quit
    [*PE2] commit

    After completing the configurations, run the display mpls te tunnel-interface command. The command output shows that the tunnel state is up.

    The following example uses the command output on PE1.

    [~PE1] display mpls te tunnel-interface
    
        Tunnel Name       : Tunnel1                                
        Signalled Tunnel Name: -                                   
        Tunnel State Desc : CR-LSP is Up                           
        Tunnel Attributes   :                                      
        Active LSP          : Primary LSP                          
        Traffic Switch      : -                                    
        Session ID          : 1                                    
        Ingress LSR ID      : 1.1.1.1               Egress LSR ID: 3.3.3.3
        Admin State         : UP                    Oper State   : UP                       
        Signaling Protocol  : Segment-Routing                      
        FTid                : 1                                    
        Tie-Breaking Policy : None                  Metric Type  : TE 
        Bfd Cap             : None                                 
        Reopt               : Disabled              Reopt Freq   : - 
        Auto BW             : Disabled              Threshold    : - 
        Current Collected BW: -                     Auto BW Freq : - 
        Min BW              : -                     Max BW       : -
        Offload             : Disabled              Offload Freq : - 
        Low Value           : -                     High Value   : - 
        Readjust Value      : -                                    
        Offload Explicit Path Name: -                              
        Tunnel Group        : Primary                              
        Interfaces Protected: - 
        Excluded IP Address : -                                    
        Referred LSP Count  : 0                                    
        Primary Tunnel      : -                     Pri Tunn Sum : - 
        Backup Tunnel       : -                                    
        Group Status        : Up                    Oam Status   : None 
        IPTN InLabel        : -                     Tunnel BFD Status : -                                                          
        BackUp LSP Type     : None                  BestEffort   : -                                
        Secondary HopLimit  : -                                    
        BestEffort HopLimit  : -                                   
        Secondary Explicit Path Name: -                            
        Secondary Affinity Prop/Mask: 0x0/0x0                      
        BestEffort Affinity Prop/Mask: -                           
        IsConfigLspConstraint: -                                   
        Hot-Standby Revertive Mode:  Revertive                     
        Hot-Standby Overlap-path:  Disabled                        
        Hot-Standby Switch State:  CLEAR                           
        Bit Error Detection:  Disabled                             
        Bit Error Detection Switch Threshold:  -                   
        Bit Error Detection Resume Threshold:  -                   
        Ip-Prefix Name    : -                                      
        P2p-Template Name : -                                      
        PCE Delegate      : No                     LSP Control Status : Local control
        Path Verification : No                                     
        Entropy Label     : -                                      
        Associated Tunnel Group ID: -              Associated Tunnel Group Type: -
        Auto BW Remain Time   : -                  Reopt Remain Time     : - 
        Segment-Routing Remote Label   : -                         
        Binding Sid       : -                     Reverse Binding Sid : - 
        FRR Attr Source   : -                     Is FRR degrade down : -
        Color             : - 
    
        Primary LSP ID      : 1.1.1.1:1                            
        LSP State           : UP                    LSP Type     : Primary 
        Setup Priority      : 7                     Hold Priority: 7 
        IncludeAll          : 0x0                                  
        IncludeAny          : 0x0                                  
        ExcludeAny          : 0x0                                  
        Affinity Prop/Mask  : 0x0/0x0               Resv Style   :  SE
        SidProtectType      : - 
        Configured Bandwidth Information:                          
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0 
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0 
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Actual Bandwidth Information:                              
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0 
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0 
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0 
        Explicit Path Name  : pe1tope2                         Hop Limit: - 
        Record Route        : -                     Record Label : -
        Route Pinning       : -                                    
        FRR Flag            : -                                    
        IdleTime Remain     : -                                    
        BFD Status          : -                                    
        Soft Preemption     : -                                    
        Reroute Flag        : -                                    
        Pce Flag            : Normal                               
        Path Setup Type     : EXPLICIT                             
        Create Modify LSP Reason: -  

  4. Configure an EVPN instance on each PE.

    # Configure PE1.

    [~PE1] evpn vpn-instance evrf1 bd-mode
    [*PE1-evpn-instance-evrf1] route-distinguisher 100:1
    [*PE1-evpn-instance-evrf1] vpn-target 1:1
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] bridge-domain 10
    [*PE1-bd10] evpn binding vpn-instance evrf1
    [*PE1-bd10] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] evpn vpn-instance evrf1 bd-mode
    [*PE2-evpn-instance-evrf1] route-distinguisher 200:1
    [*PE2-evpn-instance-evrf1] vpn-target 1:1
    [*PE2-evpn-instance-evrf1] quit
    [*PE2] bridge-domain 10
    [*PE2-bd10] evpn binding vpn-instance evrf1
    [*PE2-bd10] quit
    [*PE2] commit

  5. Configure an EVPN source address on each PE.

    # Configure PE1.

    [~PE1] evpn source-address 1.1.1.1
    [*PE1] commit

    # Configure PE2.

    [~PE2] evpn source-address 3.3.3.3
    [*PE2] commit

  6. Configure Layer 2 Ethernet sub-interfaces connecting the PEs to the CEs.

    # Configure PE1.

    [~PE1] interface GigabitEthernet 1/0/0
    [*PE1-GigabitEthernet1/0/0] undo shutdown
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface GigabitEthernet 1/0/0.1 mode l2
    [*PE1-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
    [*PE1-GigabitEthernet1/0/0.1] rewrite pop single
    [*PE1-GigabitEthernet1/0/0.1] bridge-domain 10
    [*PE1-GigabitEthernet1/0/0.1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] interface GigabitEthernet 1/0/0
    [*PE2-GigabitEthernet1/0/0] undo shutdown
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface GigabitEthernet 1/0/0.1 mode l2
    [*PE2-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
    [*PE2-GigabitEthernet1/0/0.1] rewrite pop single
    [*PE2-GigabitEthernet1/0/0.1] bridge-domain 10
    [*PE2-GigabitEthernet1/0/0.1] quit
    [*PE2] commit

  7. Configure and apply a tunnel policy to enable EVPN service recursion to the SR-MPLS TE tunnel.

    # Configure PE1.

    [~PE1] tunnel-policy srte
    [*PE1-tunnel-policy-srte] tunnel binding destination 3.3.3.3 te Tunnel1
    [*PE1-tunnel-policy-srte] quit
    [*PE1] evpn vpn-instance evrf1 bd-mode
    [*PE1-evpn-instance-evrf1] tnl-policy srte
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy srte
    [*PE2-tunnel-policy-srte] tunnel binding destination 1.1.1.1 te Tunnel1
    [*PE2-tunnel-policy-srte] quit
    [*PE2] evpn vpn-instance evrf1 bd-mode
    [*PE2-evpn-instance-evrf1] tnl-policy srte
    [*PE2-evpn-instance-evrf1] quit
    [*PE2] commit
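
    Each tunnel policy above binds one remote PE address to one TE tunnel interface. On networks with many remote PEs, such stanzas are often generated from a destination-to-tunnel mapping. The following is a minimal sketch of that idea; the function name and the mapping are illustrative only, not part of the device CLI.

    ```python
    def tunnel_policy(name: str, bindings: dict[str, str]) -> str:
        """Render a tunnel policy with one 'tunnel binding' line per
        destination -> tunnel-interface pair, mirroring the CLI above."""
        lines = [f"tunnel-policy {name}"]
        lines += [f" tunnel binding destination {dst} te {tun}"
                  for dst, tun in sorted(bindings.items())]
        return "\n".join(lines)

    # PE1's policy from this example: remote PE2 (3.3.3.3) over Tunnel1.
    print(tunnel_policy("srte", {"3.3.3.3": "Tunnel1"}))
    # tunnel-policy srte
    #  tunnel binding destination 3.3.3.3 te Tunnel1
    ```

    The generated text matches the configuration-file form of the policy, so it can be pasted into the system view as-is.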

  8. Establish a BGP EVPN peer relationship between the PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.3 as-number 100
    [*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
    [*PE1-bgp] l2vpn-family evpn
    [*PE1-bgp-af-evpn] peer 3.3.3.3 enable
    [*PE1-bgp-af-evpn] peer 3.3.3.3 advertise irb
    [*PE1-bgp-af-evpn] quit
    [*PE1-bgp] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.1 as-number 100
    [*PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
    [*PE2-bgp] l2vpn-family evpn
    [*PE2-bgp-af-evpn] peer 1.1.1.1 enable
    [*PE2-bgp-af-evpn] peer 1.1.1.1 advertise irb
    [*PE2-bgp-af-evpn] quit
    [*PE2-bgp] quit
    [*PE2] commit

    After completing the configurations, run the display bgp evpn peer command to check whether the BGP peer relationship has been established between the PEs. If the Established state is displayed in the command output, the BGP peer relationship has been successfully established. The following example uses the command output on PE1.

    [~PE1] display bgp evpn peer
    
     BGP local router ID : 10.1.1.1                                
     Local AS number : 100                                         
     Total number of peers : 1                 Peers in established state : 1                                                           
    
      Peer                             V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv 
      3.3.3.3                          4         100       43       44     0 00:34:03 Established        1 
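
    The Established check above can also be automated when the command output is captured as text (for example, over an SSH session). The following sketch is illustrative only: the helper name and the abridged sample output are assumptions, not device APIs.

    ```python
    import re

    def evpn_peers_established(output: str) -> bool:
        """Return True if every peer row in 'display bgp evpn peer' output
        reports the Established state. Peer rows start with an IPv4 address."""
        rows = [line for line in output.splitlines()
                if re.match(r"\s*\d+\.\d+\.\d+\.\d+\s", line)]
        return bool(rows) and all("Established" in row for row in rows)

    # Abridged sample of the command output shown above.
    sample = """
     Peer     V   AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
     3.3.3.3  4  100       43       44     0 00:34:03 Established        1
    """
    print(evpn_peers_established(sample))  # True
    ```

    A row in any other state (for example, Idle or Active) makes the check fail, which is the condition to troubleshoot before proceeding.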

  9. Configure the CEs to communicate with the PEs.

    # Configure CE1.

    [~CE1] interface GigabitEthernet 1/0/0.1
    [*CE1-GigabitEthernet1/0/0.1] vlan-type dot1q 10
    [*CE1-GigabitEthernet1/0/0.1] ip address 172.16.1.1 24
    [*CE1-GigabitEthernet1/0/0.1] quit
    [*CE1] commit

    # Configure CE2.

    [~CE2] interface GigabitEthernet 1/0/0.1
    [*CE2-GigabitEthernet1/0/0.1] vlan-type dot1q 10
    [*CE2-GigabitEthernet1/0/0.1] ip address 172.16.1.2 24
    [*CE2-GigabitEthernet1/0/0.1] quit
    [*CE2] commit

  10. Verify the configuration.

    Run the display bgp evpn all routing-table command on each PE. The command output shows EVPN routes sent from the remote PE. The following example uses the command output on PE1.

    [~PE1] display bgp evpn all routing-table
    
     Local AS number : 100     
    
     BGP Local router ID is 10.1.1.1                              
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,                 
                   h - history,  i - internal, s - suppressed, S - Stale                             
                   Origin : i - IGP, e - EGP, ? - incomplete      
    
    
     EVPN address family:      
     Number of Mac Routes: 2   
     Route Distinguisher: 100:1                                   
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop                            
     *>    0:48:00e0-fc21-0302:0:0.0.0.0                          0.0.0.0                            
     Route Distinguisher: 200:1                                   
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop                            
     *>i   0:48:00e0-fc61-0300:0:0.0.0.0                          3.3.3.3                            
    
    
     EVPN-Instance evrf1:      
     Number of Mac Routes: 2   
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop 
     *>    0:48:00e0-fc21-0302:0:0.0.0.0                          0.0.0.0                            
     *>i   0:48:00e0-fc61-0300:0:0.0.0.0                          3.3.3.3                            
    
     EVPN address family:      
     Number of Inclusive Multicast Routes: 2                      
     Route Distinguisher: 100:1                                   
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop                            
     *>    0:32:1.1.1.1                                           127.0.0.1                          
     Route Distinguisher: 200:1                                   
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop                            
     *>i   0:32:3.3.3.3                                           3.3.3.3                            
    
    
     EVPN-Instance evrf1:      
     Number of Inclusive Multicast Routes: 2                      
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop                            
     *>    0:32:1.1.1.1                                           127.0.0.1                          
     *>i   0:32:3.3.3.3                                           3.3.3.3

    Run the display bgp evpn all routing-table mac-route 0:48:00e0-fc61-0300:0:0.0.0.0 command on PE1 to check details about the specified MAC route. The command output shows the name of the tunnel interface to which the route recurses.

    [~PE1] display bgp evpn all routing-table mac-route 0:48:00e0-fc61-0300:0:0.0.0.0 
     BGP local router ID : 10.1.1.1                               
     Local AS number : 100     
     Total routes of Route Distinguisher(200:1): 1                
     BGP routing table entry information of 0:48:00e0-fc61-0300:0:0.0.0.0:                           
     Label information (Received/Applied): 48122/NULL             
     From: 3.3.3.3 (10.2.1.2)  
     Route Duration: 0d00h01m40s                                  
     Relay IP Nexthop: 10.1.1.2                                   
     Relay IP Out-Interface: GigabitEthernet2/0/0                        
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                    
     Original nexthop: 3.3.3.3 
     Qos information : 0x0     
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20     
     Route Type: 2 (MAC Advertisement Route)                      
     Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc61-0300/48, IP Address/Len: 0.0.0.0/0, ESI:0000.0000.0000.0000.0000    
     Not advertised to any peer yet                               
    
    
    
     EVPN-Instance evrf1: 
     Number of Mac Routes: 1   
     BGP routing table entry information of 0:48:00e0-fc61-0300:0:0.0.0.0:                           
     Route Distinguisher: 200:1                                   
     Remote-Cross route        
     Label information (Received/Applied): 48122/NULL             
     From: 3.3.3.3 (10.2.1.2)  
     Route Duration: 0d00h01m40s                                  
     Relay Tunnel Out-Interface: Tunnel1                          
     Original nexthop: 3.3.3.3 
     Qos information : 0x0     
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255                 
     Route Type: 2 (MAC Advertisement Route)                      
     Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc61-0300/48, IP Address/Len: 0.0.0.0/0, ESI:0000.0000.0000.0000.0000     
     Not advertised to any peer yet  
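
    The route key used in the command above, 0:48:00e0-fc61-0300:0:0.0.0.0, follows the EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr notation. A small sketch of composing and parsing this notation (the class and field names are illustrative, not part of any device API):

    ```python
    from dataclasses import dataclass

    @dataclass
    class MacRouteKey:
        """EVPN Type-2 route key in the EthTagId/MacAddrLen/MacAddr/
        IpAddrLen/IpAddr notation used by the display commands above."""
        eth_tag: int
        mac: str          # 48-bit MAC in dotted format, e.g. "00e0-fc61-0300"
        ip: str = "0.0.0.0"
        ip_len: int = 0

        def __str__(self) -> str:
            return f"{self.eth_tag}:48:{self.mac}:{self.ip_len}:{self.ip}"

        @classmethod
        def parse(cls, text: str) -> "MacRouteKey":
            eth_tag, mac_len, mac, ip_len, ip = text.split(":")
            assert mac_len == "48", "MAC advertisement routes carry a 48-bit MAC"
            return cls(int(eth_tag), mac, ip, int(ip_len))

    key = MacRouteKey.parse("0:48:00e0-fc61-0300:0:0.0.0.0")
    print(key)  # 0:48:00e0-fc61-0300:0:0.0.0.0
    ```

    An IpAddrLen of 0 with IP address 0.0.0.0, as in this example, indicates a MAC-only advertisement with no associated host IP.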

    Run the display bgp evpn all routing-table inclusive-route 0:32:3.3.3.3 command on PE1 to check details about the specified inclusive multicast route. The command output shows the name of the tunnel interface to which the route recurses.

    [~PE1] display bgp evpn all routing-table inclusive-route 0:32:3.3.3.3
    
     BGP local router ID : 10.1.1.1                               
     Local AS number : 100     
     Total routes of Route Distinguisher(200:1): 1                
     BGP routing table entry information of 0:32:3.3.3.3:         
     Label information (Received/Applied): 48123/NULL             
     From: 3.3.3.3 (10.2.1.2)  
     Route Duration: 0d00h04m49s                                  
     Relay IP Nexthop: 10.1.1.2                                   
     Relay IP Out-Interface: GigabitEthernet2/0/0                        
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                    
     Original nexthop: 3.3.3.3 
     Qos information : 0x0     
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20      
     PMSI: Flags 0, Ingress Replication, Label 0:0:0(48123), Tunnel Identifier:3.3.3.3 
     Route Type: 3 (Inclusive Multicast Route)                    
     Ethernet Tag ID: 0, Originator IP:3.3.3.3/32                 
     Not advertised to any peer yet                               
    
    
    
     EVPN-Instance evrf1:      
     Number of Inclusive Multicast Routes: 1                      
     BGP routing table entry information of 0:32:3.3.3.3:         
     Route Distinguisher: 200:1                                   
     Remote-Cross route        
     Label information (Received/Applied): 48123/NULL             
     From: 3.3.3.3 (10.2.1.2)  
     Route Duration: 0d00h04m45s                                  
     Relay Tunnel Out-Interface: Tunnel1                          
     Original nexthop: 3.3.3.3 
     Qos information : 0x0     
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255                
     PMSI: Flags 0, Ingress Replication, Label 0:0:0(48123), Tunnel Identifier:3.3.3.3               
     Route Type: 3 (Inclusive Multicast Route)                    
     Ethernet Tag ID: 0, Originator IP:3.3.3.3/32                 
     Not advertised to any peer yet 

    Run the ping command on the CEs. The command output shows that the CEs belonging to the same VPN instance can ping each other. For example:

    [~CE1] ping 172.16.1.2                                     
      PING 172.16.1.2: 56  data bytes, press CTRL_C to break       
        Reply from 172.16.1.2: bytes=56 Sequence=1 ttl=255 time=11 ms                                      
        Reply from 172.16.1.2: bytes=56 Sequence=2 ttl=255 time=9 ms                              
        Reply from 172.16.1.2: bytes=56 Sequence=3 ttl=255 time=4 ms                             
        Reply from 172.16.1.2: bytes=56 Sequence=4 ttl=255 time=6 ms                             
        Reply from 172.16.1.2: bytes=56 Sequence=5 ttl=255 time=7 ms                          
    
      --- 172.16.1.2 ping statistics --- 
        5 packet(s) transmitted       
        5 packet(s) received                 
        0.00% packet loss                     
        round-trip min/avg/max = 4/7/11 ms  
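
    When connectivity checks like the one above are scripted, the ping statistics block can be parsed into structured values. This is a minimal sketch assuming the output is captured as text; the function name and abridged sample are hypothetical.

    ```python
    import re

    def ping_stats(output: str):
        """Extract (transmitted, received, loss_pct, rtt_min, rtt_avg, rtt_max)
        from ping statistics text like that shown above."""
        tx = int(re.search(r"(\d+) packet\(s\) transmitted", output).group(1))
        rx = int(re.search(r"(\d+) packet\(s\) received", output).group(1))
        loss = 100.0 * (tx - rx) / tx if tx else 0.0
        m = re.search(r"round-trip min/avg/max = (\d+)/(\d+)/(\d+) ms", output)
        return tx, rx, loss, *(int(v) for v in m.groups())

    # Abridged sample of the statistics block shown above.
    sample = """
      --- 172.16.1.2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 4/7/11 ms
    """
    print(ping_stats(sample))  # (5, 5, 0.0, 4, 7, 11)
    ```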

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 100:1
     tnl-policy srte
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.1
    #
    mpls
     mpls te
    #
    bridge-domain 10
     evpn binding vpn-instance evrf1
    #
    explicit-path pe1tope2
     next sid label 330121 type adjacency index 1
     next sid label 330120 type adjacency index 2
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.1.1.1 remote-ip-addr 10.1.1.2 sid 330121
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.1111.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153700 
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 3.3.3.3
     mpls te signal-protocol segment-routing
     mpls te reserved-for-binding
     mpls te tunnel-id 1
     mpls te path explicit-path pe1tope2  
    #
    bgp 100
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.3 enable
     #
     l2vpn-family evpn
      policy vpn-target
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise irb
    #
    tunnel-policy srte
     tunnel binding destination 3.3.3.3 te Tunnel1
    #
    evpn source-address 1.1.1.1
    #
    return
  • P configuration file

    #
    sysname P
    #
    mpls lsr-id 2.2.2.2
    #
    mpls
     mpls te
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.1.1.2 remote-ip-addr 10.1.1.1 sid 330221
     ipv4 adjacency local-ip-addr 10.2.1.1 remote-ip-addr 10.2.1.2 sid 330120
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.2222.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153710 
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 200:1
     tnl-policy srte
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    mpls lsr-id 3.3.3.3
    #
    mpls
     mpls te
    #
    bridge-domain 10
     evpn binding vpn-instance evrf1
    #
    explicit-path pe2tope1
     next sid label 330220 type adjacency index 1
     next sid label 330221 type adjacency index 2
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.2.1.2 remote-ip-addr 10.2.1.1 sid 330220
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.3333.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #               
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153720 
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 1.1.1.1
     mpls te signal-protocol segment-routing
     mpls te reserved-for-binding
     mpls te tunnel-id 1
     mpls te path explicit-path pe2tope1  
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
     #
     l2vpn-family evpn
      policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise irb
    #
    tunnel-policy srte
     tunnel binding destination 1.1.1.1 te Tunnel1
    #
    evpn source-address 3.3.3.3
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     ip address 172.16.1.1 255.255.255.0
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     ip address 172.16.1.2 255.255.255.0
    #
    return

Example for Configuring EVPN VPLS over SR-MPLS TE (Common EVPN Instance)

This section provides an example for configuring an SR-MPLS TE tunnel to carry EVPN VPLS services.

Networking Requirements

To allow different sites to communicate over the backbone network shown in Figure 1-2670, configure EVPN to provide Layer 2 service transmission. Because the sites belong to the same subnet, create an EVPN instance on each PE to store EVPN routes and implement MAC address-based Layer 2 forwarding. In this example, an SR-MPLS TE tunnel is used to transmit services between the PEs.

Figure 1-2670 EVPN VPLS over SR-MPLS TE networking

Interface 1, interface 2, and sub-interface 1.1 in this example represent GE 1/0/0, GE 2/0/0, and GE 1/0/0.1, respectively.


Precautions

During the configuration, note the following:

  • You are advised to use the local loopback address of each PE as the source address of that PE.

  • This example uses an explicit path with specified adjacency SIDs to establish the SR-MPLS TE tunnel. Dynamically generated adjacency SIDs may change after a device restart, in which case any explicit path that references them must be reconfigured. To keep explicit paths stable, you are advised to run the ipv4 adjacency command to manually configure adjacency SIDs for such paths.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IP addresses for interfaces.

  2. Configure an IGP to enable PE1, PE2, and the P to communicate with each other.

  3. Configure an SR-MPLS TE tunnel on the backbone network.

  4. Configure an EVPN instance on each PE.

  5. Configure an EVPN source address on each PE.

  6. Configure Layer 2 Ethernet sub-interfaces connecting the PEs to the CEs.

  7. Configure and apply a tunnel policy to enable EVPN service recursion to the SR-MPLS TE tunnel.

  8. Establish a BGP EVPN peer relationship between the PEs.

  9. Configure the CEs to communicate with the PEs.

Data Preparation

To complete the configuration, you need the following data:

  • EVPN instance name: evrf1

  • RDs (100:1 and 200:1) and RT (1:1) of the EVPN instance evrf1 on PE1 and PE2

Procedure

  1. Configure addresses for interfaces connecting the PEs and the P according to Figure 1-2670.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.1 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure the P.

    <HUAWEI> system-view
    [~HUAWEI] sysname P
    [*HUAWEI] commit
    [~P] interface loopback 1
    [*P-LoopBack1] ip address 2.2.2.2 32
    [*P-LoopBack1] quit
    [*P] interface gigabitethernet1/0/0
    [*P-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] ip address 10.2.1.1 24
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.3 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  2. Configure an IGP to enable PE1, PE2, and the P to communicate with each other. IS-IS is used as an example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] network-entity 00.1111.1111.1111.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface GigabitEthernet 2/0/0
    [*PE1-GigabitEthernet2/0/0] isis enable 1
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure the P.

    [~P] isis 1
    [*P-isis-1] is-level level-2
    [*P-isis-1] network-entity 00.1111.1111.2222.00
    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis enable 1
    [*P-LoopBack1] quit
    [*P] interface GigabitEthernet 1/0/0
    [*P-GigabitEthernet1/0/0] isis enable 1
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface GigabitEthernet 2/0/0
    [*P-GigabitEthernet2/0/0] isis enable 1
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] network-entity 00.1111.1111.3333.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface GigabitEthernet 2/0/0
    [*PE2-GigabitEthernet2/0/0] isis enable 1
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    After completing the configurations, run the display isis peer command to check whether an IS-IS neighbor relationship has been established between PE1 and the P and between PE2 and the P. If the Up state is displayed in the command output, the neighbor relationship has been successfully established. You can run the display ip routing-table command to check that the PEs have learned the route to each other's loopback 1 interface.

    The following example uses the command output on PE1.

    [~PE1] display isis peer
                              Peer information for ISIS(1)
                             
      System Id     Interface          Circuit Id        State HoldTime Type     PRI
    --------------------------------------------------------------------------------
    1111.1111.2222  GE2/0/0            1111.1111.2222.01  Up   8s       L2       64 
    
    Total Peer(s): 1
    [~PE1] display ip routing-table
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : _public_
             Destinations : 11       Routes : 11        
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface
    
            1.1.1.1/32  Direct  0    0             D   127.0.0.1       LoopBack1
            2.2.2.2/32  ISIS-L2 15   10            D   10.1.1.2        GigabitEthernet2/0/0
            3.3.3.3/32  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
           10.1.1.0/24  Direct  0    0             D   10.1.1.1        GigabitEthernet2/0/0
           10.1.1.1/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0
         10.1.1.255/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0
           10.2.1.0/24  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0
          127.0.0.1/32  Direct  0    0             D   127.0.0.1       InLoopBack0
    127.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0
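
    The routing-table check above simply confirms that specific destination prefixes are present. When automated, it reduces to a text search over the captured output; the following sketch and its sample text are illustrative assumptions.

    ```python
    def has_route(output: str, prefix: str) -> bool:
        """Check whether a Destination/Mask such as '3.3.3.3/32' appears
        in captured 'display ip routing-table' output."""
        return any(line.strip().startswith(prefix)
                   for line in output.splitlines())

    # Abridged sample of PE1's routing table shown above.
    sample = """
            1.1.1.1/32  Direct  0    0   D   127.0.0.1  LoopBack1
            2.2.2.2/32  ISIS-L2 15   10  D   10.1.1.2   GigabitEthernet2/0/0
            3.3.3.3/32  ISIS-L2 15   20  D   10.1.1.2   GigabitEthernet2/0/0
    """
    # PE1 must have learned the P's and PE2's loopback routes via IS-IS.
    print(all(has_route(sample, p) for p in ("2.2.2.2/32", "3.3.3.3/32")))  # True
    ```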

  3. Configure an SR-MPLS TE tunnel on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.1
    [*PE1] mpls
    [*PE1-mpls] mpls te
    [*PE1-mpls] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-2
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid absolute 153700
    [*PE1-LoopBack1] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.1.1.1 remote-ip-addr 10.1.1.2 sid 330121
    [*PE1-segment-routing] quit
    [*PE1] explicit-path pe1tope2
    [*PE1-explicit-path-pe1tope2] next sid label 330121 type adjacency
    [*PE1-explicit-path-pe1tope2] next sid label 330120 type adjacency
    [*PE1-explicit-path-pe1tope2] quit
    [*PE1] interface tunnel1
    [*PE1-Tunnel1] ip address unnumbered interface loopback 1
    [*PE1-Tunnel1] tunnel-protocol mpls te
    [*PE1-Tunnel1] destination 3.3.3.3
    [*PE1-Tunnel1] mpls te tunnel-id 1
    [*PE1-Tunnel1] mpls te signal-protocol segment-routing
    [*PE1-Tunnel1] mpls te path explicit-path pe1tope2
    [*PE1-Tunnel1] mpls te reserved-for-binding
    [*PE1-Tunnel1] quit
    [*PE1] commit

    # Configure the P.

    [~P] mpls lsr-id 2.2.2.2
    [*P] mpls
    [*P-mpls]  mpls te
    [*P-mpls] quit
    [*P] segment-routing
    [*P-segment-routing] quit
    [*P] isis 1
    [*P-isis-1] cost-style wide
    [*P-isis-1] traffic-eng level-2
    [*P-isis-1] segment-routing mpls
    [*P-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis prefix-sid absolute 153710
    [*P-LoopBack1] quit
    [*P] segment-routing
    [*P-segment-routing] ipv4 adjacency local-ip-addr 10.1.1.2 remote-ip-addr 10.1.1.1 sid 330221
    [*P-segment-routing] ipv4 adjacency local-ip-addr 10.2.1.1 remote-ip-addr 10.2.1.2 sid 330120
    [*P-segment-routing] quit
    [*P] commit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.3
    [*PE2] mpls
    [*PE2-mpls] mpls te
    [*PE2-mpls] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-2
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid absolute 153720
    [*PE2-LoopBack1] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.2.1.2 remote-ip-addr 10.2.1.1 sid 330220
    [*PE2-segment-routing] quit
    [*PE2] explicit-path pe2tope1
    [*PE2-explicit-path-pe2tope1] next sid label 330220 type adjacency
    [*PE2-explicit-path-pe2tope1] next sid label 330221 type adjacency
    [*PE2-explicit-path-pe2tope1] quit
    [*PE2] interface tunnel1
    [*PE2-Tunnel1] ip address unnumbered interface loopback 1
    [*PE2-Tunnel1] tunnel-protocol mpls te
    [*PE2-Tunnel1] destination 1.1.1.1
    [*PE2-Tunnel1] mpls te tunnel-id 1
    [*PE2-Tunnel1] mpls te signal-protocol segment-routing
    [*PE2-Tunnel1] mpls te path explicit-path pe2tope1
    [*PE2-Tunnel1] mpls te reserved-for-binding
    [*PE2-Tunnel1] quit
    [*PE2] commit
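
    Conceptually, the explicit path configured above becomes an MPLS label stack on the ingress PE: the first adjacency SID (330121, PE1 to P) sits on top and is popped first, then the next SID (330120, P to PE2) steers the packet onward. The following toy simulation illustrates this pop-and-forward behavior; the adjacency-to-link mapping is purely hypothetical.

    ```python
    def forward(stack: list[int], adj: dict[int, str]) -> list[str]:
        """Simulate SR-MPLS forwarding along an adjacency-SID label stack:
        each hop pops the top label and forwards over that adjacency."""
        hops = []
        while stack:
            sid = stack.pop(0)      # top of stack is consumed first
            hops.append(adj[sid])
        return hops

    # Explicit path pe1tope2 from this example.
    adj = {330121: "PE1->P", 330120: "P->PE2"}
    print(forward([330121, 330120], adj))  # ['PE1->P', 'P->PE2']
    ```

    This also shows why the SIDs must be listed in hop order in the explicit-path view: reversing them would steer the packet along a nonexistent sequence of adjacencies.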

    After completing the configurations, run the display mpls te tunnel-interface command. The command output shows that the tunnel state is up.

    The following example uses the command output on PE1.

    [~PE1] display mpls te tunnel-interface
    
        Tunnel Name       : Tunnel1                                
        Signalled Tunnel Name: -                                   
        Tunnel State Desc : CR-LSP is Up                           
        Tunnel Attributes   :                                      
        Active LSP          : Primary LSP                          
        Traffic Switch      : -                                    
        Session ID          : 1                                    
        Ingress LSR ID      : 1.1.1.1               Egress LSR ID: 3.3.3.3
        Admin State         : UP                    Oper State   : UP                       
        Signaling Protocol  : Segment-Routing                      
        FTid                : 1                                    
        Tie-Breaking Policy : None                  Metric Type  : TE 
        Bfd Cap             : None                                 
        Reopt               : Disabled              Reopt Freq   : - 
        Auto BW             : Disabled              Threshold    : - 
        Current Collected BW: -                     Auto BW Freq : - 
        Min BW              : -                     Max BW       : -
        Offload             : Disabled              Offload Freq : - 
        Low Value           : -                     High Value   : - 
        Readjust Value      : -                                    
        Offload Explicit Path Name: -                              
        Tunnel Group        : Primary                              
        Interfaces Protected: - 
        Excluded IP Address : -                                    
        Referred LSP Count  : 0                                    
        Primary Tunnel      : -                     Pri Tunn Sum : - 
        Backup Tunnel       : -                                    
        Group Status        : Up                    Oam Status   : None 
        IPTN InLabel        : -                     Tunnel BFD Status : -                                                          
        BackUp LSP Type     : None                  BestEffort   : -                                
        Secondary HopLimit  : -                                    
        BestEffort HopLimit  : -                                   
        Secondary Explicit Path Name: -                            
        Secondary Affinity Prop/Mask: 0x0/0x0                      
        BestEffort Affinity Prop/Mask: -                           
        IsConfigLspConstraint: -                                   
        Hot-Standby Revertive Mode:  Revertive                     
        Hot-Standby Overlap-path:  Disabled                        
        Hot-Standby Switch State:  CLEAR                           
        Bit Error Detection:  Disabled                             
        Bit Error Detection Switch Threshold:  -                   
        Bit Error Detection Resume Threshold:  -                   
        Ip-Prefix Name    : -                                      
        P2p-Template Name : -                                      
        PCE Delegate      : No                     LSP Control Status : Local control
        Path Verification : No                                     
        Entropy Label     : -                                      
        Associated Tunnel Group ID: -              Associated Tunnel Group Type: -
        Auto BW Remain Time   : -                  Reopt Remain Time     : - 
        Segment-Routing Remote Label   : -                         
        Binding Sid       : -                     Reverse Binding Sid : - 
        FRR Attr Source   : -                     Is FRR degrade down : -
        Color             : - 
    
        Primary LSP ID      : 1.1.1.1:1                            
        LSP State           : UP                    LSP Type     : Primary 
        Setup Priority      : 7                     Hold Priority: 7 
        IncludeAll          : 0x0                                  
        IncludeAny          : 0x0                                  
        ExcludeAny          : 0x0                                  
        Affinity Prop/Mask  : 0x0/0x0               Resv Style   :  SE
        SidProtectType      : - 
        Configured Bandwidth Information:                          
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0 
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0 
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Actual Bandwidth Information:                              
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0 
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0 
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0 
        Explicit Path Name  : pe1tope2                         Hop Limit: - 
        Record Route        : -                     Record Label : -
        Route Pinning       : -                                    
        FRR Flag            : -                                    
        IdleTime Remain     : -                                    
        BFD Status          : -                                    
        Soft Preemption     : -                                    
        Reroute Flag        : -                                    
        Pce Flag            : Normal                               
        Path Setup Type     : EXPLICIT                             
        Create Modify LSP Reason: -  

  4. Configure an EVPN instance on each PE.

    # Configure PE1.

    [~PE1] evpn vpn-instance evrf1
    [*PE1-evpn-instance-evrf1] route-distinguisher 100:1
    [*PE1-evpn-instance-evrf1] vpn-target 1:1
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] evpn vpn-instance evrf1
    [*PE2-evpn-instance-evrf1] route-distinguisher 200:1
    [*PE2-evpn-instance-evrf1] vpn-target 1:1
    [*PE2-evpn-instance-evrf1] quit
    [*PE2] commit

  5. Configure an EVPN source address on each PE.

    # Configure PE1.

    [~PE1] evpn source-address 1.1.1.1
    [*PE1] commit

    # Configure PE2.

    [~PE2] evpn source-address 3.3.3.3
    [*PE2] commit

  6. Configure Layer 2 Ethernet sub-interfaces connecting the PEs to the CEs.

    # Configure PE1.

    [~PE1] interface GigabitEthernet 1/0/0
    [*PE1-Gigabitethernet1/0/0] undo shutdown
    [*PE1-Gigabitethernet1/0/0] quit
    [*PE1] interface GigabitEthernet 1/0/0.1
    [*PE1-GigabitEthernet 1/0/0.1] vlan-type dot1q 10
    [*PE1-GigabitEthernet 1/0/0.1] evpn binding vpn-instance evrf1
    [*PE1-GigabitEthernet 1/0/0.1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] interface GigabitEthernet 1/0/0
    [*PE2-Gigabitethernet1/0/0] undo shutdown
    [*PE2-Gigabitethernet1/0/0] quit
    [*PE2] interface GigabitEthernet 1/0/0.1
    [*PE2-GigabitEthernet 1/0/0.1] vlan-type dot1q 10
    [*PE2-GigabitEthernet 1/0/0.1] evpn binding vpn-instance evrf1
    [*PE2-GigabitEthernet 1/0/0.1] quit
    [*PE2] commit

  7. Configure and apply a tunnel policy to enable EVPN service recursion to the SR-MPLS TE tunnel.

    # Configure PE1.

    [~PE1] tunnel-policy srte
    [*PE1-tunnel-policy-srte] tunnel binding destination 3.3.3.3 te Tunnel1
    [*PE1-tunnel-policy-srte] quit
    [*PE1] evpn vpn-instance evrf1
    [*PE1-evpn-instance-evrf1] tnl-policy srte
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy srte
    [*PE2-tunnel-policy-srte] tunnel binding destination 1.1.1.1 te Tunnel1
    [*PE2-tunnel-policy-srte] quit
    [*PE2] evpn vpn-instance evrf1
    [*PE2-evpn-instance-evrf1] tnl-policy srte
    [*PE2-evpn-instance-evrf1] quit
    [*PE2] commit

  8. Establish a BGP EVPN peer relationship between the PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.3 as-number 100
    [*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
    [*PE1-bgp] l2vpn-family evpn
    [*PE1-bgp-af-evpn] peer 3.3.3.3 enable
    [*PE1-bgp-af-evpn] quit
    [*PE1-bgp] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.1 as-number 100
    [*PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
    [*PE2-bgp] l2vpn-family evpn
    [*PE2-bgp-af-evpn] peer 1.1.1.1 enable
    [*PE2-bgp-af-evpn] quit
    [*PE2-bgp] quit
    [*PE2] commit

    After completing the configurations, run the display bgp evpn peer command to check whether a BGP EVPN peer relationship has been established between the PEs. If the peer state is Established in the command output, the peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp evpn peer
    
     BGP local router ID : 10.1.1.1                                
     Local AS number : 100                                         
     Total number of peers : 1                 Peers in established state : 1                                                           
    
      Peer                             V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv 
      3.3.3.3                          4         100       43       44     0 00:34:03 Established        1 

  9. Configure the CEs to communicate with the PEs.

    # Configure CE1.

    [~CE1] interface GigabitEthernet 1/0/0.1
    [*CE1-GigabitEthernet1/0/0.1] vlan-type dot1q 10
    [*CE1-GigabitEthernet1/0/0.1] ip address 172.16.1.1 24
    [*CE1-GigabitEthernet1/0/0.1] quit
    [*CE1] commit

    # Configure CE2.

    [~CE2] interface GigabitEthernet 1/0/0.1
    [*CE2-GigabitEthernet1/0/0.1] vlan-type dot1q 10
    [*CE2-GigabitEthernet1/0/0.1] ip address 172.16.1.2 24
    [*CE2-GigabitEthernet1/0/0.1] quit
    [*CE2] commit

  10. Verify the configuration.

    Run the display bgp evpn all routing-table command on each PE. The command output shows EVPN routes sent from the remote PE. The following example uses the command output on PE1.

    [~PE1] display bgp evpn all routing-table
    
     Local AS number : 100     
    
     BGP Local router ID is 10.1.1.1                              
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,                 
                   h - history,  i - internal, s - suppressed, S - Stale                             
                   Origin : i - IGP, e - EGP, ? - incomplete      
    
    
     EVPN address family:      
     Number of Mac Routes: 2   
     Route Distinguisher: 100:1                                   
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop                            
     *>    0:48:00e0-fc21-0302:0:0.0.0.0                          0.0.0.0                            
     Route Distinguisher: 200:1                                   
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop                            
     *>i   0:48:00e0-fc61-0300:0:0.0.0.0                          3.3.3.3                            
    
    
     EVPN-Instance evrf1:      
     Number of Mac Routes: 2   
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop 
     *>    0:48:00e0-fc21-0302:0:0.0.0.0                          0.0.0.0                            
     *>i   0:48:00e0-fc61-0300:0:0.0.0.0                          3.3.3.3                            
    
     EVPN address family:      
     Number of Inclusive Multicast Routes: 2                      
     Route Distinguisher: 100:1                                   
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop                            
     *>    0:32:1.1.1.1                                           127.0.0.1                          
     Route Distinguisher: 200:1                                   
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop                            
     *>i   0:32:3.3.3.3                                           3.3.3.3
    
    
     EVPN-Instance evrf1:      
     Number of Inclusive Multicast Routes: 2                      
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop                            
     *>    0:32:1.1.1.1                                           127.0.0.1                          
     *>i   0:32:3.3.3.3                                           3.3.3.3

    Run the display bgp evpn all routing-table mac-route 0:48:00e0-fc61-0300:0:0.0.0.0 command on PE1 to check details about the specified MAC route. The command output shows the name of the tunnel interface to which the route recurses.

    [~PE1] display bgp evpn all routing-table mac-route 0:48:00e0-fc61-0300:0:0.0.0.0 
     BGP local router ID : 10.1.1.1                               
     Local AS number : 100     
     Total routes of Route Distinguisher(200:1): 1                
     BGP routing table entry information of 0:48:00e0-fc61-0300:0:0.0.0.0:                           
     Label information (Received/Applied): 48122/NULL             
     From: 3.3.3.3 (10.2.1.2)  
     Route Duration: 0d00h01m40s                                  
     Relay IP Nexthop: 10.1.1.2                                   
     Relay IP Out-Interface: GigabitEthernet2/0/0                        
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                    
     Original nexthop: 3.3.3.3 
     Qos information : 0x0     
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20     
     Route Type: 2 (MAC Advertisement Route)                      
     Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc61-0300/48, IP Address/Len: 0.0.0.0/0, ESI:0000.0000.0000.0000.0000    
     Not advertised to any peer yet                               
    
    
    
     EVPN-Instance evrf1: 
     Number of Mac Routes: 1   
     BGP routing table entry information of 0:48:00e0-fc61-0300:0:0.0.0.0:                           
     Route Distinguisher: 200:1                                   
     Remote-Cross route        
     Label information (Received/Applied): 48122/NULL             
     From: 3.3.3.3 (10.2.1.2)  
     Route Duration: 0d00h01m40s                                  
     Relay Tunnel Out-Interface: Tunnel1                          
     Original nexthop: 3.3.3.3 
     Qos information : 0x0     
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255                 
     Route Type: 2 (MAC Advertisement Route)                      
     Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc61-0300/48, IP Address/Len: 0.0.0.0/0, ESI:0000.0000.0000.0000.0000     
     Not advertised to any peer yet  

    Run the display bgp evpn all routing-table inclusive-route 0:32:3.3.3.3 command on PE1 to check details about the specified inclusive multicast route. The command output shows the name of the tunnel interface to which the route recurses.

    [~PE1] display bgp evpn all routing-table inclusive-route 0:32:3.3.3.3
    
     BGP local router ID : 10.1.1.1                               
     Local AS number : 100     
     Total routes of Route Distinguisher(200:1): 1                
     BGP routing table entry information of 0:32:3.3.3.3:         
     Label information (Received/Applied): 48123/NULL             
     From: 3.3.3.3 (10.2.1.2)  
     Route Duration: 0d00h04m49s                                  
     Relay IP Nexthop: 10.1.1.2                                   
     Relay IP Out-Interface: GigabitEthernet2/0/0                        
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                    
     Original nexthop: 3.3.3.3 
     Qos information : 0x0     
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20      
     PMSI: Flags 0, Ingress Replication, Label 0:0:0(48123), Tunnel Identifier:3.3.3.3 
     Route Type: 3 (Inclusive Multicast Route)                    
     Ethernet Tag ID: 0, Originator IP:3.3.3.3/32                 
     Not advertised to any peer yet                               
    
    
    
     EVPN-Instance evrf1:      
     Number of Inclusive Multicast Routes: 1                      
     BGP routing table entry information of 0:32:3.3.3.3:         
     Route Distinguisher: 200:1                                   
     Remote-Cross route        
     Label information (Received/Applied): 48123/NULL             
     From: 3.3.3.3 (10.2.1.2)  
     Route Duration: 0d00h04m45s                                  
     Relay Tunnel Out-Interface: Tunnel1                          
     Original nexthop: 3.3.3.3 
     Qos information : 0x0     
     Ext-Community: RT <1 : 1>, SoO <3.3.3.3 : 0>  
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255                
     PMSI: Flags 0, Ingress Replication, Label 0:0:0(48123), Tunnel Identifier:3.3.3.3               
     Route Type: 3 (Inclusive Multicast Route)                    
     Ethernet Tag ID: 0, Originator IP:3.3.3.3/32                 
     Not advertised to any peer yet 

    Run the ping command on the CEs. The command output shows that the CEs belonging to the same VPN instance can ping each other. For example:

    [~CE1] ping 172.16.1.2                                     
      PING 172.16.1.2: 56  data bytes, press CTRL_C to break       
        Reply from 172.16.1.2: bytes=56 Sequence=1 ttl=255 time=11 ms                           
        Reply from 172.16.1.2: bytes=56 Sequence=2 ttl=255 time=9 ms                          
        Reply from 172.16.1.2: bytes=56 Sequence=3 ttl=255 time=4 ms                             
        Reply from 172.16.1.2: bytes=56 Sequence=4 ttl=255 time=6 ms                                 
        Reply from 172.16.1.2: bytes=56 Sequence=5 ttl=255 time=7 ms                          
    
      --- 172.16.1.2 ping statistics ---
        5 packet(s) transmitted         
        5 packet(s) received            
        0.00% packet loss          
        round-trip min/avg/max = 4/7/11 ms  

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    evpn vpn-instance evrf1
     route-distinguisher 100:1
     tnl-policy srte
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.1
    #
    mpls
     mpls te
    #
    explicit-path pe1tope2
     next sid label 330121 type adjacency index 1
     next sid label 330120 type adjacency index 2
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.1.1.1 remote-ip-addr 10.1.1.2 sid 330121
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.1111.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     evpn binding vpn-instance evrf1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153700 
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 3.3.3.3
     mpls te signal-protocol segment-routing
     mpls te reserved-for-binding
     mpls te tunnel-id 1
     mpls te path explicit-path pe1tope2  
    #
    bgp 100
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.3 enable
     #
     l2vpn-family evpn
      policy vpn-target
      peer 3.3.3.3 enable
    #
    tunnel-policy srte
     tunnel binding destination 3.3.3.3 te Tunnel1
    #
    evpn source-address 1.1.1.1
    #
    return
  • P configuration file

    #
    sysname P
    #
    mpls lsr-id 2.2.2.2
    #
    mpls
     mpls te
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.1.1.2 remote-ip-addr 10.1.1.1 sid 330221
     ipv4 adjacency local-ip-addr 10.2.1.1 remote-ip-addr 10.2.1.2 sid 330120
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.2222.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153710 
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    evpn vpn-instance evrf1
     route-distinguisher 200:1
     tnl-policy srte
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    mpls lsr-id 3.3.3.3
    #
    mpls
     mpls te
    #
    explicit-path pe2tope1
     next sid label 330220 type adjacency index 1
     next sid label 330221 type adjacency index 2
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.2.1.2 remote-ip-addr 10.2.1.1 sid 330220
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.3333.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     evpn binding vpn-instance evrf1
    #               
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153720 
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 1.1.1.1
     mpls te signal-protocol segment-routing
     mpls te reserved-for-binding
     mpls te tunnel-id 1
     mpls te path explicit-path pe2tope1  
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
     #
     l2vpn-family evpn
      policy vpn-target
      peer 1.1.1.1 enable
    #
    tunnel-policy srte
     tunnel binding destination 1.1.1.1 te Tunnel1
    #
    evpn source-address 3.3.3.3
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     ip address 172.16.1.1 255.255.255.0
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
    #
    interface GigabitEthernet1/0/0.1
     vlan-type dot1q 10
     ip address 172.16.1.2 255.255.255.0
    #
    return

Example for Configuring EVPN L3VPNv6 over SR-MPLS TE

This section provides an example for configuring EVPN L3VPNv6 to recurse traffic over an SR-MPLS TE tunnel.

Networking Requirements

On the network shown in Figure 1-2671, EVPN L3VPNv6 needs to be configured to transmit IPv6 Layer 3 services between CEs over the backbone network. In this example, PEs transmit service traffic over SR-MPLS TE tunnels.

Figure 1-2671 Configuring EVPN L3VPNv6 over SR-MPLS TE

Interfaces 1 and 2 in this example represent GigabitEthernet1/0/0 and GigabitEthernet2/0/0, respectively.


Configuration Notes

During the configuration process, note the following:

  • For the same VPN instance, the export VPN target list of a site must share at least one VPN target with the import VPN target lists of the other sites. Conversely, the import VPN target list of a site must share at least one VPN target with the export VPN target lists of the other sites. Otherwise, routes cannot be exchanged between the sites.

  • Using the local loopback interface address of each PE as the source address of the PE is recommended.
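
    As an illustration of the first note, the following minimal sketch (taken from the EVPN instance configuration in the preceding example, where both PEs use RT 1:1 in both directions) shows export and import VPN target lists that match each other:

    #
    evpn vpn-instance evrf1
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #

    Because each PE exports and imports the same RT (1:1), routes advertised by either site are accepted by the other.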

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure an IGP for PE1, PE2, and the P to communicate with each other.

  2. Configure an SR-MPLS TE tunnel on the backbone network.

  3. Configure an EVPN L3VPN instance on each PE and bind an interface to the EVPN L3VPN instance.

  4. Establish BGP EVPN peer relationships between PEs.

  5. Configure and apply a tunnel policy so that EVPN routes can recurse to SR-MPLS TE tunnels.

  6. Establish VPN BGP peer relationships between PEs and CEs.

Data Preparation

To complete the configuration, you need the following data:

  • Name of the IPv6 VPN instance: vpn1

  • RD (100:1) and RT (1:1) of the IPv6 VPN instance

Procedure

  1. Configure interface addresses of each device.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [~HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.1 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure the P.

    <HUAWEI> system-view
    [~HUAWEI] sysname P
    [~HUAWEI] commit
    [~P] interface loopback 1
    [*P-LoopBack1] ip address 2.2.2.2 32
    [*P-LoopBack1] quit
    [*P] interface gigabitethernet1/0/0
    [*P-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] ip address 10.2.1.1 24
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [~HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.3 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  2. Configure an IGP for PE1, PE2, and the P to communicate with each other. IS-IS is used as the IGP in this example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] network-entity 00.1111.1111.1111.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface GigabitEthernet 2/0/0
    [*PE1-GigabitEthernet2/0/0] isis enable 1
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure the P.

    [~P] isis 1
    [*P-isis-1] is-level level-2
    [*P-isis-1] network-entity 00.1111.1111.2222.00
    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis enable 1
    [*P-LoopBack1] quit
    [*P] interface GigabitEthernet 1/0/0
    [*P-GigabitEthernet1/0/0] isis enable 1
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface GigabitEthernet 2/0/0
    [*P-GigabitEthernet2/0/0] isis enable 1
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] network-entity 00.1111.1111.3333.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface GigabitEthernet 2/0/0
    [*PE2-GigabitEthernet2/0/0] isis enable 1
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    After completing the configurations, run the display isis peer command to check that the IS-IS neighbor relationships between PE1, PE2, and the P are Up. Then run the display ip routing-table command to check that the PEs have learned the routes to each other's Loopback1.

    The following example uses the command output on PE1.

    [~PE1] display isis peer
                              Peer information for ISIS(1)
                             
      System Id     Interface         Circuit Id         State HoldTime Type     PRI
    --------------------------------------------------------------------------------
    1111.1111.2222  GE2/0/0           1111.1111.2222.02  Up    7s       L2       64
    
    Total Peer(s): 1
    [~PE1] display ip routing-table
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : _public_
             Destinations : 11       Routes : 11        
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface
    
            1.1.1.1/32  Direct  0    0             D   127.0.0.1       LoopBack1
            2.2.2.2/32  ISIS-L2 15   10            D   10.1.1.2        GigabitEthernet2/0/0
            3.3.3.3/32  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
           10.1.1.0/24  Direct  0    0             D   10.1.1.1        GigabitEthernet2/0/0
           10.1.1.1/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0
         10.1.1.255/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0
           10.2.1.0/24  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0
          127.0.0.1/32  Direct  0    0             D   127.0.0.1       InLoopBack0
    127.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0

  3. Configure an SR-MPLS TE tunnel on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.1
    [*PE1] mpls
    [*PE1-mpls] mpls te
    [*PE1-mpls] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-2
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid absolute 153700
    [*PE1-LoopBack1] quit
    [*PE1] explicit-path pe1tope2
    [*PE1-explicit-path-pe1tope2] next sid label 48140 type adjacency
    [*PE1-explicit-path-pe1tope2] next sid label 48141 type adjacency
    [*PE1-explicit-path-pe1tope2] quit
    [*PE1] interface tunnel1
    [*PE1-Tunnel1] ip address unnumbered interface loopback 1
    [*PE1-Tunnel1] tunnel-protocol mpls te
    [*PE1-Tunnel1] destination 3.3.3.3
    [*PE1-Tunnel1] mpls te tunnel-id 1
    [*PE1-Tunnel1] mpls te signal-protocol segment-routing
    [*PE1-Tunnel1] mpls te path explicit-path pe1tope2
    [*PE1-Tunnel1] mpls te reserved-for-binding
    [*PE1-Tunnel1] quit
    [*PE1] commit

    The next sid label commands use the adjacency labels from PE1 to the P (48140) and from the P to PE2 (48141), which are dynamically generated by IS-IS. These adjacency labels can be obtained by running the display segment-routing adjacency mpls forwarding command on the corresponding device.

    [~PE1] display segment-routing adjacency mpls forwarding
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------
    48140     GE2/0/0           10.1.1.2         ISIS-V4     ---       1500      _public_

    # Configure the P.

    [~P] mpls lsr-id 2.2.2.2
    [*P] mpls
    [*P-mpls] mpls te
    [*P-mpls] quit
    [*P] segment-routing
    [*P-segment-routing] quit
    [*P] isis 1
    [*P-isis-1] cost-style wide
    [*P-isis-1] traffic-eng level-2
    [*P-isis-1] segment-routing mpls
    [*P-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis prefix-sid absolute 153710
    [*P-LoopBack1] quit
    [*P] commit

    After the configuration is complete, you can view the adjacency labels by running the display segment-routing adjacency mpls forwarding command.

    [~P] display segment-routing adjacency mpls forwarding
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------------
    48241     GE1/0/0           10.1.1.1         ISIS-V4     ---       1500      _public_ 
    48141     GE2/0/0           10.2.1.2         ISIS-V4     ---       1500      _public_      

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.3
    [*PE2] mpls
    [*PE2-mpls] mpls te
    [*PE2-mpls] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-2
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid absolute 153720
    [*PE2-LoopBack1] quit
    [*PE2] explicit-path pe2tope1
    [*PE2-explicit-path-pe2tope1] next sid label 48240 type adjacency
    [*PE2-explicit-path-pe2tope1] next sid label 48241 type adjacency
    [*PE2-explicit-path-pe2tope1] quit
    [*PE2] interface tunnel1
    [*PE2-Tunnel1] ip address unnumbered interface loopback 1
    [*PE2-Tunnel1] tunnel-protocol mpls te
    [*PE2-Tunnel1] destination 1.1.1.1
    [*PE2-Tunnel1] mpls te tunnel-id 1
    [*PE2-Tunnel1] mpls te signal-protocol segment-routing
    [*PE2-Tunnel1] mpls te path explicit-path pe2tope1
    [*PE2-Tunnel1] mpls te reserved-for-binding
    [*PE2-Tunnel1] quit
    [*PE2] commit

    The next sid label commands use the adjacency labels from PE2 to the P (48240) and from the P to PE1 (48241), which are dynamically generated by IS-IS. These adjacency labels can be obtained using the display segment-routing adjacency mpls forwarding command.

    [~PE2] display segment-routing adjacency mpls forwarding
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------------
    48240     GE2/0/0           10.2.1.1         ISIS-V4     ---       1500      _public_      

    After completing the configurations, run the display mpls te tunnel-interface command to check that the tunnel interface is Up.

    The following example uses the command output on PE1.

    [~PE1] display mpls te tunnel-interface
        Tunnel Name       : Tunnel1
        Signalled Tunnel Name: -
        Tunnel State Desc : CR-LSP is Up
        Tunnel Attributes   :     
        Active LSP          : Primary LSP
        Traffic Switch      : - 
        Session ID          : 1
        Ingress LSR ID      : 1.1.1.1               Egress LSR ID: 3.3.3.3
        Admin State         : UP                    Oper State   : UP
        Signaling Protocol  : Segment-Routing
        FTid                : 1
        Tie-Breaking Policy : None                  Metric Type  : TE
        Bfd Cap             : None                  
        Reopt               : Disabled              Reopt Freq   : -              
        Auto BW             : Disabled              Threshold    : - 
        Current Collected BW: -                     Auto BW Freq : -
        Min BW              : -                     Max BW       : -
        Offload             : Disabled              Offload Freq : - 
        Low Value           : -                     High Value   : - 
        Readjust Value      : - 
        Offload Explicit Path Name: -
        Tunnel Group        : Primary
        Interfaces Protected: -
        Excluded IP Address : -
        Referred LSP Count  : 0
        Primary Tunnel      : -                     Pri Tunn Sum : -
        Backup Tunnel       : -
        Group Status        : Up                    Oam Status   : None
        IPTN InLabel        : -                     Tunnel BFD Status : -
        BackUp LSP Type     : None                  BestEffort   : -
        Secondary HopLimit  : -
        BestEffort HopLimit  : -
        Secondary Explicit Path Name: -
        Secondary Affinity Prop/Mask: 0x0/0x0
        BestEffort Affinity Prop/Mask: -  
        IsConfigLspConstraint: -
        Hot-Standby Revertive Mode:  Revertive
        Hot-Standby Overlap-path:  Disabled
        Hot-Standby Switch State:  CLEAR
        Bit Error Detection:  Disabled
        Bit Error Detection Switch Threshold:  -
        Bit Error Detection Resume Threshold:  -
        Ip-Prefix Name    : -
        P2p-Template Name : -
        PCE Delegate      : No                    LSP Control Status : Local control
        Path Verification : No
        Entropy Label     : -
        Associated Tunnel Group ID: -             Associated Tunnel Group Type: -
        Auto BW Remain Time   : -                 Reopt Remain Time     : - 
        Segment-Routing Remote Label   : -
        Binding Sid       : -                     Reverse Binding Sid : - 
        FRR Attr Source   : -                     Is FRR degrade down : No
        Color             : - 
    
        Primary LSP ID      : 1.1.1.1:7
        LSP State           : UP                    LSP Type     : Primary
        Setup Priority      : 7                     Hold Priority: 7
        IncludeAll          : 0x0
        IncludeAny          : 0x0
        ExcludeAny          : 0x0
        Affinity Prop/Mask  : 0x0/0x0               Resv Style   :  SE
        SidProtectType      : - 
        Configured Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Actual Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Explicit Path Name  : pe1tope2                         Hop Limit: -
        Record Route        : -                            Record Label : -
        Route Pinning       : -
        FRR Flag            : -
        IdleTime Remain     : -
        BFD Status          : -
        Soft Preemption     : -
        Reroute Flag        : -
        Pce Flag            : Normal
        Path Setup Type     : EXPLICIT
        Create Modify LSP Reason: -

  4. Configure an EVPN L3VPN instance on each PE and bind an interface to the EVPN L3VPN instance.

    # Configure PE1.

    [~PE1] ip vpn-instance vpn1
    [*PE1-vpn-instance-vpn1] ipv6-family
    [*PE1-vpn-instance-vpn1-af-ipv6] route-distinguisher 100:1
    [*PE1-vpn-instance-vpn1-af-ipv6] vpn-target 1:1 evpn
    [*PE1-vpn-instance-vpn1-af-ipv6] evpn mpls routing-enable
    [*PE1-vpn-instance-vpn1-af-ipv6] quit
    [*PE1-vpn-instance-vpn1] quit
    [*PE1] interface GigabitEthernet 1/0/0
    [*PE1-GigabitEthernet1/0/0] ip binding vpn-instance vpn1
    [*PE1-GigabitEthernet1/0/0] ipv6 enable
    [*PE1-GigabitEthernet1/0/0] ipv6 address 2001:DB8:1::1 64
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpn1
    [*PE2-vpn-instance-vpn1] ipv6-family
    [*PE2-vpn-instance-vpn1-af-ipv6] route-distinguisher 100:1
    [*PE2-vpn-instance-vpn1-af-ipv6] vpn-target 1:1 evpn
    [*PE2-vpn-instance-vpn1-af-ipv6] evpn mpls routing-enable
    [*PE2-vpn-instance-vpn1-af-ipv6] quit
    [*PE2-vpn-instance-vpn1] quit
    [*PE2] interface GigabitEthernet 1/0/0
    [*PE2-GigabitEthernet1/0/0] ip binding vpn-instance vpn1
    [*PE2-GigabitEthernet1/0/0] ipv6 enable
    [*PE2-GigabitEthernet1/0/0] ipv6 address 2001:DB8:2::1 64
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

  5. Configure and apply a tunnel policy so that EVPN routes can recurse to SR-MPLS TE tunnels.

    # Configure PE1.

    [~PE1] tunnel-policy srte
    [*PE1-tunnel-policy-srte] tunnel binding destination 3.3.3.3 te Tunnel1
    [*PE1-tunnel-policy-srte] quit
    [*PE1] ip vpn-instance vpn1
    [*PE1-vpn-instance-vpn1] ipv6-family
    [*PE1-vpn-instance-vpn1-af-ipv6] tnl-policy srte evpn
    [*PE1-vpn-instance-vpn1-af-ipv6] quit
    [*PE1-vpn-instance-vpn1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy srte
    [*PE2-tunnel-policy-srte] tunnel binding destination 1.1.1.1 te Tunnel1
    [*PE2-tunnel-policy-srte] quit
    [*PE2] ip vpn-instance vpn1
    [*PE2-vpn-instance-vpn1] ipv6-family
    [*PE2-vpn-instance-vpn1-af-ipv6] tnl-policy srte evpn
    [*PE2-vpn-instance-vpn1-af-ipv6] quit
    [*PE2-vpn-instance-vpn1] quit
    [*PE2] commit
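    Conceptually, the tunnel binding above works like a destination-to-tunnel map consulted during route recursion. The following sketch (a hypothetical helper, not device behavior code) illustrates the effect on PE1: an EVPN route whose BGP next hop is 3.3.3.3 recurses to Tunnel1 rather than to whatever LSP the default tunnel selection would pick.

    ```python
    # Illustrative sketch (hypothetical helper, not device code): the effect
    # of "tunnel binding destination 3.3.3.3 te Tunnel1" once tnl-policy srte
    # is applied to the EVPN address family on PE1.

    PE1_BINDINGS = {"3.3.3.3": "Tunnel1"}  # from PE1's tunnel-policy srte

    def select_tunnel(next_hop, bindings=PE1_BINDINGS):
        # Routes whose next hop has no binding fall back to the default
        # tunnel selection behavior.
        return bindings.get(next_hop, "default LSP selection")
    ```
    
    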

  6. Establish BGP EVPN peer relationships between PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.3 as-number 100
    [*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
    [*PE1-bgp] l2vpn-family evpn
    [*PE1-bgp-af-evpn] peer 3.3.3.3 enable
    [*PE1-bgp-af-evpn] quit
    [*PE1-bgp] ipv6-family vpn-instance vpn1
    [*PE1-bgp-6-vpn1] import-route direct
    [*PE1-bgp-6-vpn1] advertise l2vpn evpn
    [*PE1-bgp-6-vpn1] quit
    [*PE1-bgp] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.1 as-number 100
    [*PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
    [*PE2-bgp] l2vpn-family evpn
    [*PE2-bgp-af-evpn] peer 1.1.1.1 enable
    [*PE2-bgp-af-evpn] quit
    [*PE2-bgp] ipv6-family vpn-instance vpn1
    [*PE2-bgp-6-vpn1] import-route direct
    [*PE2-bgp-6-vpn1] advertise l2vpn evpn
    [*PE2-bgp-6-vpn1] quit
    [*PE2-bgp] quit
    [*PE2] commit

    After completing the configurations, run the display bgp evpn peer command. The command output shows that BGP peer relationships have been established between PEs and are in the Established state. The following example uses the command output on PE1.

    [~PE1] display bgp evpn peer
     
     BGP local router ID : 1.1.1.1
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      3.3.3.3         4         100        9        9     0 00:00:02 Established        5

  7. Establish VPN BGP peer relationships between PEs and CEs.

    # Configure EBGP on PE1.

    [~PE1] bgp 100
    [*PE1-bgp] ipv6-family vpn-instance vpn1
    [*PE1-bgp-6-vpn1] peer 2001:DB8:1::2 as-number 65410
    [*PE1-bgp-6-vpn1] quit
    [*PE1-bgp] quit
    [*PE1] commit

    # Configure EBGP on CE1.

    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ipv6 enable
    [*CE1-LoopBack1] ipv6 address 2001:DB8:4::4 128
    [*CE1-LoopBack1] quit
    [*CE1] interface GigabitEthernet 1/0/0
    [*CE1-GigabitEthernet1/0/0] ipv6 enable
    [*CE1-GigabitEthernet1/0/0] ipv6 address 2001:DB8:1::2 64
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] router-id 4.4.4.4
    [*CE1-bgp] peer 2001:db8:1::1 as-number 100
    [*CE1-bgp] ipv6-family unicast
    [*CE1-bgp-af-ipv6] peer 2001:db8:1::1 enable
    [*CE1-bgp-af-ipv6] import-route direct
    [*CE1-bgp-af-ipv6] quit
    [*CE1-bgp] quit
    [*CE1] commit

    The configurations of PE2 and CE2 are similar to those of PE1 and CE1, respectively. For configuration details, see Configuration Files in this section.

  8. Verify the configuration.

    Run the display bgp evpn all routing-table command on a PE. The command output shows EVPN routes received from the peer PE. The following example uses the command output on PE1.

    [~PE1] display bgp evpn all routing-table
    
     Local AS number : 100
    
     BGP Local router ID is 1.1.1.1
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,
                   h - history,  i - internal, s - suppressed, S - Stale
                   Origin : i - IGP, e - EGP, ? - incomplete
    
     
     EVPN address family:
     Number of Ip Prefix Routes: 5
     Route Distinguisher: 100:1
           Network(EthTagId/IpPrefix/IpPrefixLen)                 NextHop
     *>    0:[2001:DB8:1::]:64                                    ::
     *                                                            ::2001:DB8:1::2
     *>i   0:[2001:DB8:2::]:64                                    ::FFFF:3.3.3.3
     *>    0:[2001:DB8:4::4]:128                                  ::2001:DB8:1::2
     *>i   0:[2001:DB8:5::5]:128                                  ::FFFF:3.3.3.3  
    [~PE1] display bgp evpn all routing-table prefix-route 0:[2001:DB8:5::5]:128
    
     BGP local router ID : 1.1.1.1                                                                                                     
     Local AS number : 100                                                                                                              
     Total routes of Route Distinguisher(100:1): 1                                                                                      
     BGP routing table entry information of 0:[2001:DB8:5::5]:128:                                                                      
     Label information (Received/Applied): 48120/NULL                                                                                   
     From: 3.3.3.3 (3.3.3.3)                                                                                                           
     Route Duration: 0d00h06m08s                                                                                                        
     Relay IP Nexthop: 10.1.1.2                                                                                                         
     Relay IP Out-Interface: GigabitEthernet2/0/0                                                                                               
     Relay Tunnel Out-Interface: GigabitEthernet2/0/0                                                                                          
     Original nexthop: ::FFFF:3.3.3.3                                                                                                   
     Qos information : 0x0                                                                                                              
     Ext-Community: RT <1 : 1>                                                                                                          
     AS-path 65420, origin incomplete, MED 0, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20            
     Route Type: 5 (Ip Prefix Route)                                                                                                    
     Ethernet Tag ID: 0, IPv6 Prefix/Len: 2001:DB8:5::5/128, ESI: 0000.0000.0000.0000.0000, GW IPv6 Address: ::                         
     Not advertised to any peer yet 
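    In the routing-table output above, the Network column encodes each EVPN IP prefix (Type 5) route as EthTagId:[prefix]:prefix-length. A minimal sketch of that display convention (formatting only, not a wire-format encoder):

    ```python
    # Sketch of the display convention used in the command output above for
    # EVPN IP prefix (Type 5) routes: EthTagId:[prefix]:prefix-length.

    def format_prefix_route(eth_tag_id, prefix, prefix_len):
        return f"{eth_tag_id}:[{prefix}]:{prefix_len}"

    # Matches the route queried above with the prefix-route keyword:
    route_key = format_prefix_route(0, "2001:DB8:5::5", 128)
    ```
    
    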

    Run the display ipv6 routing-table vpn-instance vpn1 command on the PEs to view the VPN instance routing table information. The following example uses the command output on PE1.

    [~PE1] display ipv6 routing-table vpn-instance vpn1   
    Routing Table : vpn1                                                                                                                
             Destinations : 6        Routes : 6                                                                                         
    
    Destination  : 2001:DB8:1::                            PrefixLength : 64                                                            
    NextHop      : 2001:DB8:1::1                           Preference   : 0                                                             
    Cost         : 0                                       Protocol     : Direct                                                        
    RelayNextHop : ::                                      TunnelID     : 0x0                                                           
    Interface    : GigabitEthernet1/0/0                    Flags        : D                                                             
    
    Destination  : 2001:DB8:1::1                           PrefixLength : 128                                                           
    NextHop      : ::1                                     Preference   : 0                                                             
    Cost         : 0                                       Protocol     : Direct                                                        
    RelayNextHop : ::                                      TunnelID     : 0x0                                                           
    Interface    : GigabitEthernet1/0/0                    Flags        : D                                                             
    
    Destination  : 2001:DB8:2::                            PrefixLength : 64                                                            
    NextHop      : ::FFFF:3.3.3.3                          Preference   : 255                                                           
    Cost         : 0                                       Protocol     : IBGP                                                          
    RelayNextHop : ::FFFF:0.0.0.0                          TunnelID     : 0x000000000300000001                                          
    Interface    : Tunnel1                                 Flags        : RD                                                            
    
    Destination  : 2001:DB8:4::4                           PrefixLength : 128                                                           
    NextHop      : 2001:DB8:1::2                           Preference   : 255 
    Cost         : 0                                       Protocol     : EBGP                                                          
    RelayNextHop : 2001:DB8:1::2                           TunnelID     : 0x0                                                           
    Interface    : GigabitEthernet1/0/0                    Flags        : RD                                                            
    
    Destination  : 2001:DB8:5::5                           PrefixLength : 128                                                           
    NextHop      : ::FFFF:3.3.3.3                          Preference   : 255                                                           
    Cost         : 0                                       Protocol     : IBGP                                                          
    RelayNextHop : ::FFFF:0.0.0.0                          TunnelID     : 0x000000000300000001                                          
    Interface    : Tunnel1                                 Flags        : RD                                                            
    
    Destination  : FE80::                                  PrefixLength : 10                                                            
    NextHop      : ::                                      Preference   : 0                                                             
    Cost         : 0                                       Protocol     : Direct                                                        
    RelayNextHop : ::                                      TunnelID     : 0x0                                                           
    Interface    : NULL0                                   Flags        : DB 

    Run the display ipv6 routing-table brief command on the CEs to view the IPv6 routing table information. The following example uses the command output on CE1.

    [~CE1] display ipv6 routing-table brief   
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route                                              
    ------------------------------------------------------------------------------                                                      
    Routing Table : _public_                                                                                                            
             Destinations : 9        Routes : 9                                                                                         
    Format :                                                                                                                            
    Destination/Mask                             Protocol                                                                               
    Nexthop                                      Interface                                                                              
    ------------------------------------------------------------------------------                                                      
     ::1/128                                     Direct                                                                                 
      ::1                                        InLoopBack0                                                                            
     ::FFFF:127.0.0.0/104                        Direct                                                                                 
      ::FFFF:127.0.0.1                           InLoopBack0                                                                            
     ::FFFF:127.0.0.1/128                        Direct                                                                                 
      ::1                                        InLoopBack0                                                                            
     2001:DB8:1::/64                             Direct                                                                                 
      2001:DB8:1::2                              GigabitEthernet1/0/0                                                                          
     2001:DB8:1::2/128                           Direct                                                                                 
      ::1                                        GigabitEthernet1/0/0                                                                          
     2001:DB8:2::/64                             EBGP                                                                                   
      2001:DB8:1::1                              GigabitEthernet1/0/0                                                                          
     2001:DB8:4::4/128                           Direct                                                                                 
      ::1                                        LoopBack1                                                                              
     2001:DB8:5::5/128                           EBGP 
      2001:DB8:1::1                              GigabitEthernet1/0/0                                                                          
     FE80::/10                                   Direct                                                                                 
      ::                                         NULL0 

    The CEs can ping each other. For example, CE1 can ping CE2 (2001:DB8:5::5).

    [~CE1] ping ipv6 2001:DB8:5::5                                                                                                       
      PING 2001:DB8:5::5 : 56  data bytes, press CTRL_C to break                                                                        
        Reply from 2001:DB8:5::5                                                                                                        
        bytes=56 Sequence=1 hop limit=61 time=385 ms                                                                                    
        Reply from 2001:DB8:5::5                                                                                                        
        bytes=56 Sequence=2 hop limit=61 time=24 ms                                                                                     
        Reply from 2001:DB8:5::5                                                                                                        
        bytes=56 Sequence=3 hop limit=61 time=25 ms                                                                                     
        Reply from 2001:DB8:5::5                                                                                                        
        bytes=56 Sequence=4 hop limit=61 time=24 ms                                                                                     
        Reply from 2001:DB8:5::5                                                                                                        
        bytes=56 Sequence=5 hop limit=61 time=18 ms                                                                                     
    
      --- 2001:DB8:5::5 ping statistics---                                                                                              
        5 packet(s) transmitted                                                                                                         
        5 packet(s) received                                                                                                            
        0.00% packet loss                                                                                                               
        round-trip min/avg/max=18/95/385 ms
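    The summary line of the ping output can be reproduced from the five RTT samples, and the reported hop limit hints at the path length (assuming CE2 sends replies with the common initial hop limit of 64, which is an assumption, not stated in the output):

    ```python
    # Sketch: deriving the ping summary above from its five RTT samples.
    rtts = [385, 24, 25, 24, 18]  # ms, from the five replies

    rtt_min, rtt_avg, rtt_max = min(rtts), sum(rtts) // len(rtts), max(rtts)
    # Matches "round-trip min/avg/max=18/95/385 ms".

    # Assuming an initial hop limit of 64, arrival at hop limit 61 implies
    # three routers on the return path: PE2, the P, and PE1.
    transit_hops = 64 - 61
    ```
    
    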

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance vpn1
     ipv6-family
      route-distinguisher 100:1
      apply-label per-instance
      vpn-target 1:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity evpn
      tnl-policy srte evpn
      evpn mpls routing-enable
    #
    mpls lsr-id 1.1.1.1
    #               
    mpls
     mpls te
    #
    explicit-path pe1tope2
     next sid label 48140 type adjacency index 1
     next sid label 48141 type adjacency index 2
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.1111.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:DB8:1::1/64
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153700 
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 3.3.3.3
     mpls te signal-protocol segment-routing
     mpls te reserved-for-binding
     mpls te tunnel-id 1
     mpls te path explicit-path pe1tope2  
    #
    bgp 100
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.3 enable
     #
     ipv6-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
      peer 2001:DB8:1::2 as-number 65410
     #
     l2vpn-family evpn
      policy vpn-target
      peer 3.3.3.3 enable
    #
    tunnel-policy srte
     tunnel binding destination 3.3.3.3 te Tunnel1
    #
    return
  • P configuration file

    #
    sysname P
    #
    mpls lsr-id 2.2.2.2
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.2222.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153710 
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpn1
     ipv6-family
      route-distinguisher 100:1
      apply-label per-instance
      vpn-target 1:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity evpn
      tnl-policy srte evpn
      evpn mpls routing-enable
    #
    mpls lsr-id 3.3.3.3
    #               
    mpls
     mpls te
    #
    explicit-path pe2tope1
     next sid label 48240 type adjacency index 1
     next sid label 48241 type adjacency index 2
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.3333.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:DB8:2::1/64
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153720 
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 1.1.1.1
     mpls te signal-protocol segment-routing
     mpls te reserved-for-binding
     mpls te tunnel-id 1
     mpls te path explicit-path pe2tope1  
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
     #
     ipv6-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
      peer 2001:DB8:2::2 as-number 65420
     #
     l2vpn-family evpn
      policy vpn-target
      peer 1.1.1.1 enable
    #
    tunnel-policy srte
     tunnel binding destination 1.1.1.1 te Tunnel1
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ipv6 enable
     ipv6 address 2001:DB8:1::2/64
    #
    interface LoopBack1
     ipv6 enable    
     ipv6 address 2001:DB8:4::4/128
    #
    bgp 65410
     router-id 4.4.4.4
     peer 2001:DB8:1::1 as-number 100
     #
     ipv4-family unicast
      undo synchronization
     #
     ipv6-family unicast
      undo synchronization
      import-route direct
      peer 2001:DB8:1::1 enable
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ipv6 enable
     ipv6 address 2001:DB8:2::2/64
    #               
    interface LoopBack1
     ipv6 enable
     ipv6 address 2001:DB8:5::5/128
    #
    bgp 65420
     router-id 5.5.5.5
     peer 2001:DB8:2::1 as-number 100
     #
     ipv4-family unicast
      undo synchronization
     #
     ipv6-family unicast
      undo synchronization
      import-route direct
      peer 2001:DB8:2::1 enable
    #
    return

Example for Configuring EVPN L3VPN HoVPN over SR-MPLS TE

This section provides an example for configuring EVPN L3VPN HoVPN over SR-MPLS TE to achieve network connectivity.

Networking Requirements

At present, an IP transport network uses L2VPN and L3VPN (HVPN) to carry Layer 2 and Layer 3 services, respectively. These protocols are complex to deploy and maintain. Because EVPN can carry both Layer 2 and Layer 3 services, many IP transport networks will evolve to EVPN to simplify service transport protocols. Specifically, L3VPN HVPN, which carries Layer 3 services, needs to evolve to EVPN L3VPN HVPN. On the network shown in Figure 1-2672, the UPE and SPE are connected at the access layer, and the SPE and NPE are connected at the aggregation layer. Before EVPN L3VPN HoVPN is deployed to implement E2E interworking, an IGP is deployed separately at the access and aggregation layers for communication at each layer. In an EVPN L3VPN HoVPN scenario, the UPE does not have specific routes to the NPE and can only send service data to the SPE over default routes, thereby implementing route isolation. An HoVPN can use devices with relatively weak route management capabilities as UPEs, reducing network deployment costs.

Figure 1-2672 EVPN L3VPN HoVPN over SR-MPLS TE networking

Interfaces 1 and 2 in this example represent GE 1/0/0 and GE 2/0/0, respectively.
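The route isolation described above can be pictured as a longest-prefix-match lookup on the UPE. The following is a conceptual sketch with illustrative addresses (not device code, and not this example's actual address plan): the UPE carries no specific routes to sites behind the NPE, only a default route whose next hop is the SPE, so every remote destination resolves to the SPE while local routes stay direct.

```python
import ipaddress

# Conceptual sketch of HoVPN route isolation (illustrative addresses):
# the UPE's VPN routing table holds a default route toward the SPE and
# its own local routes, but no specifics for sites behind the NPE.

UPE_TABLE = {
    ipaddress.ip_network("::/0"): "to SPE (default route)",
    ipaddress.ip_network("2001:db8:1::/64"): "direct (local access)",
}

def lookup(dst, table=UPE_TABLE):
    # Longest-prefix match: the most specific matching entry wins.
    dst = ipaddress.ip_address(dst)
    matches = [net for net in table if dst in net]
    return table[max(matches, key=lambda net: net.prefixlen)]
```

Any destination outside the UPE's local prefixes falls through to the default route, which is why the UPE can be a device with a small routing table.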


Precautions

During the configuration, note the following:

  • Using the local loopback address of each PE as the source address of the PE is recommended.

  • This example uses an explicit path with specified adjacency SIDs to establish the SR-MPLS TE tunnel. Dynamically generated adjacency SIDs may change after a device restart, in which case any explicit path that references them must be reconfigured. To facilitate the use of explicit paths, you are advised to run the ipv4 adjacency command to manually configure adjacency SIDs for such paths.
  • In this example, prefix SIDs are configured on involved loopback interfaces to generate an SR-MPLS BE path, which functions as the best-effort path when the SR-MPLS TE tunnel fails.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Deploy IGP between the UPE and SPE and also between the SPE and NPE. IS-IS is used in this example.

  2. Configure SR-MPLS TE on the UPE, SPE, P, and NPE.

  3. Create VPN instances on the UPE, SPE, and NPE.

  4. Bind access-side interfaces to the VPN instances on the UPE and NPE.

  5. Configure a default static VPN route on the SPE.

  6. Configure a route-policy on the NPE to prevent the NPE from receiving default routes.

  7. Establish a BGP EVPN peer relationship between the UPE and SPE and also between the SPE and NPE. In addition, perform configuration on the SPE to specify the UPE.

  8. Establish an EBGP peer relationship between CE1 and the UPE and also between CE2 and the NPE.
  9. Configure a tunnel policy on each PE to allow VPN routes to preferentially select SR-MPLS TE tunnels for forwarding.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of the UPE, SPE, P, and NPE: 1.1.1.9, 2.2.2.9, 3.3.3.9, and 4.4.4.9, respectively

  • EVPN instance name (vpn1) and RDs (100:1, 200:1, and 300:1) on the UPE, SPE, and NPE

  • VPN target of the EVPN instance: 2:2

Procedure

  1. Configure IP addresses (including loopback interface addresses) for the UPE, SPE, P, and NPE.

    # Configure the UPE.

    <HUAWEI> system-view
    [~HUAWEI] sysname UPE
    [*HUAWEI] commit
    [~UPE] interface loopback 1
    [*UPE-LoopBack1] ip address 1.1.1.9 32
    [*UPE-LoopBack1] quit
    [*UPE] interface gigabitethernet2/0/0
    [*UPE-GigabitEthernet2/0/0] ip address 10.0.1.1 24
    [*UPE-GigabitEthernet2/0/0] quit
    [*UPE] commit

    # Configure the SPE.

    <HUAWEI> system-view
    [~HUAWEI] sysname SPE
    [*HUAWEI] commit
    [~SPE] interface loopback 1
    [*SPE-LoopBack1] ip address 2.2.2.9 32
    [*SPE-LoopBack1] quit
    [~SPE] interface loopback 10
    [*SPE-LoopBack10] ip address 2.2.2.10 32
    [*SPE-LoopBack10] quit
    [*SPE] interface gigabitethernet1/0/0
    [*SPE-GigabitEthernet1/0/0] ip address 10.0.1.2 24
    [*SPE-GigabitEthernet1/0/0] quit
    [*SPE] interface gigabitethernet2/0/0
    [*SPE-GigabitEthernet2/0/0] ip address 10.1.1.1 24
    [*SPE-GigabitEthernet2/0/0] quit
    [*SPE] commit

    # Configure the P.

    <HUAWEI> system-view
    [~HUAWEI] sysname P
    [*HUAWEI] commit
    [~P] interface loopback 1
    [*P-LoopBack1] ip address 3.3.3.9 32
    [*P-LoopBack1] quit
    [*P] interface gigabitethernet1/0/0
    [*P-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface gigabitethernet2/0/0
    [*P-GigabitEthernet2/0/0] ip address 10.2.1.1 24
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure the NPE.

    <HUAWEI> system-view
    [~HUAWEI] sysname NPE
    [*HUAWEI] commit
    [~NPE] interface loopback 1
    [*NPE-LoopBack1] ip address 4.4.4.9 32
    [*NPE-LoopBack1] quit
    [*NPE] interface gigabitethernet2/0/0
    [*NPE-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*NPE-GigabitEthernet2/0/0] quit
    [*NPE] commit

  2. Deploy IGP. This example uses IS-IS. The UPE and SPE belong to IS-IS process 100, and the SPE, P, and NPE belong to IS-IS process 1.

    # Configure the UPE.

    [~UPE] isis 100
    [*UPE-isis-100] is-level level-1
    [*UPE-isis-100] network-entity 00.1111.1111.0000.00
    [*UPE-isis-100] quit
    [*UPE] interface loopback 1
    [*UPE-LoopBack1] isis enable 100
    [*UPE-LoopBack1] quit
    [*UPE] interface GigabitEthernet 2/0/0
    [*UPE-GigabitEthernet2/0/0] isis enable 100
    [*UPE-GigabitEthernet2/0/0] quit
    [*UPE] commit

    # Configure the SPE.

    [~SPE] isis 1
    [*SPE-isis-1] is-level level-2
    [*SPE-isis-1] network-entity 00.1111.1111.1111.00
    [*SPE-isis-1] quit
    [*SPE] interface loopback 1
    [*SPE-LoopBack1] isis enable 1
    [*SPE-LoopBack1] quit
    [*SPE] interface GigabitEthernet 2/0/0
    [*SPE-GigabitEthernet2/0/0] isis enable 1
    [*SPE-GigabitEthernet2/0/0] quit
    [*SPE] isis 100
    [*SPE-isis-100] is-level level-1
    [*SPE-isis-100] network-entity 00.1111.1111.0001.00
    [*SPE-isis-100] quit
    [*SPE] interface loopback 10
    [*SPE-LoopBack10] isis enable 100
    [*SPE-LoopBack10] quit
    [*SPE] interface GigabitEthernet 1/0/0
    [*SPE-GigabitEthernet1/0/0] isis enable 100
    [*SPE-GigabitEthernet1/0/0] quit
    [*SPE] commit

    # Configure the P.

    [~P] isis 1
    [*P-isis-1] is-level level-2
    [*P-isis-1] network-entity 00.1111.1111.2222.00
    [*P-isis-1] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis enable 1
    [*P-LoopBack1] quit
    [*P] interface GigabitEthernet 1/0/0
    [*P-GigabitEthernet1/0/0] isis enable 1
    [*P-GigabitEthernet1/0/0] quit
    [*P] interface GigabitEthernet 2/0/0
    [*P-GigabitEthernet2/0/0] isis enable 1
    [*P-GigabitEthernet2/0/0] quit
    [*P] commit

    # Configure the NPE.

    [~NPE] isis 1
    [*NPE-isis-1] is-level level-2
    [*NPE-isis-1] network-entity 00.1111.1111.3333.00
    [*NPE-isis-1] quit
    [*NPE] interface loopback 1
    [*NPE-LoopBack1] isis enable 1
    [*NPE-LoopBack1] quit
    [*NPE] interface GigabitEthernet 2/0/0
    [*NPE-GigabitEthernet2/0/0] isis enable 1
    [*NPE-GigabitEthernet2/0/0] quit
    [*NPE] commit

    After the configuration is complete, run the display isis route command. The command output shows that routes are learned properly. The following example uses the command output on the SPE.

    [~SPE] display isis 1 route 
    
                             Route information for ISIS(1)      
                             -----------------------------      
    
                            ISIS(1) Level-2 Forwarding Table    
                            --------------------------------    
    
    IPV4 Destination   IntCost    ExtCost ExitInterface     NextHop         Flags     
    -------------------------------------------------------------------------------   
    2.2.2.9/32         0          NULL    Loop1             Direct          D/-/L/-   
    3.3.3.9/32         10         NULL    GE2/0/0           10.1.1.2        A/-/-/-   
    4.4.4.9/32         20         NULL    GE2/0/0           10.1.1.2        A/-/-/-   
    10.1.1.0/24        10         NULL    GE2/0/0           Direct          D/-/L/-   
    10.2.1.0/24        20         NULL    GE2/0/0           10.1.1.2        A/-/-/-   
         Flags: D-Direct, A-Added to URT, L-Advertised in LSPs, S-IGP Shortcut,       
                U-Up/Down Bit Set, LP-Local Prefix-Sid          
         Protect Type: L-Link Protect, N-Node Protect 
    [~SPE] display isis 100 route                                
    
                             Route information for ISIS(100)    
                             -----------------------------      
    
                            ISIS(100) Level-1 Forwarding Table  
                            --------------------------------    
    
    IPV4 Destination   IntCost    ExtCost ExitInterface     NextHop         Flags     
    -------------------------------------------------------------------------------   
    1.1.1.9/32         10         NULL    GE1/0/0           10.0.1.1        A/-/-/-   
    2.2.2.10/32        0          NULL    Loop10            Direct          D/-/L/-   
    10.0.1.0/24        10         NULL    GE1/0/0           Direct          D/-/L/-   
         Flags: D-Direct, A-Added to URT, L-Advertised in LSPs, S-IGP Shortcut,       
                U-Up/Down Bit Set, LP-Local Prefix-Sid          
         Protect Type: L-Link Protect, N-Node Protect 

    The preceding command output shows that the routing information of IS-IS process 1 is isolated from that of IS-IS process 100.

    Run the display ip routing-table command. The command output shows information about the IP routing table. The following example uses the command output on the SPE.

    [~SPE] display ip routing-table  
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route       
    ------------------------------------------------------------------------------ 
    Routing Table : _public_  
             Destinations : 16       Routes : 16                                   
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface   
    
            1.1.1.9/32  ISIS-L1 15   10            D   10.0.1.1        GigabitEthernet1/0/0 
            2.2.2.9/32  Direct  0    0             D   127.0.0.1       LoopBack1   
           2.2.2.10/32  Direct  0    0             D   127.0.0.1       LoopBack10  
            3.3.3.9/32  ISIS-L2 15   10            D   10.1.1.2        GigabitEthernet2/0/0 
            4.4.4.9/32  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0 
           10.0.1.0/24  Direct  0    0             D   10.0.1.2        GigabitEthernet1/0/0 
           10.0.1.2/32  Direct  0    0             D   127.0.0.1       GigabitEthernet1/0/0 
         10.0.1.255/32  Direct  0    0             D   127.0.0.1       GigabitEthernet1/0/0 
           10.1.1.0/24  Direct  0    0             D   10.1.1.1        GigabitEthernet2/0/0 
           10.1.1.1/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0 
         10.1.1.255/32  Direct  0    0             D   127.0.0.1       GigabitEthernet2/0/0 
           10.2.1.0/24  ISIS-L2 15   20            D   10.1.1.2        GigabitEthernet2/0/0 
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0 
          127.0.0.1/32  Direct  0    0             D   127.0.0.1       InLoopBack0 
    127.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0 
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0
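When several devices must be verified, the route check can be scripted. The following Python sketch parses prefix, cost, and next-hop fields out of a captured display isis route output; the column layout is assumed from the sample above, and parse_isis_routes is an illustrative helper, not a device API.

```python
import re

def parse_isis_routes(output: str):
    """Extract (prefix, int_cost, next_hop) tuples from the forwarding-table
    section of a 'display isis route' capture."""
    routes = []
    for line in output.splitlines():
        # Match: IPv4 prefix, IntCost, ExtCost, ExitInterface, NextHop
        m = re.match(r"\s*(\d+\.\d+\.\d+\.\d+/\d+)\s+(\d+)\s+\S+\s+(\S+)\s+(\S+)", line)
        if m:
            prefix, cost, exit_if, nexthop = m.groups()
            routes.append((prefix, int(cost), nexthop))
    return routes

sample = """
2.2.2.9/32         0          NULL    Loop1             Direct          D/-/L/-
3.3.3.9/32         10         NULL    GE2/0/0           10.1.1.2        A/-/-/-
4.4.4.9/32         20         NULL    GE2/0/0           10.1.1.2        A/-/-/-
"""
print(parse_isis_routes(sample))
```

Such a helper makes it easy to assert, for example, that the NPE loopback 4.4.4.9/32 has been learned with the expected next hop before moving on to the SR-MPLS TE configuration.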

  3. Configure SR-MPLS TE tunnel functions between the SPE and NPE.

    # Configure the SPE.

    [~SPE] mpls lsr-id 2.2.2.9
    [*SPE] mpls
    [*SPE-mpls] mpls te
    [*SPE-mpls] quit
    [*SPE] segment-routing
    [*SPE-segment-routing] quit
    [*SPE] isis 1
    [*SPE-isis-1] cost-style wide
    [*SPE-isis-1] traffic-eng level-2
    [*SPE-isis-1] segment-routing mpls
    [*SPE-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*SPE-isis-1] quit
    [*SPE] segment-routing
    [*SPE-segment-routing] ipv4 adjacency local-ip-addr 10.1.1.1 remote-ip-addr 10.1.1.2 sid 330121
    [*SPE-segment-routing] quit
    [*SPE] interface loopback 1
    [*SPE-LoopBack1] isis prefix-sid absolute 153711
    [*SPE-LoopBack1] quit
    [*SPE] explicit-path p1
    [*SPE-explicit-path-p1] next sid label 330121 type adjacency
    [*SPE-explicit-path-p1] next sid label 330120 type adjacency
    [*SPE-explicit-path-p1] quit
    [*SPE] interface tunnel1
    [*SPE-Tunnel1] ip address unnumbered interface loopback 1
    [*SPE-Tunnel1] tunnel-protocol mpls te
    [*SPE-Tunnel1] destination 4.4.4.9
    [*SPE-Tunnel1] mpls te tunnel-id 1
    [*SPE-Tunnel1] mpls te signal-protocol segment-routing
    [*SPE-Tunnel1] mpls te path explicit-path p1
    [*SPE-Tunnel1] quit
    [*SPE] commit
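With isis prefix-sid absolute, the chosen label must fall within the SRGB configured under the IS-IS process, and the SID index is the offset into that block. The following Python sketch performs this sanity check; check_prefix_sid is an illustrative helper, not a device command.

```python
def check_prefix_sid(srgb_base: int, srgb_end: int, absolute_sid: int) -> int:
    """Return the SID index implied by an absolute prefix SID, or raise if
    the label falls outside the SRGB. Mirrors the manual check done when
    choosing 'isis prefix-sid absolute' values."""
    if not srgb_base <= absolute_sid <= srgb_end:
        raise ValueError(f"SID {absolute_sid} outside SRGB [{srgb_base}, {srgb_end}]")
    return absolute_sid - srgb_base

# SRGB 153616-153800 from this example; the SPE uses absolute SID 153711.
print(check_prefix_sid(153616, 153800, 153711))
```

In this example, 153711 lies in the SRGB 153616-153800 and maps to index 95; because all three backbone nodes advertise the same SRGB, they install the same label for the SPE's loopback.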

    # Configure the P.

    [~P] mpls lsr-id 3.3.3.9
    [*P] mpls
    [*P-mpls] mpls te
    [*P-mpls] quit
    [*P] segment-routing
    [*P-segment-routing] quit
    [*P] isis 1
    [*P-isis-1] cost-style wide
    [*P-isis-1] traffic-eng level-2
    [*P-isis-1] segment-routing mpls
    [*P-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P-isis-1] quit
    [*P] segment-routing
    [*P-segment-routing] ipv4 adjacency local-ip-addr 10.1.1.2 remote-ip-addr 10.1.1.1 sid 330221
    [*P-segment-routing] ipv4 adjacency local-ip-addr 10.2.1.1 remote-ip-addr 10.2.1.2 sid 330120
    [*P-segment-routing] quit
    [*P] interface loopback 1
    [*P-LoopBack1] isis prefix-sid absolute 153721
    [*P-LoopBack1] quit
    [*P] commit

    # Configure the NPE.

    [~NPE] mpls lsr-id 4.4.4.9
    [*NPE] mpls
    [*NPE-mpls] mpls te
    [*NPE-mpls] quit
    [*NPE] segment-routing
    [*NPE-segment-routing] quit
    [*NPE] isis 1
    [*NPE-isis-1] cost-style wide
    [*NPE-isis-1] traffic-eng level-2
    [*NPE-isis-1] segment-routing mpls
    [*NPE-isis-1] segment-routing global-block 153616 153800

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*NPE-isis-1] quit
    [*NPE] segment-routing
    [*NPE-segment-routing] ipv4 adjacency local-ip-addr 10.2.1.2 remote-ip-addr 10.2.1.1 sid 330220
    [*NPE-segment-routing] quit
    [*NPE] interface loopback 1
    [*NPE-LoopBack1] isis prefix-sid absolute 153731
    [*NPE-LoopBack1] quit
    [*NPE] explicit-path p1
    [*NPE-explicit-path-p1] next sid label 330220 type adjacency
    [*NPE-explicit-path-p1] next sid label 330221 type adjacency
    [*NPE-explicit-path-p1] quit
    [*NPE] interface tunnel1
    [*NPE-Tunnel1] ip address unnumbered interface loopback 1
    [*NPE-Tunnel1] tunnel-protocol mpls te
    [*NPE-Tunnel1] destination 2.2.2.9
    [*NPE-Tunnel1] mpls te tunnel-id 1
    [*NPE-Tunnel1] mpls te signal-protocol segment-routing
    [*NPE-Tunnel1] mpls te path explicit-path p1
    [*NPE-Tunnel1] quit
    [*NPE] commit

    After the configuration is complete, run the display mpls te tunnel-interface command to check the tunnel interface status. The command output shows that the status is up.

    The following example uses the command output on the SPE.

    [~SPE] display mpls te tunnel-interface Tunnel 1  
        Tunnel Name       : Tunnel1           
        Signalled Tunnel Name: -              
        Tunnel State Desc : CR-LSP is Up      
        Tunnel Attributes   :                 
        Active LSP          : Primary LSP     
        Traffic Switch      : -               
        Session ID          : 1               
        Ingress LSR ID      : 2.2.2.9               Egress LSR ID: 4.4.4.9        
        Admin State         : UP                    Oper State   : UP             
        Signaling Protocol  : Segment-Routing 
        FTid                : 1               
        Tie-Breaking Policy : None                  Metric Type  : TE             
        Bfd Cap             : None            
        Reopt               : Disabled              Reopt Freq   : -              
        Auto BW             : Disabled              Threshold    : -              
        Current Collected BW: -                     Auto BW Freq : -              
        Min BW              : -                     Max BW       : -              
        Offload             : Disabled              Offload Freq : -              
        Low Value           : -                     High Value   : -              
        Readjust Value      : -               
        Offload Explicit Path Name: -         
        Tunnel Group        : Primary         
        Interfaces Protected: -  
        Excluded IP Address : -               
        Referred LSP Count  : 0               
        Primary Tunnel      : -                     Pri Tunn Sum : -              
        Backup Tunnel       : -               
        Group Status        : Up                    Oam Status   : None           
        IPTN InLabel        : -                     Tunnel BFD Status : -         
        BackUp LSP Type     : None                  BestEffort   : -              
        Secondary HopLimit  : -               
        BestEffort HopLimit  : -              
        Secondary Explicit Path Name: -       
        Secondary Affinity Prop/Mask: 0x0/0x0 
        BestEffort Affinity Prop/Mask: -      
        IsConfigLspConstraint: -              
        Hot-Standby Revertive Mode:  Revertive
        Hot-Standby Overlap-path:  Disabled   
        Hot-Standby Switch State:  CLEAR      
        Bit Error Detection:  Disabled        
        Bit Error Detection Switch Threshold:  -                                  
        Bit Error Detection Resume Threshold:  -                                  
        Ip-Prefix Name    : -                 
        P2p-Template Name : - 
        PCE Delegate      : No                     LSP Control Status : Local control            
        Path Verification : No                
        Entropy Label     : -                 
        Associated Tunnel Group ID: -              Associated Tunnel Group Type: - 
        Auto BW Remain Time   : -                  Reopt Remain Time     : -      
        Segment-Routing Remote Label   : -    
        Binding Sid       : -                     Reverse Binding Sid : -         
        FRR Attr Source   : -                     Is FRR degrade down : -         
        Color             : -                 
    
        Primary LSP ID      : 2.2.2.9:1       
        LSP State           : UP                    LSP Type     : Primary        
        Configured Attribute Information:     
        Setup Priority      : 7                     Hold Priority: 7              
        IncludeAll          : 0x0             
        IncludeAny          : 0x0             
        ExcludeAny          : 0x0             
        Affinity Prop/Mask  : 0x0/0x0               Resv Style   :  SE 
        SidProtectType      : -            
        Actual Attribute Information:         
        Setup Priority      : 7                     Hold Priority: 7              
        IncludeAll          : 0x0             
        IncludeAny          : 0x0 
        ExcludeAny          : 0x0             
        Configured Bandwidth Information:     
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0       
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0       
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0       
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0       
        Actual Bandwidth Information:         
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0       
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0       
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0       
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0       
        Explicit Path Name  : p1                               Hop Limit: -       
        Record Route        : -                     Record Label : -              
        Route Pinning       : -               
        FRR Flag            : -               
        IdleTime Remain     : -               
        BFD Status          : -               
        Soft Preemption     : -               
        Reroute Flag        : -               
        Pce Flag            : Normal          
        Path Setup Type     : EXPLICIT        
        Create Modify LSP Reason: - 
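The explicit path p1 shows the source-routing model at work: the ingress pushes the adjacency SIDs as an MPLS label stack, and each node along the path pops the top label and forwards the packet over the matching adjacency. The following Python toy model walks a packet along the SIDs configured above; it is a sketch of the forwarding logic, not device behavior.

```python
# Adjacency SIDs from this example: 330121 (SPE->P) and 330120 (P->NPE) carry
# the SPE's tunnel; 330220 (NPE->P) and 330221 (P->SPE) carry the NPE's tunnel.
adj_sids = {
    "SPE": {330121: "P"},
    "P":   {330221: "SPE", 330120: "NPE"},
    "NPE": {330220: "P"},
}

def forward(ingress: str, label_stack: list) -> str:
    """Walk a packet along an SR-MPLS explicit path; each node pops the top
    adjacency SID and crosses the matching link. Returns the egress node."""
    node, stack = ingress, list(label_stack)
    while stack:
        top = stack.pop(0)
        node = adj_sids[node][top]
    return node

# Explicit path p1 on the SPE: next sid label 330121, then 330120.
print(forward("SPE", [330121, 330120]))  # NPE
```

Running the same model with the NPE's path (330220 then 330221) returns the packet to the SPE, matching the reverse tunnel configured on the NPE.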

  4. Configure SR-MPLS TE tunnel functions between the UPE and SPE.

    # Configure the UPE.

    [~UPE] mpls lsr-id 1.1.1.9
    [*UPE] mpls
    [*UPE-mpls] mpls te
    [*UPE-mpls] quit
    [*UPE] segment-routing
    [*UPE-segment-routing] quit
    [*UPE] isis 100
    [*UPE-isis-100] cost-style wide
    [*UPE-isis-100] traffic-eng level-1
    [*UPE-isis-100] segment-routing mpls
    [*UPE-isis-100] segment-routing global-block 153801 154000

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*UPE-isis-100] quit
    [*UPE] segment-routing
    [*UPE-segment-routing] ipv4 adjacency local-ip-addr 10.0.1.1 remote-ip-addr 10.0.1.2 sid 330222
    [*UPE-segment-routing] quit
    [*UPE] interface loopback 1
    [*UPE-LoopBack1] isis prefix-sid absolute 153900
    [*UPE-LoopBack1] quit
    [*UPE] explicit-path p2
    [*UPE-explicit-path-p2] next sid label 330222 type adjacency
    [*UPE-explicit-path-p2] quit
    [*UPE] interface tunnel10
    [*UPE-Tunnel10] ip address unnumbered interface loopback 1
    [*UPE-Tunnel10] tunnel-protocol mpls te
    [*UPE-Tunnel10] destination 2.2.2.9
    [*UPE-Tunnel10] mpls te tunnel-id 10
    [*UPE-Tunnel10] mpls te signal-protocol segment-routing
    [*UPE-Tunnel10] mpls te path explicit-path p2
    [*UPE-Tunnel10] quit
    [*UPE] commit

    # Configure the SPE.

    [~SPE] isis 100
    [~SPE-isis-100] cost-style wide
    [*SPE-isis-100] traffic-eng level-1
    [*SPE-isis-100] segment-routing mpls
    [*SPE-isis-100] segment-routing global-block 153801 154000

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*SPE-isis-100] quit
    [*SPE] segment-routing
    [*SPE-segment-routing] ipv4 adjacency local-ip-addr 10.0.1.2 remote-ip-addr 10.0.1.1 sid 330222
    [*SPE-segment-routing] quit
    [*SPE] interface loopback 10
    [*SPE-LoopBack10] isis prefix-sid absolute 153910
    [*SPE-LoopBack10] quit
    [*SPE] explicit-path p2
    [*SPE-explicit-path-p2] next sid label 330222 type adjacency
    [*SPE-explicit-path-p2] quit
    [*SPE] interface tunnel10
    [*SPE-Tunnel10] ip address unnumbered interface loopback 1
    [*SPE-Tunnel10] tunnel-protocol mpls te
    [*SPE-Tunnel10] destination 1.1.1.9
    [*SPE-Tunnel10] mpls te tunnel-id 10
    [*SPE-Tunnel10] mpls te signal-protocol segment-routing
    [*SPE-Tunnel10] mpls te path explicit-path p2
    [*SPE-Tunnel10] quit
    [*SPE] commit

    After the configuration is complete, run the display mpls te tunnel-interface command to check the tunnel interface status. The command output shows that the status is up.

    Run the display tunnel-info all command to check information about all tunnels, including SR-MPLS TE and SR-MPLS BE tunnels, which are respectively indicated by sr-te and srbe-lsp in the command output.

    The following example uses the command output on the SPE.

    [~SPE] display tunnel-info all   
    Tunnel ID            Type                Destination            Status
    -----------------------------------------------------------------------
    0x000000000300000001 sr-te               4.4.4.9                UP 
    0x000000000300000002 sr-te               1.1.1.9                UP 
    0x000000002900000006 srbe-lsp            1.1.1.9                UP 
    0x000000002900000008 srbe-lsp            3.3.3.9                UP 
    0x000000002900000009 srbe-lsp            4.4.4.9                UP
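In automated checks, the same status verification can be done by parsing the captured output. The following Python sketch flags any tunnel that is not UP; the column layout is assumed from the sample above, and tunnels_down is an illustrative helper, not a device API.

```python
def tunnels_down(output: str) -> list:
    """Return (tunnel_id, type, destination) for every tunnel whose status
    is not UP in a 'display tunnel-info all' capture."""
    down = []
    for line in output.splitlines():
        parts = line.split()
        # Data rows have four columns and a hexadecimal tunnel ID.
        if len(parts) == 4 and parts[0].startswith("0x"):
            tid, ttype, dest, status = parts
            if status != "UP":
                down.append((tid, ttype, dest))
    return down

sample = """
0x000000000300000001 sr-te               4.4.4.9                UP
0x000000002900000006 srbe-lsp            1.1.1.9                UP
"""
print(tunnels_down(sample))  # [] when every tunnel is UP
```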

  5. Create VPN instances on the UPE, SPE, and NPE.

    # Configure the UPE.

    [~UPE] ip vpn-instance vpn1
    [*UPE-vpn-instance-vpn1] ipv4-family
    [*UPE-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
    [*UPE-vpn-instance-vpn1-af-ipv4] vpn-target 2:2 both evpn
    [*UPE-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
    [*UPE-vpn-instance-vpn1-af-ipv4] quit
    [*UPE-vpn-instance-vpn1] quit
    [*UPE] commit

    # Configure the SPE.

    [~SPE] ip vpn-instance vpn1
    [*SPE-vpn-instance-vpn1] ipv4-family
    [*SPE-vpn-instance-vpn1-af-ipv4] route-distinguisher 200:1
    [*SPE-vpn-instance-vpn1-af-ipv4] vpn-target 2:2 both evpn
    [*SPE-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
    [*SPE-vpn-instance-vpn1-af-ipv4] quit
    [*SPE-vpn-instance-vpn1] quit
    [*SPE] commit

    # Configure the NPE.

    [~NPE] ip vpn-instance vpn1
    [*NPE-vpn-instance-vpn1] ipv4-family
    [*NPE-vpn-instance-vpn1-af-ipv4] route-distinguisher 300:1
    [*NPE-vpn-instance-vpn1-af-ipv4] vpn-target 2:2 both evpn
    [*NPE-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
    [*NPE-vpn-instance-vpn1-af-ipv4] quit
    [*NPE-vpn-instance-vpn1] quit
    [*NPE] commit

  6. Bind access-side interfaces to the VPN instances on the UPE and NPE.

    # Configure the UPE.

    [~UPE] interface GigabitEthernet 1/0/0
    [*UPE-GigabitEthernet1/0/0] ip binding vpn-instance vpn1
    [*UPE-GigabitEthernet1/0/0] ip address 172.16.1.2 255.255.255.0
    [*UPE-GigabitEthernet1/0/0] quit
    [*UPE] commit

    # Configure the NPE.

    [~NPE] interface GigabitEthernet 1/0/0
    [*NPE-GigabitEthernet1/0/0] ip binding vpn-instance vpn1
    [*NPE-GigabitEthernet1/0/0] ip address 172.17.1.2 255.255.255.0
    [*NPE-GigabitEthernet1/0/0] quit
    [*NPE] commit

  7. Configure a default static VPN route on the SPE.

    [~SPE] ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0
    [*SPE] commit

  8. Configure a route-policy on the NPE to prevent the NPE from receiving default routes.

    [~NPE] ip ip-prefix default index 10 permit 0.0.0.0 0
    [*NPE] route-policy SPE deny node 10
    [*NPE-route-policy] if-match ip-prefix default
    [*NPE-route-policy] quit
    [*NPE] route-policy SPE permit node 20
    [*NPE-route-policy] quit
    [*NPE] ip vpn-instance vpn1
    [*NPE-vpn-instance-vpn1] ipv4-family
    [*NPE-vpn-instance-vpn1-af-ipv4] import route-policy SPE evpn
    [*NPE-vpn-instance-vpn1-af-ipv4] quit
    [*NPE-vpn-instance-vpn1] quit
    [*NPE] commit
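The route-policy behaves like a two-node filter: node 10 denies any route matching the default ip-prefix, and node 20 permits everything else. A minimal Python sketch of that decision (import_allowed is an illustrative helper, not device code):

```python
import ipaddress

def import_allowed(prefix: str) -> bool:
    """Emulates route-policy SPE: node 10 denies routes matching the
    'default' ip-prefix (0.0.0.0/0), node 20 permits all other routes."""
    return ipaddress.ip_network(prefix) != ipaddress.ip_network("0.0.0.0/0")

print(import_allowed("0.0.0.0/0"))      # False: default route filtered
print(import_allowed("172.16.1.0/24"))  # True: specific route imported
```

Applying this policy as an EVPN import policy on the NPE keeps the SPE-advertised default route out of the NPE's VPN routing table, so that only the UPE imports it.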

  9. Establish a BGP EVPN peer relationship between the UPE and SPE and also between the SPE and NPE. In addition, specify the UPE as a UPE peer on the SPE.

    # Configure the UPE.

    [~UPE] bgp 100
    [*UPE-bgp] peer 2.2.2.10 as-number 100
    [*UPE-bgp] peer 2.2.2.10 connect-interface LoopBack1
    [*UPE-bgp] l2vpn-family evpn
    [*UPE-bgp-af-evpn] peer 2.2.2.10 enable
    [*UPE-bgp-af-evpn] quit
    [*UPE-bgp] ipv4-family vpn-instance vpn1
    [*UPE-bgp-vpn1] advertise l2vpn evpn
    [*UPE-bgp-vpn1] import-route direct
    [*UPE-bgp-vpn1] quit
    [*UPE-bgp] quit
    [*UPE] commit

    # Configure the SPE.

    [~SPE] bgp 100
    [*SPE-bgp] peer 1.1.1.9 as-number 100
    [*SPE-bgp] peer 1.1.1.9 connect-interface LoopBack10
    [*SPE-bgp] peer 4.4.4.9 as-number 100
    [*SPE-bgp] peer 4.4.4.9 connect-interface LoopBack1
    [*SPE-bgp] l2vpn-family evpn
    [*SPE-bgp-af-evpn] undo policy vpn-target
    [*SPE-bgp-af-evpn] peer 1.1.1.9 enable
    [*SPE-bgp-af-evpn] peer 1.1.1.9 upe
    [*SPE-bgp-af-evpn] peer 4.4.4.9 enable
    [*SPE-bgp-af-evpn] quit
    [*SPE-bgp] ipv4-family vpn-instance vpn1
    [*SPE-bgp-vpn1] advertise l2vpn evpn
    [*SPE-bgp-vpn1] network 0.0.0.0 0
    [*SPE-bgp-vpn1] quit
    [*SPE-bgp] quit
    [*SPE] commit

    # Configure the NPE.

    [~NPE] bgp 100
    [*NPE-bgp] peer 2.2.2.9 as-number 100
    [*NPE-bgp] peer 2.2.2.9 connect-interface LoopBack1
    [*NPE-bgp] l2vpn-family evpn
    [*NPE-bgp-af-evpn] peer 2.2.2.9 enable
    [*NPE-bgp-af-evpn] quit
    [*NPE-bgp] ipv4-family vpn-instance vpn1
    [*NPE-bgp-vpn1] advertise l2vpn evpn
    [*NPE-bgp-vpn1] import-route direct
    [*NPE-bgp-vpn1] quit
    [*NPE-bgp] quit
    [*NPE] commit

  10. Establish an EBGP peer relationship between CE1 and the UPE and also between CE2 and the NPE.

    # Configure CE1.

    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ip address 10.11.1.1 32
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ip address 172.16.1.1 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 172.16.1.2 as-number 100
    [*CE1-bgp] network 10.11.1.1 32
    [*CE1-bgp] quit
    [*CE1] commit

    # Configure the UPE.

    [~UPE] bgp 100
    [*UPE-bgp] ipv4-family vpn-instance vpn1
    [*UPE-bgp-vpn1] peer 172.16.1.1 as-number 65410
    [*UPE-bgp-vpn1] commit
    [~UPE-bgp-vpn1] quit

    # Configure CE2.

    [~CE2] interface loopback 1
    [*CE2-LoopBack1] ip address 10.22.1.1 32
    [*CE2-LoopBack1] quit
    [*CE2] interface gigabitethernet1/0/0
    [*CE2-GigabitEthernet1/0/0] ip address 172.17.1.1 24
    [*CE2-GigabitEthernet1/0/0] quit
    [*CE2] bgp 65420
    [*CE2-bgp] peer 172.17.1.2 as-number 100
    [*CE2-bgp] network 10.22.1.1 32
    [*CE2-bgp] quit
    [*CE2] commit

    # Configure the NPE.

    [~NPE] bgp 100
    [*NPE-bgp] ipv4-family vpn-instance vpn1
    [*NPE-bgp-vpn1] peer 172.17.1.1 as-number 65420
    [*NPE-bgp-vpn1] commit
    [~NPE-bgp-vpn1] quit

    After the configuration is complete, run the display bgp vpnv4 vpn-instance peer command on each PE to check whether a BGP peer relationship has been established between CE1 and the UPE and also between CE2 and the NPE. If the Established state is displayed in the command output, the BGP peer relationship has been established successfully.

    The following example uses the peer relationship between CE1 and the UPE.

    [~UPE] display bgp vpnv4 vpn-instance vpn1 peer
     
     BGP local router ID : 10.0.1.1     
     Local AS number : 100
    
     VPN-Instance vpn1, Router ID 10.0.1.1:
     Total number of peers : 1                 Peers in established state : 1
    
      Peer                             V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv 
      172.16.1.1                       4       65410        5        6     0 00:01:15 Established        1

  11. Configure a tunnel policy on each PE to allow VPN routes to preferentially select SR-MPLS TE tunnels for forwarding.

    # Configure the UPE.

    [~UPE] tunnel-policy p1
    [*UPE-tunnel-policy-p1] tunnel select-seq sr-te load-balance-number 1
    [*UPE-tunnel-policy-p1] quit
    [*UPE] ip vpn-instance vpn1 
    [*UPE-vpn-instance-vpn1] tnl-policy p1 evpn
    [*UPE-vpn-instance-vpn1] quit
    [*UPE] commit

    # Configure the SPE.

    [~SPE] tunnel-policy p1
    [*SPE-tunnel-policy-p1] tunnel select-seq sr-te load-balance-number 1
    [*SPE-tunnel-policy-p1] quit

    [*SPE] ip vpn-instance vpn1 
    [*SPE-vpn-instance-vpn1] tnl-policy p1 evpn
    [*SPE-vpn-instance-vpn1] quit
    [*SPE] commit

    # Configure the NPE.

    [~NPE] tunnel-policy p1
    [*NPE-tunnel-policy-p1] tunnel select-seq sr-te load-balance-number 1
    [*NPE-tunnel-policy-p1] quit
    [*NPE] ip vpn-instance vpn1 
    [*NPE-vpn-instance-vpn1] tnl-policy p1 evpn
    [*NPE-vpn-instance-vpn1] quit
    [*NPE] commit
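The tunnel policy makes VPN traffic prefer SR-MPLS TE tunnels: the device walks the select-seq list in order and binds up to load-balance-number UP tunnels of the first matching type; if no sr-te tunnel to the destination is UP, traffic falls back to other LSPs such as the SR-MPLS BE path. The following Python sketch is a simplified model of that selection, not the device's actual algorithm.

```python
def select_tunnels(tunnels, destination, select_seq=("sr-te",), load_balance_number=1):
    """Toy model of 'tunnel select-seq sr-te load-balance-number 1': take up
    to N UP tunnels of the first type in the preference list that has a
    usable tunnel to the destination."""
    for ttype in select_seq:
        chosen = [t for t in tunnels
                  if t["type"] == ttype and t["dest"] == destination and t["status"] == "UP"]
        if chosen:
            return chosen[:load_balance_number]
    return []  # nothing matched the policy; fallback logic would apply here

tunnels = [
    {"type": "srbe-lsp", "dest": "4.4.4.9", "status": "UP"},
    {"type": "sr-te",    "dest": "4.4.4.9", "status": "UP"},
]
print(select_tunnels(tunnels, "4.4.4.9"))  # the sr-te tunnel is preferred
```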

    Run the display bgp evpn all routing-table command on the NPE. The command output shows EVPN routes received from the UPE.

    [~NPE] display bgp evpn all routing-table
    
     Local AS number : 100      
    
     BGP Local router ID is 10.2.1.2                                      
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,          
                   h - history,  i - internal, s - suppressed, S - Stale  
                   Origin : i - IGP, e - EGP, ? - incomplete              
    
    
     EVPN address family:       
     Number of Ip Prefix Routes: 5                                        
     Route Distinguisher: 100:1 
           Network(EthTagId/IpPrefix/IpPrefixLen)                 NextHop 
     *>i   0:172.16.1.0:24                                        2.2.2.9 
     *>i   0:10.11.1.1:32                                         2.2.2.9 
     Route Distinguisher: 300:1 
           Network(EthTagId/IpPrefix/IpPrefixLen)                 NextHop 
     *>    0:172.17.1.0:24                                        0.0.0.0 
     *                                                            172.17.1.1                  
     *>    0:10.22.1.1:32                                         172.17.1.1

    Run the display ip routing-table vpn-instance command on the NPE. The command output shows that routes have been added to the IP routing table of the corresponding VPN instance.

    [~NPE] display ip routing-table vpn-instance vpn1 
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route                                              
    ------------------------------------------------------------------------------   
    Routing Table : vpn1          
             Destinations : 8        Routes : 8                                      
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface     
    
          10.11.1.1/32  IBGP    255  0             RD  2.2.2.9         Tunnel1       
          10.22.1.1/32  EBGP    255  0             RD  172.17.1.1      GigabitEthernet1/0/0 
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0   
         172.16.1.0/24  IBGP    255  0             RD  2.2.2.9         Tunnel1       
         172.17.1.0/24  Direct  0    0             D   172.17.1.2      GigabitEthernet1/0/0 
         172.17.1.2/32  Direct  0    0             D   127.0.0.1       GigabitEthernet1/0/0 
       172.17.1.255/32  Direct  0    0             D   127.0.0.1       GigabitEthernet1/0/0 
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0

    Run the display bgp evpn all routing-table command on the UPE. The command output shows the default EVPN route received from the SPE.

    [~UPE] display bgp evpn all routing-table
    
     Local AS number : 100        
    
     BGP Local router ID is 10.0.1.1    
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path, 
                   h - history,  i - internal, s - suppressed, S - Stale             
                   Origin : i - IGP, e - EGP, ? - incomplete                         
    
    
     EVPN address family:         
     Number of Ip Prefix Routes: 4       
     Route Distinguisher: 100:1   
           Network(EthTagId/IpPrefix/IpPrefixLen)                 NextHop            
     *>    0:172.16.1.0:24                                        0.0.0.0  
     *                                                            172.16.1.1          
     *>    0:10.11.1.1:32                                         172.16.1.1         
     Route Distinguisher: 200:1   
           Network(EthTagId/IpPrefix/IpPrefixLen)                 NextHop            
     *>i   0:0.0.0.0:0                                            2.2.2.10 

    Run the display ip routing-table vpn-instance command on the UPE. The command output shows that routes have been added to the IP routing table of the corresponding VPN instance.

    [~UPE] display ip routing-table vpn-instance vpn1 
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route                                              
    ------------------------------------------------------------------------------   
    Routing Table : vpn1          
             Destinations : 7        Routes : 7                                      
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface     
    
            0.0.0.0/0   IBGP    255  0             RD  2.2.2.10        Tunnel10      
          10.11.1.1/32  EBGP    255  0             RD  172.16.1.1      GigabitEthernet1/0/0
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0   
         172.16.1.0/24  Direct  0    0             D   172.16.1.2      GigabitEthernet1/0/0 
         172.16.1.2/32  Direct  0    0             D   127.0.0.1       GigabitEthernet1/0/0 
       172.16.1.255/32  Direct  0    0             D   127.0.0.1       GigabitEthernet1/0/0 
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0

    The IP routing table of the VPN instance on the UPE contains a default route (0.0.0.0/0), and this route recurses to the outbound interface Tunnel10.
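    When this check is scripted rather than performed interactively, the routing-table text can be parsed for the expected entry. The following is a minimal Python sketch; the parsing logic and embedded sample text are illustrative, not a Huawei API:

```python
def find_route(output: str, prefix: str):
    """Parse 'display ip routing-table' text and return (Proto, Interface)
    for the given Destination/Mask entry, or None if it is absent."""
    for line in output.splitlines():
        fields = line.split()
        # Data rows start with the Destination/Mask column.
        if fields and fields[0] == prefix:
            return fields[1], fields[-1]  # Proto and outbound Interface
    return None

# Sample rows copied from the command output above.
sample = """
        0.0.0.0/0   IBGP    255  0             RD  2.2.2.10        Tunnel10
      10.11.1.1/32  EBGP    255  0             RD  172.16.1.1      GigabitEthernet1/0/0
"""
proto, ifname = find_route(sample, "0.0.0.0/0")
```

A verification script would then assert that the default route exists and recurses to Tunnel10.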

  12. Verify the configuration.

    Run the display ip routing-table command on the CE to check the routes to the peer CE.

    The command output on CE1 shows that CE1 has only a default route (0.0.0.0/0), not a specific route to CE2.

    [~CE1] display ip routing-table  
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route          
    ------------------------------------------------------------------------------ 
    Routing Table : _public_  
             Destinations : 9        Routes : 9                                    
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface   
    
            0.0.0.0/0   EBGP    255  0             RD  172.16.1.2      GigabitEthernet1/0/0     
          10.11.1.1/32  Direct  0    0             D   127.0.0.1       LoopBack1   
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0 
          127.0.0.1/32  Direct  0    0             D   127.0.0.1       InLoopBack0 
    127.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0 
         172.16.1.0/24  Direct  0    0             D   172.16.1.1      GigabitEthernet1/0/0     
         172.16.1.1/32  Direct  0    0             D   127.0.0.1       GigabitEthernet1/0/0     
       172.16.1.255/32  Direct  0    0             D   127.0.0.1       GigabitEthernet1/0/0     
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0 

    The command output on CE2 shows that CE2 has a specific route (10.11.1.1/32) to CE1.

    [~CE2] display ip routing-table   
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route          
    ------------------------------------------------------------------------------ 
    Routing Table : _public_  
             Destinations : 10       Routes : 10                                   
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface   
    
          10.11.1.1/32  EBGP    255  0             RD  172.17.1.2      GigabitEthernet1/0/0     
          10.22.2.2/32  Direct  0    0             D   127.0.0.1       LoopBack1   
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0 
          127.0.0.1/32  Direct  0    0             D   127.0.0.1       InLoopBack0 
    127.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0 
         172.16.1.0/24  EBGP    255  0             RD  172.17.1.2      GigabitEthernet1/0/0     
         172.17.1.0/24  Direct  0    0             D   172.17.1.1      GigabitEthernet1/0/0     
         172.17.1.1/32  Direct  0    0             D   127.0.0.1       GigabitEthernet1/0/0     
       172.17.1.255/32  Direct  0    0             D   127.0.0.1       GigabitEthernet1/0/0     
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0

    The CEs can ping each other. For example, CE1 can ping CE2 (10.22.2.2).

    [~CE1] ping 10.22.2.2
      PING 10.22.2.2: 56  data bytes, press CTRL_C to break   
        Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=48 ms     
        Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=36 ms     
        Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=32 ms     
        Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=30 ms     
        Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=35 ms     
    
      --- 10.22.2.2 ping statistics ---                       
        5 packet(s) transmitted                               
        5 packet(s) received                                  
        0.00% packet loss                                     
        round-trip min/avg/max = 30/36/48 ms

Configuration Files

  • UPE configuration file
    #
    sysname UPE
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 100:1
      apply-label per-instance
      vpn-target 2:2 export-extcommunity evpn
      vpn-target 2:2 import-extcommunity evpn
      tnl-policy p1 evpn
      evpn mpls routing-enable
    #               
    mpls lsr-id 1.1.1.9
    #
    mpls
     mpls te
    #
    explicit-path p2
     next sid label 330122 type adjacency index 1
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.0.1.1 remote-ip-addr 10.0.1.2 sid 330122
    #
    isis 100
     is-level level-1
     cost-style wide
     network-entity 00.1111.1111.0000.00
     traffic-eng level-1
     segment-routing mpls
     segment-routing global-block 153801 154000
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip binding vpn-instance vpn1
     ip address 172.16.1.2 255.255.255.0
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.0.1.1 255.255.255.0
     isis enable 100
    #
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 100
     isis prefix-sid absolute 153900
    #
    interface Tunnel10
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 2.2.2.10
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 10
     mpls te path explicit-path p2
    #
    bgp 100
     peer 2.2.2.10 as-number 100
     peer 2.2.2.10 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 2.2.2.10 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      peer 172.16.1.1 as-number 65410
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 2.2.2.10 enable
    #
    tunnel-policy p1
     tunnel select-seq sr-te load-balance-number 1
    #
    return
  • SPE configuration file

    #
    sysname SPE
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 200:1
      apply-label per-instance
      vpn-target 2:2 export-extcommunity evpn
      vpn-target 2:2 import-extcommunity evpn
      tnl-policy p1 evpn
      evpn mpls routing-enable
    #
    mpls lsr-id 2.2.2.9
    #
    mpls
     mpls te
    #
    explicit-path p1
     next sid label 330121 type adjacency index 1
     next sid label 330120 type adjacency index 2
    #
    explicit-path p2
     next sid label 330222 type adjacency index 1
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.1.1.1 remote-ip-addr 10.1.1.2 sid 330121
     ipv4 adjacency local-ip-addr 10.0.1.2 remote-ip-addr 10.0.1.1 sid 330222
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.1111.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    isis 100
     is-level level-1
     cost-style wide
     network-entity 00.1111.1111.0001.00
     traffic-eng level-1
     segment-routing mpls
     segment-routing global-block 153801 154000
    #
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.0.1.2 255.255.255.0
     isis enable 100
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153711
    #               
    interface LoopBack10
     ip address 2.2.2.10 255.255.255.255
     isis enable 100
     isis prefix-sid absolute 153910
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 4.4.4.9
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
     mpls te path explicit-path p1 
    #
    interface Tunnel10
     ip address unnumbered interface LoopBack10
     tunnel-protocol mpls te
     destination 1.1.1.9
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 10
     mpls te path explicit-path p2 
    #
    bgp 100
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack10
     peer 4.4.4.9 as-number 100
     peer 4.4.4.9 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
      peer 4.4.4.9 enable
     #
     ipv4-family vpn-instance vpn1
      network 0.0.0.0
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 1.1.1.9 enable
      peer 1.1.1.9 upe
      peer 4.4.4.9 enable
    #
    ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0
    #
    tunnel-policy p1
     tunnel select-seq sr-te load-balance-number 1
    #
    return
  • P configuration file

    #
    sysname P
    #
    mpls lsr-id 3.3.3.9
    #
    mpls
     mpls te
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.1.1.2 remote-ip-addr 10.1.1.1 sid 330221
     ipv4 adjacency local-ip-addr 10.2.1.1 remote-ip-addr 10.2.1.2 sid 330120
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.2222.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
     isis enable 1
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153721
    #
    return
  • NPE configuration file

    #
    sysname NPE
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 300:1
      import route-policy SPE evpn
      apply-label per-instance
      vpn-target 2:2 export-extcommunity evpn
      vpn-target 2:2 import-extcommunity evpn
      tnl-policy p1 evpn
      evpn mpls routing-enable
    #
    mpls lsr-id 4.4.4.9
    #
    mpls
     mpls te
    #
    explicit-path p1
     next sid label 330220 type adjacency index 1
     next sid label 330221 type adjacency index 2
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.2.1.2 remote-ip-addr 10.2.1.1 sid 330220
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.1111.1111.3333.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 153616 153800
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip binding vpn-instance vpn1
     ip address 172.17.1.2 255.255.255.0
    #               
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 153731
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 2.2.2.9
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
     mpls te path explicit-path p1 
    #
    bgp 100
     private-4-byte-as enable
     peer 2.2.2.9 as-number 100
     peer 2.2.2.9 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 2.2.2.9 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
      peer 172.17.1.1 as-number 65420
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 2.2.2.9 enable
    #
    ip ip-prefix default index 10 permit 0.0.0.0 0
    #
    route-policy SPE deny node 10
     if-match ip-prefix default
    #
    route-policy SPE permit node 20
    #
    tunnel-policy p1
     tunnel select-seq sr-te load-balance-number 1
    #
    return
  • CE1 configuration file
    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 172.16.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.11.1.1 255.255.255.255
    #
    bgp 65410
     peer 172.16.1.2 as-number 100
     #
     ipv4-family unicast
      undo synchronization
      network 10.11.1.1 255.255.255.255
      peer 172.16.1.2 enable
    #
    return
  • CE2 configuration file
    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 172.17.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.22.2.2 255.255.255.255
    #
    bgp 65420
     peer 172.17.1.2 as-number 100
     #
     ipv4-family unicast
      undo synchronization
      network 10.22.2.2 255.255.255.255
      peer 172.17.1.2 enable
    #
    return

Example for Configuring the Controller to Run NETCONF to Deliver Configurations to Create an SR-MPLS TE Tunnel

This section provides an example for configuring the controller to run NETCONF to deliver configurations to create an SR-MPLS TE tunnel.

Networking Requirements

On the network shown in Figure 1-2673, a customer wants to establish a tunnel and an LSP from PE1 to PE2. The SR protocol is used for path generation and data forwarding, with PE1 functioning as the ingress and PE2 as the egress. IS-IS neighbor relationships need to be established between the PEs and the P.
  • IS-IS assigns labels to each neighbor and collects network topology information. P1 runs BGP-LS to collect topology information and reports the information to the controller.
  • The controller computes a path based on the received information and delivers the corresponding path information to the ingress PE1 through PCEP.
  • The controller sends the tunnel configuration information to the ingress node PE1 through NETCONF.
  • The ingress node PE1 uses the delivered tunnel configurations and label stacks to establish an SR-MPLS TE tunnel. PE1 delegates the tunnel to the controller through PCE.
Figure 1-2673 Example for configuring the controller to run NETCONF to deliver configurations to create an SR-MPLS TE tunnel

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Configuration Roadmap

The configuration roadmap is as follows:

  1. Assign an IP address and a mask to each interface, and configure a loopback interface address as an MPLS LSR ID on each node.

  2. Configure LSR IDs and enable MPLS TE globally and on interfaces on each LSR.

  3. Enable SR globally on each node.

  4. Configure a label allocation mode and a topology information collection mode. In this example, the forwarders assign labels.

  5. Configure the PCC and SR on each forwarder.

  6. Configure the PCE server on the controller.

Data Preparation

To complete the configuration, you need the following data:

  • IP addresses of interfaces, as shown in Figure 1-2673

  • IS-IS process ID: 1; IS-IS system ID of each node: converted from the loopback 0 address; IS-IS level: level-2

  • BGP-LS peer relationship between the controller and P1, as shown in Figure 1-2673.
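The conventional way to derive an IS-IS system ID from a loopback address is to zero-pad each octet to three digits, concatenate the resulting 12 digits, and regroup them into three 4-digit blocks. A minimal Python sketch of this conversion (the helper name is illustrative):

```python
def loopback_to_system_id(ip: str) -> str:
    """Convert a dotted-decimal loopback address to an IS-IS system ID.

    Each octet is zero-padded to three digits; the 12-digit string is
    then split into three 4-digit groups separated by dots.
    """
    digits = "".join(f"{int(octet):03d}" for octet in ip.split("."))
    return ".".join(digits[i:i + 4] for i in range(0, 12, 4))

# 1.1.1.1 -> 001001001001 -> 0010.0100.1001
```

Note that the configuration steps below use manually assigned NETs (for example, 10.0000.0000.0002.00) rather than system IDs derived this way; either approach is valid as long as system IDs are unique in the area.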

Procedure

  1. Assign an IP address and a mask to each interface.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 0
    [*PE1-LoopBack0] ip address 1.1.1.1 32
    [*PE1-LoopBack0] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.1.2.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 0
    [*P1-LoopBack0] ip address 2.2.2.2 32
    [*P1-LoopBack0] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.1.2.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.1.3.2 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] interface gigabitethernet3/0/0
    [*P1-GigabitEthernet3/0/0] ip address 10.2.1.1 24
    [*P1-GigabitEthernet3/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 0
    [*PE2-LoopBack0] ip address 3.3.3.3 32
    [*PE2-LoopBack0] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 10.1.3.1 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  2. Configure IS-IS to advertise the route to each network segment to which each interface is connected and to advertise the host route to each loopback address that is used as an LSR ID.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] network-entity 10.0000.0000.0002.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 0
    [*PE1-LoopBack0] isis enable 1
    [*PE1-LoopBack0] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-2
    [*P1-isis-1] network-entity 10.0000.0000.0003.00
    [*P1-isis-1] quit
    [*P1] interface loopback 0
    [*P1-LoopBack0] isis enable 1
    [*P1-LoopBack0] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] interface gigabitethernet3/0/0
    [*P1-GigabitEthernet3/0/0] isis enable 1
    [*P1-GigabitEthernet3/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] network-entity 10.0000.0000.0004.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 0
    [*PE2-LoopBack0] isis enable 1
    [*PE2-LoopBack0] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] isis enable 1
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  3. Configure PCE on the forwarders and controller. For configuration details, see Configuration Files in this section.
  4. Configure basic MPLS functions and enable MPLS TE.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.1
    [*PE1] mpls
    [*PE1-mpls] mpls te
    [*PE1-mpls] quit
    [*PE1] interface gigabitethernet 1/0/0
    [*PE1-GigabitEthernet1/0/0] mpls
    [*PE1-GigabitEthernet1/0/0] mpls te
    [*PE1-GigabitEthernet1/0/0] commit
    [~PE1-GigabitEthernet1/0/0] quit

    Except for the LSR IDs, the configurations on P1 and PE2 are the same as that on PE1. The configuration details are not provided.

  5. Enable SR globally on each node.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] commit

    The configurations on P1 and PE2 are the same as the configuration on PE1. The configuration details are not provided.

  6. Configure a label allocation mode and a topology information collection mode. In this example, the forwarders assign labels.

    • Enable IS-IS SR-MPLS TE.

      [~PE1] isis 1
      [~PE1-isis-1] cost-style wide
      [*PE1-isis-1] traffic-eng level-2
      [*PE1-isis-1] segment-routing mpls
      [*PE1-isis-1] bgp-ls enable level-2
      [*PE1-isis-1] commit
      [~PE1-isis-1] quit

      The configurations on P1 and PE2 are the same as the configuration on PE1. The configuration details are not provided.

    • Configure the BGP-LS route advertisement capability on P1.

      # Enable BGP-LS on P1 and establish a BGP-LS peer relationship with the controller.

      [~P1] bgp 100
      [*P1-bgp] peer 10.2.1.2 as-number 100
      [*P1-bgp] link-state-family unicast
      [*P1-bgp-af-ls] peer 10.2.1.2 enable
      [*P1-bgp-af-ls] commit
      [~P1-bgp-af-ls] quit
      [~P1-bgp] quit

      # Enable BGP-LS on the controller and establish a BGP-LS peer relationship with P1.

      [~Controller] bgp 100
      [*Controller-bgp] peer 10.2.1.1 as-number 100
      [*Controller-bgp] link-state-family unicast
      [*Controller-bgp-af-ls] peer 10.2.1.1 enable
      [*Controller-bgp-af-ls] commit
      [~Controller-bgp-af-ls] quit
      [~Controller-bgp] quit

  7. The controller sends the tunnel configuration information to PE1 through NETCONF.

    The detailed tunnel configuration delivered by the controller through NETCONF is as follows:

    [~PE1] interface tunnel1
    [*PE1-Tunnel1] ip address unnumbered interface loopback 0
    [*PE1-Tunnel1] tunnel-protocol mpls te
    [*PE1-Tunnel1] destination 3.3.3.3
    [*PE1-Tunnel1] mpls te tunnel-id 1
    [*PE1-Tunnel1] mpls te signal-protocol segment-routing
    [*PE1-Tunnel1] mpls te pce delegate
    [*PE1-Tunnel1] quit
    [*PE1] commit

  8. Verify the configuration.

    Run the display mpls te tunnel command on PE1 to view SR-MPLS TE tunnel information.

    [~PE1] display mpls te tunnel
    * means the LSP is detour LSP
    -------------------------------------------------------------------------------
    Ingress LsrId   Destination     LSPID In/OutLabel     R Tunnel-name
    -------------------------------------------------------------------------------
    1.1.1.1         3.3.3.3         21    -/330000        I Tunnel1
    -------------------------------------------------------------------------------
    R: Role, I: Ingress, T: Transit, E: Egress

    Run the display mpls te tunnel path command on PE1 to view path information on the SR-MPLS TE tunnel.

    [~PE1] display mpls te tunnel path
     Tunnel Interface Name : Tunnel1
     Lsp ID : 1.1.1.1 :1 :21
     Hop Information
      Hop 0 Label 330002 NAI 10.1.2.1:10.1.2.2
      Hop 1 Label 330002 NAI 10.1.3.2:10.1.3.1

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    mpls lsr-id 1.1.1.1
    #
    mpls
     mpls te
    #
    pce-client
     capability segment-routing
     connect-server 10.2.1.2
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     bgp-ls enable level-2
     network-entity 10.0000.0000.0002.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.2.1 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface LoopBack0
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
    #
     interface Tunnel1
     ip address unnumbered interface LoopBack0
     tunnel-protocol mpls te
     destination 3.3.3.3
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
     mpls te pce delegate
    # 
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.2
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     bgp-ls enable level-2
     network-entity 10.0000.0000.0003.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.2.2 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.1.3.2 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface GigabitEthernet3/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    interface LoopBack0
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
    # 
    bgp 100
     peer 10.2.1.2 as-number 100
     #
     ipv4-family unicast
      undo synchronization 
      peer 10.2.1.2 enable
     #
     link-state-family unicast
      peer 10.2.1.2 enable
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    mpls lsr-id 3.3.3.3
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     bgp-ls enable level-2
     network-entity 10.0000.0000.0004.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.1.3.1 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface LoopBack0 
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
    #
    return
  • Controller configuration file

    #
    sysname Controller
    #
    pce-server
     source-address 10.2.1.2
    #
    interface GigabitEthernet3/0/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
    #
    bgp 100
     peer 10.2.1.1 as-number 100
     #
     ipv4-family unicast
      undo synchronization
      peer 10.2.1.1 enable
     #
     link-state-family unicast
      peer 10.2.1.1 enable
    #
    return

Example for Configuring Static BFD for SR-MPLS TE

This section provides an example for configuring static BFD for SR-MPLS TE to implement rapid traffic switching if a tunnel fault occurs.

Networking Requirements

On the network shown in Figure 1-2674, a tunnel and an LSP need to be established from PE1 to PE2. The SR protocol is used for path generation and data forwarding. PE1 and PE2 are the path's ingress and egress, respectively. P1 collects network topology information and reports the information to the controller using BGP-LS. The controller calculates an LSP based on the collected topology information and delivers the path information to a third-party adapter. The third-party adapter then sends the path information to PE1.

You do not need to configure a PCE client (PCC) because the third-party adapter delivers the path information.

If a Huawei device connects to a non-Huawei device that does not support BFD, configure U-BFD to detect links.

Figure 1-2674 Configuring static BFD for SR-MPLS TE

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 1/0/1, respectively.


Configuration Roadmap

The configuration roadmap is as follows:

  1. Assign an IP address and a mask to each interface, and configure a loopback address as an MPLS LSR ID on each node.

  2. Configure LSR IDs and enable MPLS TE globally and on interfaces on each LSR.

  3. Enable SR globally on each node.

  4. Configure IS-IS TE on each node.

  5. Establish a BGP-LS peer relationship between P1 and the controller so that P1 can report network topology information to the controller using BGP-LS.

  6. Configure a tunnel interface on the ingress PE1, and specify an IP address, tunneling protocol, destination IP address, and tunnel bandwidth.

  7. Configure a BFD session on PE1 to monitor the primary SR-MPLS TE tunnel.

Data Preparation

To complete the configuration, you need the following data:

  • IP address of each interface, as shown in Figure 1-2674

  • IS-IS process ID: 1; IS-IS system ID of each node: converted from the loopback 0 address; IS-IS level: level-2

  • BGP-LS peer relationship between P1 and the controller, as shown in Figure 1-2674

  • Name of a BFD session

  • Local and remote discriminators of the BFD session
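For a static BFD session, the discriminators must cross-match: the local discriminator configured on one endpoint must equal the remote discriminator configured on the other, and vice versa. A small Python sketch of this consistency check (the endpoint dictionaries and values are illustrative):

```python
def discriminators_match(a, b):
    """Check that two statically configured BFD endpoints cross-reference
    each other's discriminators correctly (a.local == b.remote and
    a.remote == b.local)."""
    return a["local"] == b["remote"] and a["remote"] == b["local"]

# Illustrative values; e.g. "discriminator local 1" / "discriminator remote 2"
pe1 = {"local": 1, "remote": 2}
pe2 = {"local": 2, "remote": 1}
ok = discriminators_match(pe1, pe2)
```

A mismatch here is a common reason a static BFD session stays Down even though the underlying tunnel is Up.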

Procedure

  1. Assign an IP address and a mask to each interface.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 0
    [*PE1-LoopBack0] ip address 10.21.2.9 32
    [*PE1-LoopBack0] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.1.23.2 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 0
    [*P1-LoopBack0] ip address 10.31.2.9 32
    [*P1-LoopBack0] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.1.23.3 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.20.34.3 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] interface gigabitethernet1/0/1
    [*P1-GigabitEthernet1/0/1] ip address 10.7.2.10 24
    [*P1-GigabitEthernet1/0/1] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 0
    [*PE2-LoopBack0] ip address 10.41.2.9 32
    [*PE2-LoopBack0] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 10.20.34.4 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  2. Configure IS-IS to advertise the route to each network segment of each interface and to advertise the host route to each loopback address (used as an LSR ID).

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] network-entity 11.1111.1111.1111.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 0
    [*PE1-LoopBack0] isis enable 1
    [*PE1-LoopBack0] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-2
    [*P1-isis-1] network-entity 11.2222.2222.2222.00
    [*P1-isis-1] quit
    [*P1] interface loopback 0
    [*P1-LoopBack0] isis enable 1
    [*P1-LoopBack0] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] network-entity 11.3333.3333.3333.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 0
    [*PE2-LoopBack0] isis enable 1
    [*PE2-LoopBack0] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] isis enable 1
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  3. Establish a BGP-LS peer relationship between P1 and the controller.

    Establish a BGP-LS peer relationship between P1 and the controller so that P1 can report network topology information to the controller using BGP-LS. This example uses the configuration of P1. For controller configuration details, see Configuration Files in this section.

    [~P1] isis 1
    [*P1-isis-1] bgp-ls enable level-2
    [*P1-isis-1] quit
    [*P1] bgp 100
    [*P1-bgp] peer 10.7.2.9 as-number 100
    [*P1-bgp] link-state-family unicast
    [*P1-bgp-af-ls] peer 10.7.2.9 enable
    [*P1-bgp-af-ls] quit
    [*P1-bgp] quit
    [*P1] commit

  4. Configure basic MPLS functions and enable MPLS TE.

    # Configure PE1.

    [~PE1] mpls lsr-id 10.21.2.9
    [*PE1] mpls
    [*PE1-mpls] mpls te
    [*PE1-mpls] quit
    [*PE1] interface gigabitethernet 1/0/0
    [*PE1-GigabitEthernet1/0/0] mpls
    [*PE1-GigabitEthernet1/0/0] mpls te
    [*PE1-GigabitEthernet1/0/0] commit
    [~PE1-GigabitEthernet1/0/0] quit

    Except for the LSR IDs, the configurations on P1 and PE2 are the same as that on PE1.

  5. Enable SR globally on each node.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] commit

    The configurations on P1 and PE2 are similar to the configuration on PE1.

  6. Configure IS-IS TE on each node.

    # Configure PE1.

    [~PE1] isis 1
    [~PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-2
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] commit
    [~PE1-isis-1] quit

    The configurations on P1 and PE2 are the same as that on PE1.

  7. Configure a tunnel interface on PEs.

    # Configure PE1.

    [~PE1] interface Tunnel10
    [*PE1-Tunnel10] ip address unnumbered interface loopback 0
    [*PE1-Tunnel10] tunnel-protocol mpls te
    [*PE1-Tunnel10] destination 10.41.2.9
    [*PE1-Tunnel10] mpls te tunnel-id 1
    [*PE1-Tunnel10] mpls te signal-protocol segment-routing
    [*PE1-Tunnel10] commit
    [~PE1-Tunnel10] quit

    # Configure PE2.

    [~PE2] interface Tunnel10
    [*PE2-Tunnel10] ip address unnumbered interface loopback 0
    [*PE2-Tunnel10] tunnel-protocol mpls te
    [*PE2-Tunnel10] destination 10.21.2.9
    [*PE2-Tunnel10] mpls te tunnel-id 2
    [*PE2-Tunnel10] mpls te signal-protocol segment-routing
    [*PE2-Tunnel10] commit
    [~PE2-Tunnel10] quit

  8. Verify the configuration.

    After completing the configuration, run the display interface tunnel command on PE1. You can check that the tunnel interface is Up.

    Run the display mpls te tunnel command on each node to check MPLS TE tunnel establishment.

    [~PE1] display mpls te tunnel
    * means the LSP is detour LSP
    ------------------------------------------------------------------------------
    Ingress LsrId    Destination      LSPID   In/Out Label     R  Tunnel-name
    ------------------------------------------------------------------------------
    10.21.2.9        10.41.2.9        1       --/20            I  Tunnel10
    -------------------------------------------------------------------------------
    R: Role, I: Ingress, T: Transit, E: Egress
    [~PE2] display mpls te tunnel
    * means the LSP is detour LSP
    ------------------------------------------------------------------------------
    Ingress LsrId    Destination      LSPID   In/Out Label     R  Tunnel-name
    ------------------------------------------------------------------------------
    10.41.2.9        10.21.2.9        1       --/120           I  Tunnel10 
    -------------------------------------------------------------------------------
    R: Role, I: Ingress, T: Transit, E: Egress

  9. Configure BFD for SR-MPLS TE.

    # Configure a BFD session on PE1 to detect the SR-MPLS TE tunnel, and set the minimum intervals at which BFD packets are sent and received.

    [~PE1] bfd
    [*PE1-bfd] quit
    [*PE1] bfd pe1tope2 bind mpls-te interface Tunnel10
    [*PE1-bfd-lsp-session-pe1tope2] discriminator local 12
    [*PE1-bfd-lsp-session-pe1tope2] discriminator remote 21
    [*PE1-bfd-lsp-session-pe1tope2] min-tx-interval 100
    [*PE1-bfd-lsp-session-pe1tope2] min-rx-interval 100
    [*PE1-bfd-lsp-session-pe1tope2] commit
    [~PE1-bfd-lsp-session-pe1tope2] quit

    # Configure a BFD session on PE2 to detect the reverse SR-MPLS TE tunnel, and set the minimum intervals at which BFD packets are sent and received.

    [~PE2] bfd
    [*PE2-bfd] quit
    [*PE2] bfd pe2tope1 bind mpls-te interface Tunnel10
    [*PE2-bfd-lsp-session-pe2tope1] discriminator local 21
    [*PE2-bfd-lsp-session-pe2tope1] discriminator remote 12
    [*PE2-bfd-lsp-session-pe2tope1] min-tx-interval 100
    [*PE2-bfd-lsp-session-pe2tope1] min-rx-interval 100
    [*PE2-bfd-lsp-session-pe2tope1] commit
    [~PE2-bfd-lsp-session-pe2tope1] quit

    # After completing the configuration, run the display bfd session { all | discriminator discr-value | mpls-te interface interface-type interface-number } [ verbose ] command on PE1 and PE2. You can check that the BFD session is Up.

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    bfd
    #
    mpls lsr-id 10.21.2.9
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 11.1111.1111.1111.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.23.2 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface LoopBack0
     ip address 10.21.2.9 255.255.255.255
     isis enable 1
    #
    interface Tunnel10
     ip address unnumbered interface LoopBack0
     tunnel-protocol mpls te
     destination 10.41.2.9
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
    #
    bfd pe1tope2 bind mpls-te interface Tunnel10
     discriminator local 12
     discriminator remote 21
     min-tx-interval 100
     min-rx-interval 100
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 10.31.2.9
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     bgp-ls enable level-2
     network-entity 11.2222.2222.2222.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.23.3 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface GigabitEthernet1/0/1
     undo shutdown
     ip address 10.7.2.10 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.20.34.3 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface LoopBack0
     ip address 10.31.2.9 255.255.255.255
     isis enable 1
    #
    bgp 100
     peer 10.7.2.9 as-number 100
     #               
     ipv4-family unicast 
      undo synchronization 
      peer 10.7.2.9 enable
     # 
     link-state-family unicast 
      peer 10.7.2.9 enable
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    bfd
    #
    mpls lsr-id 10.41.2.9
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 11.3333.3333.3333.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.20.34.4 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface LoopBack0 
     ip address 10.41.2.9 255.255.255.255
     isis enable 1
    #
    interface Tunnel10
     ip address unnumbered interface LoopBack0
     tunnel-protocol mpls te
     destination 10.21.2.9
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 2
    #
    bfd pe2tope1 bind mpls-te interface Tunnel10
     discriminator local 21
     discriminator remote 12
     min-tx-interval 100
     min-rx-interval 100
    #
    return

Example for Configuring Dynamic BFD for SR-MPLS TE LSP

This section provides an example for configuring dynamic BFD for SR-MPLS TE LSP, which rapidly detects SR-MPLS TE LSP failures to protect traffic transmitted over the LSPs.

Networking Requirements

On the network shown in Figure 1-2675, a tunnel as well as primary and backup LSPs for the tunnel need to be established from PE1 to PE2. The SR protocol is used for path generation and data forwarding. PE2 collects network topology information and reports the information to the controller using BGP-LS. The controller uses the information to calculate the primary and backup LSPs and delivers the LSP information to a third-party adapter, which then forwards it to the ingress PE1.

Hot standby is enabled for the tunnel. If the primary LSP fails, traffic is switched to the backup LSP. After the primary LSP recovers, traffic is switched back.

You do not need to configure a path computation client (PCC) because the third-party adapter delivers the path information.

If a Huawei device connects to a non-Huawei device that does not support BFD, configure U-BFD to detect links.

Figure 1-2675 Networking diagram for configuring dynamic BFD for SR-MPLS TE LSP

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Configuration Roadmap

The configuration roadmap is as follows:

  1. Assign an IP address and a mask to each interface, and configure a loopback address as an MPLS LSR ID on each node.

  2. Configure LSR IDs and enable MPLS TE globally and on interfaces on each LSR.

  3. Enable SR globally on each node.

  4. Configure a label allocation mode and a topology information collection mode. In this example, the controller collects topology information and assigns labels to forwarders.

  5. Establish a BGP-LS peer relationship between PE2 and the controller so that PE2 can report network topology information to the controller using BGP-LS.

  6. Configure a tunnel interface on the ingress PE1, and specify an IP address, tunneling protocol, destination IP address, and tunnel bandwidth.

  7. Configure CR-LSP hot standby.

  8. Enable BFD on the ingress, configure BFD for MPLS TE, and set the minimum intervals at which BFD packets are sent and received and the local detection multiplier.

  9. Enable the egress to passively create a BFD session.

Data Preparation

To complete the configuration, you need the following data:

  • IP address of each interface, as shown in Figure 1-2675

  • IS-IS process ID: 1; IS-IS system ID of each node: converted from the loopback 0 address; IS-IS level: level-2

  • BGP-LS peer relationship between the controller and PE2, as shown in Figure 1-2675

  • Name of a BFD session

  • Local and remote discriminators of the BFD session
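
The data preparation above mentions deriving each IS-IS system ID from the loopback 0 address. One widely used convention (shown here for illustration only; the system IDs in this example are assigned manually) is to zero-pad each IPv4 octet to three digits and regroup the resulting twelve digits into three four-digit blocks. A minimal Python sketch:

```python
def loopback_to_system_id(ip: str) -> str:
    """Derive an IS-IS system ID from an IPv4 loopback address
    using the common zero-pad-and-regroup convention."""
    # Pad each octet to three digits: 10.21.2.9 -> "010021002009"
    digits = "".join(f"{int(octet):03d}" for octet in ip.split("."))
    # Regroup the 12 digits into three 4-digit blocks.
    return ".".join(digits[i:i + 4] for i in range(0, 12, 4))

print(loopback_to_system_id("10.21.2.9"))  # 0100.2100.2009
```

Appending the area ID in front and the 00 NSEL at the end yields the full network entity title (NET).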

Procedure

  1. Assign an IP address and a mask to each interface.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 0
    [*PE1-LoopBack0] ip address 1.1.1.1 32
    [*PE1-LoopBack0] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.2.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 0
    [*P1-LoopBack0] ip address 2.2.2.2 32
    [*P1-LoopBack0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.1.2.2 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] interface gigabitethernet3/0/0
    [*P1-GigabitEthernet3/0/0] ip address 10.1.3.2 24
    [*P1-GigabitEthernet3/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 0
    [*PE2-LoopBack0] ip address 3.3.3.3 32
    [*PE2-LoopBack0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.1 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 10.1.3.1 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

  2. Configure IS-IS to advertise the route to each network segment to which each interface is connected and to advertise the host route to each loopback address that is used as an LSR ID.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] network-entity 10.0000.0000.0002.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 0
    [*PE1-LoopBack0] isis enable 1
    [*PE1-LoopBack0] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] isis enable 1
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-2
    [*P1-isis-1] network-entity 10.0000.0000.0003.00
    [*P1-isis-1] quit
    [*P1] interface loopback 0
    [*P1-LoopBack0] isis enable 1
    [*P1-LoopBack0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] interface gigabitethernet3/0/0
    [*P1-GigabitEthernet3/0/0] isis enable 1
    [*P1-GigabitEthernet3/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] network-entity 10.0000.0000.0004.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 0
    [*PE2-LoopBack0] isis enable 1
    [*PE2-LoopBack0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] isis enable 1
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

  3. Establish a BGP-LS peer relationship between the controller and PE2.

    Establish a BGP-LS peer relationship between the controller and PE2 so that PE2 can report network topology information to the controller using BGP-LS. This example uses the configuration of PE2. For controller configuration details, see Configuration Files.

    [~PE2] isis 1
    [*PE2-isis-1] bgp-ls enable level-2
    [*PE2-isis-1] quit
    [*PE2] bgp 100
    [*PE2-bgp] peer 10.2.1.2 as-number 100
    [*PE2-bgp] link-state-family unicast
    [*PE2-bgp-af-ls] peer 10.2.1.2 enable
    [*PE2-bgp-af-ls] quit
    [*PE2-bgp] quit
    [*PE2] commit

  4. Configure basic MPLS functions and enable MPLS TE.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.1
    [*PE1] mpls
    [*PE1-mpls] mpls te
    [*PE1-mpls] quit
    [*PE1] interface gigabitethernet 1/0/0
    [*PE1-GigabitEthernet1/0/0] mpls
    [*PE1-GigabitEthernet1/0/0] mpls te
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] mpls
    [*PE1-GigabitEthernet2/0/0] mpls te
    [*PE1-GigabitEthernet2/0/0] commit
    [~PE1-GigabitEthernet2/0/0] quit

    The configurations on P1 and PE2 are similar to the configuration on PE1. For configuration details, see Configuration Files in this section.

  5. Enable SR globally on each node.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] commit

    The configurations on P1 and PE2 are similar to the configuration on PE1. The configuration details are not provided.

  6. Configure a label allocation mode and a topology information collection mode. In this example, the controller collects topology information and assigns labels to forwarders.

    # Configure PE1.

    [~PE1] isis 1
    [~PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-2
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] commit
    [~PE1-isis-1] quit

    The configurations on P1 and PE2 are similar to the configuration on PE1. The configuration details are not provided.

  7. Configure a tunnel interface and hot standby on the ingress PE1.

    # Configure PE1.

    [~PE1] interface tunnel1
    [*PE1-Tunnel1] ip address unnumbered interface loopback 0
    [*PE1-Tunnel1] tunnel-protocol mpls te
    [*PE1-Tunnel1] destination 3.3.3.3
    [*PE1-Tunnel1] mpls te tunnel-id 1
    [*PE1-Tunnel1] mpls te signal-protocol segment-routing
    [*PE1-Tunnel1] mpls te pce delegate
    [*PE1-Tunnel1] mpls te backup hot-standby
    [*PE1-Tunnel1] commit
    [~PE1-Tunnel1] quit

  8. Verify the configuration.

    After completing the configuration, run the display mpls te tunnel-interface command on PE1. The tunnel interface is Up.

    [~PE1] display mpls te tunnel-interface tunnel1
        Tunnel Name       : Tunnel1
        Signalled Tunnel Name: -
        Tunnel State Desc : CR-LSP is Up
        Tunnel Attributes   :     
        Active LSP          : Primary LSP                                       
        Traffic Switch      : - 
        Session ID          : 1
        Ingress LSR ID      : 1.1.1.1               Egress LSR ID: 3.3.3.3
        Admin State         : UP                    Oper State   : UP
        Signaling Protocol  : Segment-Routing
        FTid                : 1
        Tie-Breaking Policy : None                  Metric Type  : TE
        Bfd Cap             : None                  
        Reopt               : Disabled              Reopt Freq   : -              
        Inter-area Reopt    : Disabled 
        Auto BW             : Disabled              Threshold    : 0 percent
        Current Collected BW: 0 kbps                Auto BW Freq : 0
        Min BW              : 0 kbps                Max BW       : 0 kbps
        Offload             : Disabled              Offload Freq : - 
        Low Value           : -                     High Value   : - 
        Readjust Value      : - 
        Offload Explicit Path Name: -
        Tunnel Group        : Primary                                              
        Interfaces Protected: -
        Excluded IP Address : -
        Referred LSP Count  : 0  
        Primary Tunnel      : -                     Pri Tunn Sum : -              
        Backup Tunnel       : -                                                    
        Group Status        : Up                    Oam Status   : None             
        IPTN InLabel        : -                     Tunnel BFD Status : -                               
        BackUp LSP Type     : Hot-Standby           BestEffort   : -
        Secondary HopLimit  : -
        BestEffort HopLimit  : -
        Secondary Explicit Path Name: -
        Secondary Affinity Prop/Mask: 0x0/0x0
        BestEffort Affinity Prop/Mask: -  
        IsConfigLspConstraint: -
        Hot-Standby Revertive Mode:  Revertive
        Hot-Standby Overlap-path:  Disabled
        Hot-Standby Switch State:  CLEAR
        Bit Error Detection:  Disabled
        Bit Error Detection Switch Threshold:  -
        Bit Error Detection Resume Threshold:  -
        Ip-Prefix Name    : -
        P2p-Template Name : -
        PCE Delegate      : Active                LSP Control Status : Local control
        Path Verification : No
        Entropy Label     : None 
        Associated Tunnel Group ID: -             Associated Tunnel Group Type: -
        Auto BW Remain Time : -                   Reopt Remain Time  : -
        Segment-Routing Remote Label    : -
        Metric Inherit IGP : None
        Binding Sid       : -                     Reverse Binding Sid : - 
        FRR Attr Source   : -                     Is FRR degrade down : No
        Color             : - 
        
        Primary LSP ID      : 1.1.1.1:19
        LSP State           : UP                    LSP Type     : Primary
        Setup Priority      : 7                     Hold Priority: 7
        IncludeAll          : 0x0
        IncludeAny          : 0x0
        ExcludeAny          : 0x0
        Affinity Prop/Mask  : 0x0/0x0               Resv Style   :  SE
        SidProtectType      : - 
        Configured Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 10000           CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Actual Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 10000           CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Explicit Path Name  : -                                Hop Limit: -
        Record Route        : -                            Record Label : -
        Route Pinning       : -
        FRR Flag            : -
        IdleTime Remain     : -
        BFD Status          : -
        Soft Preemption     : -
        Reroute Flag        : -
        Pce Flag            : Normal
        Path Setup Type     : PCE
        Create Modify LSP Reason: -
           
        Backup LSP ID       : 1.1.1.1:46945
        IsBestEffortPath    : No
        LSP State           : UP                    LSP Type     : Hot-Standby
        Setup Priority      : 7                     Hold Priority: 7
        IncludeAll          : 0x0
        IncludeAny          : 0x0
        ExcludeAny          : 0x0
        Affinity Prop/Mask  : 0x0/0x0               Resv Style   :  SE
        Configured Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Actual Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Explicit Path Name  : -                                Hop Limit: -
        Record Route        : -                            Record Label : -
        Route Pinning       : -
        FRR Flag            : -
        IdleTime Remain     : -
        BFD Status          : -
        Soft Preemption     : -
        Reroute Flag        : -
        Pce Flag            : Normal
        Path Setup Type     : PCE
        Create Modify LSP Reason: -

    Run the display mpls te tunnel command on PE1 to view SR-MPLS TE tunnel information.

    [~PE1] display mpls te tunnel
    * means the LSP is detour LSP
    -------------------------------------------------------------------------------
    Ingress LsrId   Destination     LSPID In/OutLabel     R Tunnel-name
    -------------------------------------------------------------------------------
    -               -               -     101/101         T lsp
    1.1.1.1         3.3.3.3         21    -/330000        I Tunnel1
    1.1.1.1         3.3.3.3         26    -/330002        I Tunnel1
    -------------------------------------------------------------------------------
    R: Role, I: Ingress, T: Transit, E: Egress

    Run the display mpls te tunnel path command on PE1 to view path information on the SR-MPLS TE tunnel.

    [~PE1] display mpls te tunnel path
    Tunnel Interface Name : Tunnel1
    Lsp ID : 1.1.1.1 :1 :21
    Hop Information
    Hop 0 Label 330000 NAI 10.1.1.2
    
    Tunnel Interface Name : Tunnel1
    Lsp ID : 1.1.1.1 :1 :26
    Hop Information
    Hop 0 Label 330002 NAI 10.1.2.2
    Hop 1 Label 330002 NAI 10.1.3.1

  9. Enable BFD and configure BFD for MPLS TE on the ingress PE1.

    # Enable BFD for MPLS TE on the tunnel interface of PE1. Set the minimum intervals at which BFD packets are sent and received to 100 ms and the local detection multiplier to 3.

    [~PE1] bfd
    [*PE1-bfd] quit
    [*PE1] interface tunnel 1
    [*PE1-Tunnel1] mpls te bfd enable
    [*PE1-Tunnel1] mpls te bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3
    [*PE1-Tunnel1] commit
    [~PE1-Tunnel1] quit
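
The intervals and multiplier set above determine how quickly BFD declares the LSP down. Per RFC 5880, the detection time at one end is the negotiated receive interval (the larger of the local minimum receive interval and the remote minimum transmit interval) multiplied by the remote detection multiplier. A minimal sketch:

```python
def bfd_detection_time_ms(local_min_rx: int, remote_min_tx: int,
                          remote_detect_multiplier: int) -> int:
    """Approximate BFD detection time in milliseconds (RFC 5880):
    negotiated interval = max(local min-rx, remote min-tx),
    multiplied by the remote detection multiplier."""
    return max(local_min_rx, remote_min_tx) * remote_detect_multiplier

# With both ends sending and receiving at 100 ms and a multiplier of 3,
# as configured above, a failure is detected within about 300 ms.
print(bfd_detection_time_ms(100, 100, 3))  # 300
```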

  10. Enable the egress to passively create a BFD session.

    [~PE2] bfd
    [*PE2-bfd] mpls-passive
    [*PE2-bfd] commit
    [~PE2-bfd] quit

    # After completing the configuration, run the display bfd session mpls-te interface Tunnel 1 te-lsp command on PE1. The BFD session status is Up.

    [~PE1] display bfd session mpls-te interface Tunnel 1 te-lsp
    (w): State in WTR
    (*): State is invalid
    --------------------------------------------------------------------------------
    Local      Remote     PeerIpAddr      State     Type        InterfaceName 
    --------------------------------------------------------------------------------
    16399      16386      3.3.3.3         Up        D_TE_LSP    Tunnel1
    --------------------------------------------------------------------------------

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    bfd
    #
    mpls lsr-id 1.1.1.1
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-2 
     segment-routing mpls
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.1.2.1 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface LoopBack0
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack0
     tunnel-protocol mpls te
     destination 3.3.3.3
     mpls te signal-protocol segment-routing
     mpls te backup hot-standby
     mpls te tunnel-id 1
     mpls te pce delegate
     mpls te bfd enable
     mpls te bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3
    # 
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.2
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0003.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.1.2.2 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface GigabitEthernet3/0/0
     undo shutdown
     ip address 10.1.3.2 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface LoopBack0
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    bfd
     mpls-passive
    #
    mpls lsr-id 3.3.3.3
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     bgp-ls enable level-2
     network-entity 10.0000.0000.0004.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet3/0/0
     undo shutdown
     ip address 10.1.3.1 255.255.255.0
     isis enable 1
     mpls
     mpls te
    #
    interface LoopBack0 
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
    #
    bgp 100
     peer 10.2.1.2 as-number 100
     #               
     ipv4-family unicast 
      undo synchronization 
      peer 10.2.1.2 enable
     # 
     link-state-family unicast 
      peer 10.2.1.2 enable
    #
    return
  • Controller configuration file

    #
    sysname Controller
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0005.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet3/0/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
     isis enable 1
    #
    return

Example for Configuring TI-LFA FRR for a Loose SR-MPLS TE Tunnel

Topology-Independent Loop-Free Alternate (TI-LFA) FRR can be configured to enhance the reliability of a Segment Routing (SR) network.

Networking Requirements

On the network shown in Figure 1-2676, IS-IS is enabled. The cost of the link between Device C and Device D is 100, and the cost of other links is 10. Based on node SIDs, an SR-MPLS TE tunnel (DeviceA -> DeviceB -> DeviceE -> DeviceF) is established from DeviceA to DeviceF through static explicit paths.

The SR-MPLS TE tunnel established based on node SIDs is a loose one that supports TI-LFA FRR. TI-LFA FRR can be configured on DeviceB to provide local protection, enabling traffic to be quickly switched to the backup path (DeviceA -> DeviceB -> DeviceC -> DeviceD -> DeviceE -> DeviceF) when the link between DeviceB and DeviceE fails.
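
Given the link costs in the networking requirements, the primary and backup path costs can be checked by simple addition. A minimal sketch (node names abbreviated; only the links named in this example are modeled):

```python
# IS-IS link costs from the networking requirements:
# the DeviceC-DeviceD link costs 100, all other links cost 10.
cost = {("A", "B"): 10, ("B", "E"): 10, ("E", "F"): 10,
        ("B", "C"): 10, ("C", "D"): 100, ("D", "E"): 10}

def path_cost(path):
    """Sum the IS-IS costs along an ordered list of nodes."""
    return sum(cost[(a, b)] for a, b in zip(path, path[1:]))

print(path_cost(["A", "B", "E", "F"]))            # primary path: 30
print(path_cost(["A", "B", "C", "D", "E", "F"]))  # backup path: 140
```

Because the backup path is much more expensive, it is used only while the DeviceB-to-DeviceE link is down.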

Figure 1-2676 TI-LFA FRR for a loose SR-MPLS TE tunnel

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Precautions

In this example, TI-LFA FRR is configured on DeviceB to protect the link between DeviceB and DeviceE. On a live network, you are advised to configure TI-LFA FRR on all nodes in the SR domain.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the entire network to implement interworking between devices.

  2. Enable MPLS on the entire network and configure SR.

  3. Configure an explicit path on Device A and establish an SR-MPLS TE tunnel.

  4. Enable TI-LFA FRR and anti-microloop on Device B.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR ID of each device

  • SRGB range of each device

Procedure

  1. Configure interface IP addresses.

    # Configure Device A.

    <HUAWEI> system-view
    [~HUAWEI] sysname DeviceA
    [*HUAWEI] commit
    [~DeviceA] interface loopback 1
    [*DeviceA-LoopBack1] ip address 1.1.1.9 32
    [*DeviceA-LoopBack1] quit
    [*DeviceA] interface gigabitethernet1/0/0
    [*DeviceA-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*DeviceA-GigabitEthernet1/0/0] quit
    [*DeviceA] commit

    Repeat this step for the other devices. For configuration details, see Configuration Files in this section.

  2. Configure an IGP to implement interworking. IS-IS is used as an example.

    # Configure Device A.

    [~DeviceA] isis 1
    [*DeviceA-isis-1] is-level level-1
    [*DeviceA-isis-1] network-entity 10.0000.0000.0001.00
    [*DeviceA-isis-1] cost-style wide
    [*DeviceA-isis-1] quit
    [*DeviceA] interface loopback 1
    [*DeviceA-LoopBack1] isis enable 1
    [*DeviceA-LoopBack1] quit
    [*DeviceA] interface gigabitethernet1/0/0
    [*DeviceA-GigabitEthernet1/0/0] isis enable 1
    [*DeviceA-GigabitEthernet1/0/0] quit
    [*DeviceA] commit

    Repeat this step for the other devices. For configuration details, see Configuration Files in this section.

    Set the cost of the link between DeviceC and DeviceD to 100, as specified in the networking requirements.

    # Configure DeviceC.

    [~DeviceC] interface gigabitethernet2/0/0
    [~DeviceC-GigabitEthernet2/0/0] isis cost 100
    [*DeviceC-GigabitEthernet2/0/0] quit
    [*DeviceC] commit

    # Configure DeviceD.

    [~DeviceD] interface gigabitethernet1/0/0
    [~DeviceD-GigabitEthernet1/0/0] isis cost 100
    [*DeviceD-GigabitEthernet1/0/0] quit
    [*DeviceD] commit

  3. Configure basic MPLS capabilities on the backbone network.

    # Configure Device A.

    [~DeviceA] mpls lsr-id 1.1.1.9
    [*DeviceA] mpls
    [*DeviceA-mpls] mpls te
    [*DeviceA-mpls] commit
    [~DeviceA-mpls] quit

    # Configure Device B.

    [~DeviceB] mpls lsr-id 2.2.2.9
    [*DeviceB] mpls
    [*DeviceB-mpls] mpls te
    [*DeviceB-mpls] commit
    [~DeviceB-mpls] quit

    # Configure Device C.

    [~DeviceC] mpls lsr-id 3.3.3.9
    [*DeviceC] mpls
    [*DeviceC-mpls] mpls te
    [*DeviceC-mpls] commit
    [~DeviceC-mpls] quit

    # Configure Device D.

    [~DeviceD] mpls lsr-id 4.4.4.9
    [*DeviceD] mpls
    [*DeviceD-mpls] mpls te
    [*DeviceD-mpls] commit
    [~DeviceD-mpls] quit

    # Configure Device E.

    [~DeviceE] mpls lsr-id 5.5.5.9
    [*DeviceE] mpls
    [*DeviceE-mpls] mpls te
    [*DeviceE-mpls] commit
    [~DeviceE-mpls] quit

    # Configure Device F.

    [~DeviceF] mpls lsr-id 6.6.6.9
    [*DeviceF] mpls
    [*DeviceF-mpls] mpls te
    [*DeviceF-mpls] commit
    [~DeviceF-mpls] quit

  4. Configure basic SR functions on the backbone network.

    # Configure Device A.

    [~DeviceA] segment-routing
    [*DeviceA-segment-routing] quit
    [*DeviceA] isis 1
    [*DeviceA-isis-1] segment-routing mpls
    [*DeviceA-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*DeviceA-isis-1] quit
    [*DeviceA] interface loopback 1
    [*DeviceA-LoopBack1] isis prefix-sid index 10
    [*DeviceA-LoopBack1] quit
    [*DeviceA] commit
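
The prefix SID configured above is an index, not a label. Each neighbor computes the actual MPLS label by adding the index to the base of the advertising node's SRGB, so DeviceA's index 10 within SRGB [16000, 23999] maps to label 16010. A minimal sketch of this mapping:

```python
def prefix_sid_label(srgb_base: int, srgb_end: int, index: int) -> int:
    """Map a prefix-SID index into the advertising node's SRGB.
    The label installed by neighbors is srgb_base + index."""
    label = srgb_base + index
    if not srgb_base <= label <= srgb_end:
        raise ValueError("prefix-SID index falls outside the SRGB")
    return label

# DeviceA advertises index 10 within SRGB [16000, 23999]:
print(prefix_sid_label(16000, 23999, 10))  # 16010
```

Because every node in this example uses the same SRGB, the label for a given node is identical network-wide, which simplifies troubleshooting.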

    # Configure Device B.

    [~DeviceB] segment-routing
    [*DeviceB-segment-routing] quit
    [*DeviceB] isis 1
    [*DeviceB-isis-1] segment-routing mpls
    [*DeviceB-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*DeviceB-isis-1] quit
    [*DeviceB] interface loopback 1
    [*DeviceB-LoopBack1] isis prefix-sid index 20
    [*DeviceB-LoopBack1] quit
    [*DeviceB] commit

    # Configure Device C.

    [~DeviceC] segment-routing
    [*DeviceC-segment-routing] quit
    [*DeviceC] isis 1
    [*DeviceC-isis-1] segment-routing mpls
    [*DeviceC-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*DeviceC-isis-1] quit
    [*DeviceC] interface loopback 1
    [*DeviceC-LoopBack1] isis prefix-sid index 30
    [*DeviceC-LoopBack1] quit
    [*DeviceC] commit

    # Configure Device D.

    [~DeviceD] segment-routing
    [*DeviceD-segment-routing] quit
    [*DeviceD] isis 1
    [*DeviceD-isis-1] segment-routing mpls
    [*DeviceD-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*DeviceD-isis-1] quit
    [*DeviceD] interface loopback 1
    [*DeviceD-LoopBack1] isis prefix-sid index 40
    [*DeviceD-LoopBack1] quit
    [*DeviceD] commit

    # Configure Device E.

    [~DeviceE] segment-routing
    [*DeviceE-segment-routing] quit
    [*DeviceE] isis 1
    [*DeviceE-isis-1] segment-routing mpls
    [*DeviceE-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*DeviceE-isis-1] quit
    [*DeviceE] interface loopback 1
    [*DeviceE-LoopBack1] isis prefix-sid index 50
    [*DeviceE-LoopBack1] quit
    [*DeviceE] commit

    # Configure Device F.

    [~DeviceF] segment-routing
    [*DeviceF-segment-routing] quit
    [*DeviceF] isis 1
    [*DeviceF-isis-1] segment-routing mpls
    [*DeviceF-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*DeviceF-isis-1] quit
    [*DeviceF] interface loopback 1
    [*DeviceF-LoopBack1] isis prefix-sid index 60
    [*DeviceF-LoopBack1] quit
    [*DeviceF] commit

    # After completing the configurations, run the display segment-routing prefix mpls forwarding command on each device. The command output shows that SR-MPLS BE LSPs have been established. The command output on DeviceA is used as an example.

    [~DeviceA] display segment-routing prefix mpls forwarding
                       Segment Routing Prefix MPLS Forwarding Information
                 --------------------------------------------------------------
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State          
    -----------------------------------------------------------------------------------------------------------------
    1.1.1.9/32         16010      NULL       Loop1             127.0.0.1        E     ---       1500    Active          
    2.2.2.9/32         16020      3          GE1/0/0           10.1.1.2         I&T   ---       1500    Active          
    3.3.3.9/32         16030      16030      GE1/0/0           10.1.1.2         I&T   ---       1500    Active          
    4.4.4.9/32         16040      16040      GE1/0/0           10.1.1.2         I&T   ---       1500    Active          
    5.5.5.9/32         16050      16050      GE1/0/0           10.1.1.2         I&T   ---       1500    Active          
    6.6.6.9/32         16060      16060      GE1/0/0           10.1.1.2         I&T   ---       1500    Active          
    
    Total information(s): 6
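
    As the output above shows, each prefix label equals the SRGB base plus the prefix SID index advertised for that node's loopback (for example, 16000 + 10 = 16010 for DeviceA). A minimal sketch of this mapping, using only the indexes configured in this example:

    ```python
    # SRGB base from "segment-routing global-block 16000 23999" in this example.
    SRGB_BASE = 16000

    # Prefix SID indexes from the "isis prefix-sid index <n>" commands above.
    prefix_sid_index = {
        "DeviceA": 10, "DeviceB": 20, "DeviceC": 30,
        "DeviceD": 40, "DeviceE": 50, "DeviceF": 60,
    }

    def prefix_label(index: int, srgb_base: int = SRGB_BASE) -> int:
        """Each node derives the prefix label as SRGB base + prefix SID index."""
        return srgb_base + index

    labels = {dev: prefix_label(i) for dev, i in prefix_sid_index.items()}
    print(labels)  # DeviceA -> 16010, ..., DeviceF -> 16060
    ```

    This is why the forwarding table lists labels 16010 through 16060: all six devices share the same SRGB, so every node computes identical labels for each prefix.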

  5. Configure an SR-MPLS TE tunnel.

    # Configure Device A.

    [~DeviceA] explicit-path p1
    [*DeviceA-explicit-path-p1] next sid label 16020 type prefix
    [*DeviceA-explicit-path-p1] next sid label 16050 type prefix
    [*DeviceA-explicit-path-p1] next sid label 16060 type prefix
    [*DeviceA-explicit-path-p1] quit
    [*DeviceA] interface tunnel1
    [*DeviceA-Tunnel1] ip address unnumbered interface LoopBack1
    [*DeviceA-Tunnel1] tunnel-protocol mpls te
    [*DeviceA-Tunnel1] destination 6.6.6.9
    [*DeviceA-Tunnel1] mpls te tunnel-id 1
    [*DeviceA-Tunnel1] mpls te signal-protocol segment-routing
    [*DeviceA-Tunnel1] mpls te path explicit-path p1
    [*DeviceA-Tunnel1] commit
    [~DeviceA-Tunnel1] quit
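
    The explicit path p1 steers traffic through the prefix SIDs of DeviceB (16020), DeviceE (16050), and DeviceF (16060) in order. A simplified sketch of how the segment list is consumed (ignoring penultimate hop popping and ECMP, which the real forwarding plane also applies):

    ```python
    # Map each prefix label in path p1 to the node that owns it (from this example).
    label_owner = {16020: "DeviceB", 16050: "DeviceE", 16060: "DeviceF"}

    def forward(segment_list):
        """Walk the segment list: the owner of the top label removes its own
        SID and forwards the packet toward the owner of the next label."""
        path = []
        stack = list(segment_list)
        while stack:
            top = stack.pop(0)
            path.append(label_owner[top])
        return path

    print(forward([16020, 16050, 16060]))  # ['DeviceB', 'DeviceE', 'DeviceF']
    ```

    This matches the A -> B -> E -> F path that the tunnel verification output below the configuration confirms.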

    # After completing the configurations, run the display mpls te tunnel destination command on Device A. The command output shows that the SR-MPLS TE tunnel has been established.

    [~DeviceA] display mpls te tunnel destination 6.6.6.9
    * means the LSP is detour LSP
    -------------------------------------------------------------------------------
    Ingress LsrId   Destination     LSPID In/OutLabel     R Tunnel-name
    -------------------------------------------------------------------------------
    1.1.1.9         6.6.6.9         5     -/16020         I Tunnel1
    -------------------------------------------------------------------------------
    R: Role, I: Ingress, T: Transit, E: Egress

    # Run the display mpls te tunnel-interface command on Device A. The command output shows information about the SR-MPLS TE tunnel.

    [~DeviceA] display mpls te tunnel-interface Tunnel 1
        Tunnel Name       : Tunnel1
        Signalled Tunnel Name: -
        Tunnel State Desc : CR-LSP is Up
        Tunnel Attributes   :     
        Active LSP          : Primary LSP
        Traffic Switch      : - 
        Session ID          : 1
        Ingress LSR ID      : 1.1.1.9               Egress LSR ID: 6.6.6.9
        Admin State         : UP                    Oper State   : UP
        Signaling Protocol  : Segment-Routing
        FTid                : 8193
        Tie-Breaking Policy : None                  Metric Type  : TE
        Bfd Cap             : None                  
        Reopt               : Disabled              Reopt Freq   : -   
        Inter-area Reopt    : Disabled             
        Auto BW             : Disabled              Threshold    : - 
        Current Collected BW: -                     Auto BW Freq : -
        Min BW              : -                     Max BW       : -
        Offload             : Disabled              Offload Freq : - 
        Low Value           : -                     High Value   : - 
        Readjust Value      : - 
        Offload Explicit Path Name: -
        Tunnel Group        : Primary
        Interfaces Protected: -
        Excluded IP Address : -
        Referred LSP Count  : 0
        Primary Tunnel      : -                     Pri Tunn Sum : -
        Backup Tunnel       : -
        Group Status        : Up                    Oam Status   : None
        IPTN InLabel        : -                     Tunnel BFD Status : -
        BackUp LSP Type     : None                  BestEffort   : -
        Secondary HopLimit  : -
        BestEffort HopLimit  : -
        Secondary Explicit Path Name: -
        Secondary Affinity Prop/Mask: 0x0/0x0
        BestEffort Affinity Prop/Mask: -  
        IsConfigLspConstraint: -
        Hot-Standby Revertive Mode:  Revertive
        Hot-Standby Overlap-path:  Disabled
        Hot-Standby Switch State:  CLEAR
        Bit Error Detection:  Disabled
        Bit Error Detection Switch Threshold:  -
        Bit Error Detection Resume Threshold:  -
        Ip-Prefix Name    : -
        P2p-Template Name : -
        PCE Delegate      : No                    LSP Control Status : Local control
        Path Verification : No
        Entropy Label     : -
        Associated Tunnel Group ID: -             Associated Tunnel Group Type: -
        Auto BW Remain Time   : -                 Reopt Remain Time     : - 
        Segment-Routing Remote Label   : -
        Metric Inherit IGP : None
        Binding Sid       : -                     Reverse Binding Sid : - 
        FRR Attr Source   : -                     Is FRR degrade down : -
        Color             : - 
        
        Primary LSP ID      : 1.1.1.9:5
        LSP State           : UP                    LSP Type     : Primary
        Setup Priority      : 7                     Hold Priority: 7
        IncludeAll          : 0x0
        IncludeAny          : 0x0
        ExcludeAny          : 0x0
        Affinity Prop/Mask  : 0x0/0x0               Resv Style   :  SE
        SidProtectType      : - 
        Configured Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Actual Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Explicit Path Name  : p1                               Hop Limit: -
        Record Route        : -                            Record Label : -
        Route Pinning       : -
        FRR Flag            : -
        IdleTime Remain     : -
        BFD Status          : -
        Soft Preemption     : -
        Reroute Flag        : -
        Pce Flag            : Normal
        Path Setup Type     : EXPLICIT
        Create Modify LSP Reason: -

  6. Configure TI-LFA FRR.

    # Configure Device B.

    [~DeviceB] isis 1
    [~DeviceB-isis-1] avoid-microloop frr-protected
    [*DeviceB-isis-1] avoid-microloop frr-protected rib-update-delay 5000
    [*DeviceB-isis-1] avoid-microloop segment-routing
    [*DeviceB-isis-1] avoid-microloop segment-routing rib-update-delay 10000
    [*DeviceB-isis-1] frr
    [*DeviceB-isis-1-frr] loop-free-alternate level-1
    [*DeviceB-isis-1-frr] ti-lfa level-1
    [*DeviceB-isis-1-frr] quit
    [*DeviceB-isis-1] quit
    [*DeviceB] commit

    After completing the configurations, run the display isis route [ level-1 | level-2 ] [ process-id ] [ verbose ] command on Device B. The command output shows IS-IS TI-LFA FRR backup entries.

    [~DeviceB] display isis route level-1 verbose
                             Route information for ISIS(1)
                             -----------------------------
    
                            ISIS(1) Level-1 Forwarding Table
                            --------------------------------
    
    
     IPV4 Dest  : 1.1.1.9/32         Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : A/-/-/-
     Priority   : Medium             Age       : 06:02:52
     NextHop    :                    Interface :               ExitIndex :
        10.1.1.1                           GE1/0/0                    0x0000000e
     Prefix-sid : 16010              Weight    : 0             Flags     : -/N/-/-/-/-/A/-
     SR NextHop :                    Interface :               OutLabel  :
        10.1.1.1                           GE1/0/0                    3
    
     IPV4 Dest  : 2.2.2.9/32         Int. Cost : 0             Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : D/-/L/-
     Priority   : -                  Age       : 06:02:52
     NextHop    :                    Interface :               ExitIndex :
        Direct                             Loop1                      0x00000000
     Prefix-sid : 16020              Weight    : 0             Flags     : -/N/-/-/-/-/A/L
     SR NextHop :                    Interface :               OutLabel  :
        Direct                             Loop1                      -
    
     IPV4 Dest  : 3.3.3.9/32         Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : A/-/-/-
     Priority   : Medium             Age       : 00:03:21
     NextHop    :                    Interface :               ExitIndex :
        10.2.1.2                           GE2/0/0                   0x0000000a
     TI-LFA:        
     Interface  : GE3/0/0 
     NextHop    : 10.5.1.2           LsIndex    : 0x00000002   ProtectType: L
     Backup Label Stack (Top -> Bottom): {16040, 48141}
     Prefix-sid : 16030              Weight    : 0             Flags     : -/N/-/-/-/-/A/-
     SR NextHop :                    Interface :               OutLabel  :
        10.2.1.2                           GE2/0/0                   3
     TI-LFA:        
     Interface  : GE3/0/0 
     NextHop    : 10.5.1.2           LsIndex    : 0x00000002   ProtectType: L
     Backup Label Stack (Top -> Bottom): {16040, 48141}
    
     IPV4 Dest  : 4.4.4.9/32         Int. Cost : 20            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : A/-/-/-
     Priority   : Medium             Age       : 00:03:21
     NextHop    :                    Interface :               ExitIndex :
        10.5.1.2                           GE3/0/0                    0x00000007
     TI-LFA:        
     Interface  : GE2/0/0 
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: N
     Backup Label Stack (Top -> Bottom): {48142}
     Prefix-sid : 16040              Weight    : 0             Flags     : -/N/-/-/-/-/A/-
     SR NextHop :                    Interface :               OutLabel  :
        10.5.1.2                           GE3/0/0                    16040
     TI-LFA:        
     Interface  : GE2/0/0                                                              
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: N
     Backup Label Stack (Top -> Bottom): {48142}
    
     IPV4 Dest  : 5.5.5.9/32         Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : A/-/-/-
     Priority   : Medium             Age       : 00:03:21
     NextHop    :                    Interface :               ExitIndex :
        10.5.1.2                           GE3/0/0                    0x00000007
     TI-LFA:        
     Interface  : GE2/0/0                                                              
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {48142}
     Prefix-sid : 16050              Weight    : 0             Flags     : -/N/-/-/-/-/A/-
     SR NextHop :                    Interface :               OutLabel  :
        10.5.1.2                           GE3/0/0                    3
     TI-LFA:        
     Interface  : GE2/0/0                                                              
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {48142}
    
     IPV4 Dest  : 6.6.6.9/32         Int. Cost : 20            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 1             Flags     : A/-/-/-
     Priority   : Medium             Age       : 00:03:21
     NextHop    :                    Interface :               ExitIndex :
        10.5.1.2                           GE3/0/0                    0x00000007
     TI-LFA:        
     Interface  : GE2/0/0                                                              
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {48142}
     Prefix-sid : 16060              Weight    : 0             Flags     : -/N/-/-/-/-/A/-
     SR NextHop :                    Interface :               OutLabel  :
        10.5.1.2                           GE3/0/0                    16060
     TI-LFA:        
     Interface  : GE2/0/0                                                              
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {48142}
    
     IPV4 Dest  : 10.1.1.0/24        Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 2             Flags     : D/-/L/-
     Priority   : -                  Age       : 06:02:52
     NextHop    :                    Interface :               ExitIndex :
        Direct                             GE1/0/0                    0x00000000
    
     IPV4 Dest  : 10.2.1.0/24        Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 2             Flags     : D/-/L/-
     Priority   : -                  Age       : 06:02:52
     NextHop    :                    Interface :               ExitIndex :
        Direct                             GE2/0/0                   0x00000000
    
     IPV4 Dest  : 10.3.1.0/24        Int. Cost : 110           Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 2             Flags     : A/-/-/-
     Priority   : Low                Age       : 00:03:21
     NextHop    :                    Interface :               ExitIndex :
        10.2.1.2                           GE2/0/0                   0x0000000a
     TI-LFA:        
     Interface  : GE3/0/0                                                               
     NextHop    : 10.5.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {}
    
     IPV4 Dest  : 10.4.1.0/24        Int. Cost : 20            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 2             Flags     : A/-/-/-
     Priority   : Low                Age       : 00:03:21
     NextHop    :                    Interface :               ExitIndex :
        10.5.1.2                           GE3/0/0                    0x00000007
     TI-LFA:        
     Interface  : GE2/0/0                                                              
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {48142}
    
     IPV4 Dest  : 10.5.1.0/24        Int. Cost : 10            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 2             Flags     : D/-/L/-
     Priority   : -                  Age       : 00:03:44
     NextHop    :                    Interface :               ExitIndex :
        Direct                             GE3/0/0                    0x00000000
    
     IPV4 Dest  : 10.6.1.0/24        Int. Cost : 20            Ext. Cost : NULL
     Admin Tag  : -                  Src Count : 2             Flags     : A/-/-/-
     Priority   : Low                Age       : 00:03:21
     NextHop    :                    Interface :               ExitIndex :
        10.5.1.2                           GE3/0/0                    0x00000007
     TI-LFA:        
     Interface  : GE2/0/0                                                              
     NextHop    : 10.2.1.2           LsIndex    : 0x00000003   ProtectType: L
     Backup Label Stack (Top -> Bottom): {48142}
         Flags: D-Direct, A-Added to URT, L-Advertised in LSPs, S-IGP Shortcut, 
                U-Up/Down Bit Set, LP-Local Prefix-Sid
         Protect Type: L-Link Protect, N-Node Protect
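
    In the output above, each TI-LFA backup label stack combines a prefix SID that reaches the repair (P) node with, where needed, an adjacency SID that crosses the protected segment. For example, to protect traffic to 3.3.3.9 (DeviceC), DeviceB pushes {16040, 48141}: the prefix SID of DeviceD followed by a D-to-C adjacency SID. Labels such as 48141 and 48142 are allocated dynamically by IS-IS, so their values are example-specific. A rough sketch of how the point of local repair (PLR) assembles such a stack:

    ```python
    # Hedged sketch of TI-LFA backup-stack assembly on the PLR.
    # Adjacency label values (48141, 48142) are dynamic and example-specific.

    def tilfa_backup_stack(p_node_prefix_label=None, adjacency_labels=()):
        """Backup stack = optional prefix SID reaching the P node (loop-free
        toward it), followed by adjacency SIDs forcing the repair hops."""
        stack = []
        if p_node_prefix_label is not None:
            stack.append(p_node_prefix_label)
        stack.extend(adjacency_labels)
        return stack

    # DeviceB protecting traffic to DeviceC: reach DeviceD via its prefix SID
    # (16040), then take the dynamically labeled D->C adjacency (48141 here).
    print(tilfa_backup_stack(16040, (48141,)))          # [16040, 48141]
    # Destinations reached through DeviceC only need the C->D adjacency SID.
    print(tilfa_backup_stack(adjacency_labels=(48142,)))  # [48142]
    ```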

  7. Verify the configuration.

    # Run a tracert command on Device A to check the connectivity of the SR-MPLS TE tunnel to Device F. For example:

    [~DeviceA] tracert lsp segment-routing te Tunnel 1
      LSP Trace Route FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to break.
      TTL    Replier            Time    Type      Downstream
      0                                 Ingress   10.1.1.2/[16050 16060 ]
      1      10.1.1.2           2 ms    Transit   10.5.1.2/[3 ]
      2      10.5.1.2           3 ms    Transit   10.6.1.2/[3 ]
      3      6.6.6.9            3 ms    Egress 

    # Run the shutdown command on GE 3/0/0 of DeviceB to simulate a link fault between DeviceB and DeviceE.

    [~DeviceB] interface gigabitethernet3/0/0
    [~DeviceB-GigabitEthernet3/0/0] shutdown
    [*DeviceB-GigabitEthernet3/0/0] quit
    [*DeviceB] commit

    # Run the tracert command on DeviceA again to check the connectivity of the SR-MPLS TE tunnel. For example:

    [~DeviceA] tracert lsp segment-routing te Tunnel 1
      LSP Trace Route FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to break.
      TTL    Replier            Time    Type      Downstream
      0                                 Ingress   10.1.1.2/[16050 16060 ]
      1      10.1.1.2           3 ms    Transit   10.2.1.2/[16050 ]
      2      10.2.1.2           4 ms    Transit   10.3.1.2/[16050 ]
      3      10.3.1.2           4 ms    Transit   10.4.1.2/[3 ]
      4      10.4.1.2           3 ms    Transit   10.6.1.2/[3 ]
      5      6.6.6.9            5 ms    Egress 

    The preceding command output shows that the SR-MPLS TE tunnel has been switched to the TI-LFA FRR backup path.

Configuration Files

  • Device A configuration file

    #
    sysname DeviceA
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls
     mpls te 
    #
    explicit-path p1
     next sid label 16020 type prefix index 1
     next sid label 16050 type prefix index 2
     next sid label 16060 type prefix index 3
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.1.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 10
    #               
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 6.6.6.9
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
     mpls te path explicit-path p1 
    #
    return
  • Device B configuration file

    #
    sysname DeviceB
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls  
     mpls te          
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     avoid-microloop frr-protected
     avoid-microloop frr-protected rib-update-delay 5000
     segment-routing mpls
     segment-routing global-block 16000 23999
     avoid-microloop segment-routing
     avoid-microloop segment-routing rib-update-delay 10000
     frr
      loop-free-alternate level-1
      ti-lfa level-1
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.1.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.2.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.5.1.1 255.255.255.0
     isis enable 1 
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 20
    #
    return
  • Device C configuration file

    #
    sysname DeviceC
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls  
     mpls te          
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.2.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.3.1.1 255.255.255.0
     isis enable 1 
     isis cost 100
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 30
    #
    return
  • Device D configuration file

    #
    sysname DeviceD
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls  
     mpls te          
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.3.1.2 255.255.255.0
     isis enable 1  
     isis cost 100
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.4.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 40
    #
    return
  • Device E configuration file

    #
    sysname DeviceE
    #
    mpls lsr-id 5.5.5.9
    #               
    mpls  
     mpls te          
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0005.00
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.6.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.4.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.5.1.2 255.255.255.0
     isis enable 1 
    #               
    interface LoopBack1
     ip address 5.5.5.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 50
    #
    return
  • Device F configuration file

    #
    sysname DeviceF
    #
    mpls lsr-id 6.6.6.9
    #               
    mpls     
     mpls te       
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0006.00
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.6.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 6.6.6.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 60
    #
    return

Example for Configuring an E2E SR-MPLS TE Tunnel (Explicit Path Used)

An inter-AS E2E SR-MPLS TE tunnel can be configured to provide a secure data channel for services, for example, inter-AS VPN services.


Networking Requirements

On the network shown in Figure 1-2677, PE1 and ASBR1 reside in AS 100, PE2 and ASBR2 reside in AS 200, and ASBR1 and ASBR2 are directly connected through two physical links. A bidirectional E2E tunnel needs to be established between PE1 and PE2. SR is used for path generation and data forwarding. In the PE1-to-PE2 direction, PE1 and PE2 are the ingress and egress of the path, respectively. In the PE2-to-PE1 direction, PE2 and PE1 are the ingress and egress of the path, respectively.

Figure 1-2677 E2E SR-MPLS TE tunnel networking

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure intra-AS SR-MPLS TE tunnels in AS 100 and AS 200. Set binding SIDs for the SR-MPLS TE tunnels.

  2. Configure an EBGP peer relationship between ASBR1 and ASBR2, enable BGP EPE and BGP-LS, and enable the devices to generate BGP peer SIDs.

  3. Create an E2E SR-MPLS TE tunnel interface on PE1 and PE2. Specify the IP address, tunneling protocol, and destination address of each tunnel. Explicit paths are used for path calculation.
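
    Conceptually, the E2E explicit path stitches three segments together: the binding SID of the intra-AS tunnel in AS 100, the BGP peer SID of the inter-AS link, and the binding SID of the intra-AS tunnel in AS 200. The label values below are placeholders for illustration only; the actual binding SIDs and BGP peer SIDs are configured and generated in the steps that follow.

    ```python
    # Hypothetical label values; real values come from the configuration below.
    BSID_AS100_TUNNEL = 1000  # binding SID of the AS 100 tunnel (PE1 -> ASBR1)
    BGP_PEER_SID = 2000       # BGP EPE peer SID on the ASBR1 -> ASBR2 link
    BSID_AS200_TUNNEL = 3000  # binding SID of the AS 200 tunnel (ASBR2 -> PE2)

    def e2e_segment_list(bsid_a, peer_sid, bsid_b):
        """The E2E SR-MPLS TE path is the ordered concatenation of the
        intra-AS binding SID, the inter-AS peer SID, and the remote
        intra-AS binding SID."""
        return [bsid_a, peer_sid, bsid_b]

    print(e2e_segment_list(BSID_AS100_TUNNEL, BGP_PEER_SID, BSID_AS200_TUNNEL))
    ```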

Data Preparation

To complete the configuration, you need the following data:

  • IP addresses of interfaces as shown in Figure 1-2677

  • IS-IS process ID (1), IS-IS level (Level-2), and network entities (10.0000.0000.0001.00, 10.0000.0000.0002.00, 10.0000.0000.0003.00, and 10.0000.0000.0004.00)

  • AS number (100) of PE1 and ASBR1 and that (200) of PE2 and ASBR2

  • SR-MPLS TE tunnel interface names in AS 100 (Tunnel1) and AS 200 (Tunnel2); tunnel interface name of the PE1-to-PE2 E2E SR-MPLS TE tunnel (Tunnel3) and that of the PE2-to-PE1 E2E SR-MPLS TE tunnel (Tunnel3)

Procedure

  1. Assign an IP address and a mask to each interface.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.1 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.0.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure ASBR1.

    <HUAWEI> system-view
    [~HUAWEI] sysname ASBR1
    [*HUAWEI] commit
    [~ASBR1] interface loopback 1
    [*ASBR1-LoopBack1] ip address 2.2.2.2 32
    [*ASBR1-LoopBack1] quit
    [*ASBR1] interface gigabitethernet1/0/0
    [*ASBR1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*ASBR1-GigabitEthernet1/0/0] quit
    [*ASBR1] interface gigabitethernet2/0/0
    [*ASBR1-GigabitEthernet2/0/0] ip address 10.2.1.1 24
    [*ASBR1-GigabitEthernet2/0/0] quit
    [*ASBR1] interface gigabitethernet3/0/0
    [*ASBR1-GigabitEthernet3/0/0] ip address 10.0.1.2 24
    [*ASBR1-GigabitEthernet3/0/0] quit
    [*ASBR1] commit

    # Configure ASBR2.

    <HUAWEI> system-view
    [~HUAWEI] sysname ASBR2
    [*HUAWEI] commit
    [~ASBR2] interface loopback 1
    [*ASBR2-LoopBack1] ip address 3.3.3.3 32
    [*ASBR2-LoopBack1] quit
    [*ASBR2] interface gigabitethernet1/0/0
    [*ASBR2-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*ASBR2-GigabitEthernet1/0/0] quit
    [*ASBR2] interface gigabitethernet2/0/0
    [*ASBR2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*ASBR2-GigabitEthernet2/0/0] quit
    [*ASBR2] interface gigabitethernet3/0/0
    [*ASBR2-GigabitEthernet3/0/0] ip address 10.9.1.1 24
    [*ASBR2-GigabitEthernet3/0/0] quit
    [*ASBR2] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 4.4.4.4 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.9.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

  2. Configure an intra-AS SR-MPLS TE tunnel in AS 100.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.1
    [*PE1] mpls
    [*PE1-mpls] mpls te
    [*PE1-mpls] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-2
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] isis prefix-sid absolute 16100
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] mpls
    [*PE1-GigabitEthernet1/0/0] mpls te
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] explicit-path path2asbr1
    [*PE1-explicit-path-path2asbr1] next sid label 330102 type adjacency
    [*PE1-explicit-path-path2asbr1] quit
    [*PE1] interface tunnel1
    [*PE1-Tunnel1] ip address unnumbered interface loopback 1
    [*PE1-Tunnel1] tunnel-protocol mpls te
    [*PE1-Tunnel1] destination 2.2.2.2
    [*PE1-Tunnel1] mpls te tunnel-id 1
    [*PE1-Tunnel1] mpls te signal-protocol segment-routing
    [*PE1-Tunnel1] mpls te path explicit-path path2asbr1
    [*PE1-Tunnel1] commit
    [~PE1-Tunnel1] quit

    In the preceding steps, the next sid label command uses the adjacency label from PE1 to ASBR1, which is dynamically generated by IS-IS. Before the configuration, you can run the display segment-routing adjacency mpls forwarding command to query the label value. For example:

    [~PE1] display segment-routing adjacency mpls forwarding
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------------
    330102    GE1/0/0           10.0.1.2         ISIS-V4     ---       1500      _public_      
    
    Total information(s): 1

    # Configure ASBR1.

    [~ASBR1] mpls lsr-id 2.2.2.2
    [*ASBR1] mpls
    [*ASBR1-mpls] mpls te
    [*ASBR1-mpls] quit
    [*ASBR1] segment-routing
    [*ASBR1-segment-routing] quit
    [*ASBR1] isis 1
    [*ASBR1-isis-1] is-level level-2
    [*ASBR1-isis-1] network-entity 10.0000.0000.0002.00
    [*ASBR1-isis-1] cost-style wide
    [*ASBR1-isis-1] traffic-eng level-2
    [*ASBR1-isis-1] segment-routing mpls
    [*ASBR1-isis-1] quit
    [*ASBR1] interface loopback 1
    [*ASBR1-LoopBack1] isis enable 1
    [*ASBR1-LoopBack1] isis prefix-sid absolute 16200
    [*ASBR1-LoopBack1] quit
    [*ASBR1] interface gigabitethernet3/0/0
    [*ASBR1-GigabitEthernet3/0/0] isis enable 1
    [*ASBR1-GigabitEthernet3/0/0] mpls
    [*ASBR1-GigabitEthernet3/0/0] mpls te
    [*ASBR1-GigabitEthernet3/0/0] quit
    [*ASBR1] explicit-path path2pe1
    [*ASBR1-explicit-path-path2pe1] next sid label 330201 type adjacency
    [*ASBR1-explicit-path-path2pe1] quit
    [*ASBR1] interface tunnel1
    [*ASBR1-Tunnel1] ip address unnumbered interface loopback 1
    [*ASBR1-Tunnel1] tunnel-protocol mpls te
    [*ASBR1-Tunnel1] destination 1.1.1.1
    [*ASBR1-Tunnel1] mpls te tunnel-id 1
    [*ASBR1-Tunnel1] mpls te signal-protocol segment-routing
    [*ASBR1-Tunnel1] mpls te path explicit-path path2pe1
    [*ASBR1-Tunnel1] commit
    [~ASBR1-Tunnel1] quit

    In the preceding step, an ASBR1-to-PE1 adjacency label is used in the next sid label command and is dynamically generated using IS-IS. To obtain the label value, run the display segment-routing adjacency mpls forwarding command. For example:

    [~ASBR1] display segment-routing adjacency mpls forwarding
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------------
    330201    GE3/0/0           10.0.1.1         ISIS-V4     ---       1500      _public_      
    
    Total information(s): 1

  3. Configure an intra-AS SR-MPLS TE tunnel in AS 200.

    # Configure ASBR2.

    [~ASBR2] mpls lsr-id 3.3.3.3
    [*ASBR2] mpls
    [*ASBR2-mpls] mpls te
    [*ASBR2-mpls] quit
    [*ASBR2] segment-routing
    [*ASBR2-segment-routing] quit
    [*ASBR2] isis 1
    [*ASBR2-isis-1] is-level level-2
    [*ASBR2-isis-1] network-entity 10.0000.0000.0003.00
    [*ASBR2-isis-1] cost-style wide
    [*ASBR2-isis-1] traffic-eng level-2
    [*ASBR2-isis-1] segment-routing mpls
    [*ASBR2-isis-1] quit
    [*ASBR2] interface loopback 1
    [*ASBR2-LoopBack1] isis enable 1
    [*ASBR2-LoopBack1] isis prefix-sid absolute 16300
    [*ASBR2-LoopBack1] quit
    [*ASBR2] interface gigabitethernet3/0/0
    [*ASBR2-GigabitEthernet3/0/0] isis enable 1
    [*ASBR2-GigabitEthernet3/0/0] mpls
    [*ASBR2-GigabitEthernet3/0/0] mpls te
    [*ASBR2-GigabitEthernet3/0/0] quit
    [*ASBR2] explicit-path path2pe2
    [*ASBR2-explicit-path-path2pe2] next sid label 330304 type adjacency
    [*ASBR2-explicit-path-path2pe2] quit
    [*ASBR2] interface tunnel2
    [*ASBR2-Tunnel2] ip address unnumbered interface loopback 1
    [*ASBR2-Tunnel2] tunnel-protocol mpls te
    [*ASBR2-Tunnel2] destination 4.4.4.4
    [*ASBR2-Tunnel2] mpls te tunnel-id 1
    [*ASBR2-Tunnel2] mpls te signal-protocol segment-routing
    [*ASBR2-Tunnel2] mpls te path explicit-path path2pe2
    [*ASBR2-Tunnel2] commit
    [~ASBR2-Tunnel2] quit

    In the preceding step, an ASBR2-to-PE2 adjacency label is used in the next sid label command and is dynamically generated using IS-IS. To obtain the label value, run the display segment-routing adjacency mpls forwarding command. For example:

    [~ASBR2] display segment-routing adjacency mpls forwarding
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------------
    330304    GE3/0/0           10.9.1.2         ISIS-V4     ---       1500      _public_      
    
    Total information(s): 1

    # Configure PE2.

    [~PE2] mpls lsr-id 4.4.4.4
    [*PE2] mpls
    [*PE2-mpls] mpls te
    [*PE2-mpls] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] network-entity 10.0000.0000.0004.00
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-2
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] isis prefix-sid absolute 16400
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] mpls
    [*PE2-GigabitEthernet1/0/0] mpls te
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] explicit-path path2asbr2
    [*PE2-explicit-path-path2asbr2] next sid label 330403 type adjacency
    [*PE2-explicit-path-path2asbr2] quit
    [*PE2] interface tunnel2
    [*PE2-Tunnel2] ip address unnumbered interface loopback 1
    [*PE2-Tunnel2] tunnel-protocol mpls te
    [*PE2-Tunnel2] destination 3.3.3.3
    [*PE2-Tunnel2] mpls te tunnel-id 1
    [*PE2-Tunnel2] mpls te signal-protocol segment-routing
    [*PE2-Tunnel2] mpls te path explicit-path path2asbr2
    [*PE2-Tunnel2] commit
    [~PE2-Tunnel2] quit

    In the preceding step, a PE2-to-ASBR2 adjacency label is used in the next sid label command and is dynamically generated using IS-IS. To obtain the label value, run the display segment-routing adjacency mpls forwarding command. For example:

    [~PE2] display segment-routing adjacency mpls forwarding
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------------
    330403    GE1/0/0           10.9.1.1         ISIS-V4     ---       1500      _public_      
    
    Total information(s): 1

  4. Configure the MPLS TE capability on ASBRs.

    # Configure ASBR1.

    [~ASBR1] interface gigabitethernet1/0/0
    [~ASBR1-GigabitEthernet1/0/0] mpls
    [*ASBR1-GigabitEthernet1/0/0] mpls te
    [*ASBR1-GigabitEthernet1/0/0] quit
    [*ASBR1] interface gigabitethernet2/0/0
    [*ASBR1-GigabitEthernet2/0/0] mpls
    [*ASBR1-GigabitEthernet2/0/0] mpls te
    [*ASBR1-GigabitEthernet2/0/0] quit
    [*ASBR1] commit

    # Configure ASBR2.

    [~ASBR2] interface gigabitethernet1/0/0
    [~ASBR2-GigabitEthernet1/0/0] mpls
    [*ASBR2-GigabitEthernet1/0/0] mpls te
    [*ASBR2-GigabitEthernet1/0/0] quit
    [*ASBR2] interface gigabitethernet2/0/0
    [*ASBR2-GigabitEthernet2/0/0] mpls
    [*ASBR2-GigabitEthernet2/0/0] mpls te
    [*ASBR2-GigabitEthernet2/0/0] quit
    [*ASBR2] commit

  5. Establish an EBGP peer relationship between ASBRs and enable BGP EPE and BGP-LS.

    In this example, a loopback interface is used to establish a multi-hop EBGP peer relationship. Before the configuration, ensure that the loopback interfaces of ASBR1 and ASBR2 are routable to each other.

    BGP EPE supports only EBGP peer relationships. Even when a multi-hop EBGP peer relationship is established between loopback interfaces, the peers must be directly connected through physical links. If intermediate nodes exist, no BGP peer SIDs are allocated on them, causing forwarding failures.

    # Configure ASBR1.

    [~ASBR1] ip route-static 3.3.3.3 32 gigabitethernet1/0/0 10.1.1.2 description asbr1toasbr2
    [*ASBR1] ip route-static 3.3.3.3 32 gigabitethernet2/0/0 10.2.1.2 description asbr1toasbr2
    [*ASBR1] bgp 100
    [*ASBR1-bgp] peer 3.3.3.3 as-number 200
    [*ASBR1-bgp] peer 3.3.3.3 connect-interface loopback 1
    [*ASBR1-bgp] peer 3.3.3.3 ebgp-max-hop 2
    [*ASBR1-bgp] peer 3.3.3.3 egress-engineering
    [*ASBR1-bgp] link-state-family unicast
    [*ASBR1-bgp-af-ls] quit
    [*ASBR1-bgp] ipv4-family unicast
    [*ASBR1-bgp-af-ipv4] network 2.2.2.2 32
    [*ASBR1-bgp-af-ipv4] network 10.1.1.0 24
    [*ASBR1-bgp-af-ipv4] network 10.2.1.0 24
    [*ASBR1-bgp-af-ipv4] import-route isis 1
    [*ASBR1-bgp-af-ipv4] commit
    [~ASBR1-bgp-af-ipv4] quit
    [~ASBR1-bgp] quit

    # Configure ASBR2.

    [~ASBR2] ip route-static 2.2.2.2 32 gigabitethernet1/0/0 10.1.1.1 description asbr2toasbr1
    [*ASBR2] ip route-static 2.2.2.2 32 gigabitethernet2/0/0 10.2.1.1 description asbr2toasbr1
    [*ASBR2] bgp 200
    [*ASBR2-bgp] peer 2.2.2.2 as-number 100
    [*ASBR2-bgp] peer 2.2.2.2 connect-interface loopback 1
    [*ASBR2-bgp] peer 2.2.2.2 ebgp-max-hop 2
    [*ASBR2-bgp] peer 2.2.2.2 egress-engineering
    [*ASBR2-bgp] link-state-family unicast
    [*ASBR2-bgp-af-ls] quit
    [*ASBR2-bgp] ipv4-family unicast
    [*ASBR2-bgp-af-ipv4] network 3.3.3.3 32
    [*ASBR2-bgp-af-ipv4] network 10.1.1.0 24
    [*ASBR2-bgp-af-ipv4] network 10.2.1.0 24
    [*ASBR2-bgp-af-ipv4] import-route isis 1
    [*ASBR2-bgp-af-ipv4] commit
    [~ASBR2-bgp-af-ipv4] quit
    [~ASBR2-bgp] quit

    After the configuration, run the display bgp egress-engineering command to check BGP EPE information. For example:

    [~ASBR1] display bgp egress-engineering
     Peer Node                : 3.3.3.3
     Peer Adj Num             : 2
     Local ASN                : 100
     Remote ASN               : 200
     Local Router Id          : 2.2.2.2
     Remote Router Id         : 3.3.3.3
     Local Interface Address  : 2.2.2.2
     Remote Interface Address : 3.3.3.3
     SID Label                : 32768
     Peer Set SID Label       : --
     Nexthop                  : 10.1.1.2
     Out Interface            : GigabitEthernet1/0/0
     Nexthop                  : 10.2.1.2
     Out Interface            : GigabitEthernet2/0/0
    
     Peer Adj                 : 10.1.1.2
     Local ASN                : 100
     Remote ASN               : 200
     Local Router Id          : 2.2.2.2
     Remote Router Id         : 3.3.3.3
     Interface Identifier     : 6
     Local Interface Address  : 10.1.1.1
     Remote Interface Address : 10.1.1.2
     SID Label                : 32770
     Nexthop                  : 10.1.1.2
     Out Interface            : GigabitEthernet1/0/0
     
     Peer Adj                 : 10.2.1.2
     Local ASN                : 100
     Remote ASN               : 200
     Local Router Id          : 2.2.2.2
     Remote Router Id         : 3.3.3.3
     Interface Identifier     : 7
     Local Interface Address  : 10.2.1.1
     Remote Interface Address : 10.2.1.2
     SID Label                : 32769
     Nexthop                  : 10.2.1.2
     Out Interface            : GigabitEthernet2/0/0
    [~ASBR2] display bgp egress-engineering
     Peer Node                : 2.2.2.2
     Peer Adj Num             : 2
     Local ASN                : 200
     Remote ASN               : 100
     Local Router Id          : 3.3.3.3
     Remote Router Id         : 2.2.2.2
     Local Interface Address  : 3.3.3.3
     Remote Interface Address : 2.2.2.2
     SID Label                : 31768
     Peer Set SID Label       : --
     Nexthop                  : 10.1.1.1
     Out Interface            : GigabitEthernet1/0/0
     Nexthop                  : 10.2.1.1
     Out Interface            : GigabitEthernet2/0/0
    
     Peer Adj                 : 10.1.1.1
     Local ASN                : 200
     Remote ASN               : 100
     Local Router Id          : 3.3.3.3
     Remote Router Id         : 2.2.2.2
     Interface Identifier     : 6
     Local Interface Address  : 10.1.1.2
     Remote Interface Address : 10.1.1.1
     SID Label                : 31770
     Nexthop                  : 10.1.1.1
     Out Interface            : GigabitEthernet1/0/0
    
     Peer Adj                 : 10.2.1.1
     Local ASN                : 200
     Remote ASN               : 100
     Local Router Id          : 3.3.3.3
     Remote Router Id         : 2.2.2.2
     Interface Identifier     : 7
     Local Interface Address  : 10.2.1.2
     Remote Interface Address : 10.2.1.1
     SID Label                : 31769
     Nexthop                  : 10.2.1.1
     Out Interface            : GigabitEthernet2/0/0
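
    The output above shows the key BGP EPE distinction: the peer-node SID (32768) load-balances traffic over all links to the peer, while each peer-adjacency SID (32770, 32769) pins traffic to one specific inter-AS link. The following minimal Python sketch (illustrative only, not device code) models this mapping using the labels from ASBR1's output:

```python
# Illustrative model of ASBR1's BGP EPE SIDs (values from the display output).
# A peer-node SID maps to every link toward the peer; a peer-adjacency SID
# maps to exactly one link.
epe_sids = {
    32768: ["GigabitEthernet1/0/0", "GigabitEthernet2/0/0"],  # peer-node SID
    32770: ["GigabitEthernet1/0/0"],                          # peer-adjacency SID
    32769: ["GigabitEthernet2/0/0"],                          # peer-adjacency SID
}

def out_interfaces(label):
    """Return the candidate outbound interfaces for an EPE SID label."""
    return epe_sids[label]

# Peer-node SID: ECMP across both inter-AS links.
print(out_interfaces(32768))  # ['GigabitEthernet1/0/0', 'GigabitEthernet2/0/0']
# Peer-adjacency SID: a single, pinned inter-AS link.
print(out_interfaces(32770))  # ['GigabitEthernet1/0/0']
```

    This is why step 7 below can steer the E2E tunnel over one chosen inter-AS link by referencing a peer-adjacency SID rather than the peer-node SID.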

  6. Set binding SIDs for the SR-MPLS TE tunnels within AS domains.

    In the direction from PE1 to PE2:

    # Configure PE1.

    [~PE1] interface tunnel1
    [*PE1-Tunnel1] mpls te binding-sid label 1000
    [*PE1-Tunnel1] commit
    [~PE1-Tunnel1] quit

    # Configure ASBR2.

    [~ASBR2] interface tunnel2
    [*ASBR2-Tunnel2] mpls te binding-sid label 2000
    [*ASBR2-Tunnel2] commit
    [~ASBR2-Tunnel2] quit

    In the direction from PE2 to PE1:

    # Configure PE2.

    [~PE2] interface tunnel2
    [*PE2-Tunnel2] mpls te binding-sid label 3000
    [*PE2-Tunnel2] commit
    [~PE2-Tunnel2] quit

    # Configure ASBR1.

    [~ASBR1] interface tunnel1
    [*ASBR1-Tunnel1] mpls te binding-sid label 4000
    [*ASBR1-Tunnel1] commit
    [~ASBR1-Tunnel1] quit

  7. Configure a bidirectional E2E SR-MPLS TE tunnel between PE1 and PE2.

    In the direction from PE1 to PE2:

    # Configure PE1. There are multiple links between the ASBRs. You can select any of them. In this example, the link ASBR1 (GigabitEthernet 1/0/0) -> ASBR2 (GigabitEthernet 1/0/0) is selected.

    [~PE1] explicit-path path2pe2
    [*PE1-explicit-path-path2pe2] next sid label 1000 type binding-sid
    [*PE1-explicit-path-path2pe2] next sid label 32770 type adjacency
    [*PE1-explicit-path-path2pe2] next sid label 2000 type binding-sid
    [*PE1-explicit-path-path2pe2] quit
    [*PE1] interface tunnel3
    [*PE1-Tunnel3] ip address unnumbered interface loopback 1
    [*PE1-Tunnel3] tunnel-protocol mpls te
    [*PE1-Tunnel3] destination 4.4.4.4
    [*PE1-Tunnel3] mpls te tunnel-id 100
    [*PE1-Tunnel3] mpls te signal-protocol segment-routing
    [*PE1-Tunnel3] mpls te path explicit-path path2pe2
    [*PE1-Tunnel3] commit
    [~PE1-Tunnel3] quit

    In the direction from PE2 to PE1:

    # Configure PE2. There are multiple links between the ASBRs. You can select any of them. In this example, the link ASBR2 (GigabitEthernet 1/0/0) -> ASBR1 (GigabitEthernet 1/0/0) is selected.

    [~PE2] explicit-path path2pe1
    [*PE2-explicit-path-path2pe1] next sid label 3000 type binding-sid
    [*PE2-explicit-path-path2pe1] next sid label 31770 type adjacency
    [*PE2-explicit-path-path2pe1] next sid label 4000 type binding-sid
    [*PE2-explicit-path-path2pe1] quit
    [*PE2] interface tunnel3
    [*PE2-Tunnel3] ip address unnumbered interface loopback 1
    [*PE2-Tunnel3] tunnel-protocol mpls te
    [*PE2-Tunnel3] destination 1.1.1.1
    [*PE2-Tunnel3] mpls te tunnel-id 400
    [*PE2-Tunnel3] mpls te signal-protocol segment-routing
    [*PE2-Tunnel3] mpls te path explicit-path path2pe1
    [*PE2-Tunnel3] commit
    [~PE2-Tunnel3] quit
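
    The explicit paths above stitch the E2E tunnel from three segments: the local intra-AS tunnel's binding SID, the BGP EPE peer-adjacency SID across the inter-AS link, and the remote intra-AS tunnel's binding SID. Each node resolves only the binding SIDs it owns. The following hedged Python sketch (a simplified model, not device behavior) walks through PE1-to-PE2 label processing using the SIDs from this example:

```python
# Simplified model of binding-SID stitching for the PE1 -> PE2 direction.
# Each node maps only its OWN binding SIDs to the label stacks of its
# local intra-AS SR-MPLS TE tunnels (labels taken from this example).
local_binding_sids = {
    "PE1":   {1000: [330102]},   # Tunnel1: PE1 -> ASBR1 (IS-IS adjacency SID)
    "ASBR2": {2000: [330304]},   # Tunnel2: ASBR2 -> PE2 (IS-IS adjacency SID)
}

def impose(node, sid_list):
    """Replace a leading binding SID owned by this node with its label stack."""
    owned = local_binding_sids.get(node, {})
    if sid_list and sid_list[0] in owned:
        return owned[sid_list[0]] + sid_list[1:]
    return sid_list

# PE1 expands its local binding SID 1000; 32770 (BGP EPE peer-adjacency SID)
# and 2000 (ASBR2's binding SID) are left for downstream nodes.
stack = impose("PE1", [1000, 32770, 2000])
print(stack)  # [330102, 32770, 2000]

# After label 330102 delivers the packet to ASBR1 and label 32770 carries it
# over the chosen inter-AS link, ASBR2 resolves its own binding SID.
print(impose("ASBR2", [2000]))  # [330304]
```

    The PE2-to-PE1 direction works symmetrically with binding SIDs 3000 and 4000 and peer-adjacency SID 31770.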

  8. Verify the configuration.

    After completing the configuration, run the display mpls te tunnel-interface tunnel-name command. The command output shows that the E2E SR-MPLS TE tunnel named Tunnel3 is Up. For example:

    # Check the status on PE1.
    [~PE1] display mpls te tunnel-interface tunnel3
        Tunnel Name       : Tunnel3
        Signalled Tunnel Name: -
        Tunnel State Desc : CR-LSP is Up
        Tunnel Attributes   :     
        Active LSP          : Primary LSP
        Traffic Switch      : - 
        Session ID          : 100
        Ingress LSR ID      : 1.1.1.1               Egress LSR ID: 4.4.4.4
        Admin State         : UP                    Oper State   : UP
        Signaling Protocol  : Segment-Routing
        FTid                : 2
        Tie-Breaking Policy : None                  Metric Type  : TE
        Bfd Cap             : None               
        Reopt               : Disabled              Reopt Freq   : - 
        Inter-area Reopt    : Disabled              
        Auto BW             : Disabled              Threshold    : - 
        Current Collected BW: -                     Auto BW Freq : -
        Min BW              : -                     Max BW       : -
        Offload             : Disabled              Offload Freq : - 
        Low Value           : -                     High Value   : - 
        Readjust Value      : - 
        Offload Explicit Path Name: -
        Tunnel Group        : Primary
        Interfaces Protected: -
        Excluded IP Address : -
        Referred LSP Count  : 0
        Primary Tunnel      : -                     Pri Tunn Sum : -
        Backup Tunnel       : -
        Group Status        : Up                    Oam Status   : None
        IPTN InLabel        : -                     Tunnel BFD Status : -
        BackUp LSP Type     : None                  BestEffort   : -
        Secondary HopLimit  : -
        BestEffort HopLimit  : -
        Secondary Explicit Path Name: -
        Secondary Affinity Prop/Mask: 0x0/0x0
        BestEffort Affinity Prop/Mask: -  
        IsConfigLspConstraint: -
        Hot-Standby Revertive Mode:  Revertive
        Hot-Standby Overlap-path:  Disabled
        Hot-Standby Switch State:  CLEAR
        Bit Error Detection:  Disabled
        Bit Error Detection Switch Threshold:  -
        Bit Error Detection Resume Threshold:  -
        Ip-Prefix Name    : -
        P2p-Template Name : -
        PCE Delegate      : No                     LSP Control Status : Local control
        Path Verification : No
        Entropy Label     : -
        Associated Tunnel Group ID: -              Associated Tunnel Group Type: -
        Auto BW Remain Time   : -                  Reopt Remain Time     : - 
        Segment-Routing Remote Label   : -
        Metric Inherit IGP : None
        Binding Sid       : 2001                  Reverse Binding Sid : 2002 
        FRR Attr Source   : -                     Is FRR degrade down : -
        Color             : - 
                                                       
        Primary LSP ID      : 1.1.1.1:3
        LSP State           : UP                    LSP Type     : Primary
        Setup Priority      : 7                     Hold Priority: 7
        IncludeAll          : 0x0
        IncludeAny          : 0x0
        ExcludeAny          : 0x0
        Affinity Prop/Mask  : 0x0/0x0               Resv Style   :  SE
        SidProtectType      : - 
        Configured Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Actual Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Explicit Path Name  : path2pe2                         Hop Limit: -
        Record Route        : -                            Record Label : -
        Route Pinning       : -
        FRR Flag            : -
        IdleTime Remain     : -
        BFD Status          : -
        Soft Preemption     : -
        Reroute Flag        : -
        Pce Flag            : Normal
        Path Setup Type     : EXPLICIT
        Create Modify LSP Reason: -
    # Check the status on PE2.
    [~PE2] display mpls te tunnel-interface tunnel3
        Tunnel Name       : Tunnel3
        Signalled Tunnel Name: -
        Tunnel State Desc : CR-LSP is Up
        Tunnel Attributes   :     
        Active LSP          : Primary LSP
        Traffic Switch      : - 
        Session ID          : 400
        Ingress LSR ID      : 4.4.4.4               Egress LSR ID: 1.1.1.1
        Admin State         : UP                    Oper State   : UP
        Signaling Protocol  : Segment-Routing
        FTid                : 65
        Tie-Breaking Policy : None                  Metric Type  : TE
        Bfd Cap             : None                  
        Reopt               : Disabled              Reopt Freq   : -   
        Inter-area Reopt    : Disabled           
        Auto BW             : Disabled              Threshold    : - 
        Current Collected BW: -                     Auto BW Freq : -
        Min BW              : -                     Max BW       : -
        Offload             : Disabled              Offload Freq : - 
        Low Value           : -                     High Value   : - 
        Readjust Value      : - 
        Offload Explicit Path Name: -
        Tunnel Group        : Primary
        Interfaces Protected: -
        Excluded IP Address : -
        Referred LSP Count  : 0
        Primary Tunnel      : -                     Pri Tunn Sum : -
        Backup Tunnel       : -
        Group Status        : Up                    Oam Status   : None
        IPTN InLabel        : -                     Tunnel BFD Status : -
        BackUp LSP Type     : None                  BestEffort   : -
        Secondary HopLimit  : -
        BestEffort HopLimit  : -
        Secondary Explicit Path Name: -
        Secondary Affinity Prop/Mask: 0x0/0x0
        BestEffort Affinity Prop/Mask: -  
        IsConfigLspConstraint: -
        Hot-Standby Revertive Mode:  Revertive
        Hot-Standby Overlap-path:  Disabled
        Hot-Standby Switch State:  CLEAR
        Bit Error Detection:  Disabled
        Bit Error Detection Switch Threshold:  -
        Bit Error Detection Resume Threshold:  -
        Ip-Prefix Name    : -
        P2p-Template Name : -
        PCE Delegate      : No                     LSP Control Status : Local control
        Path Verification : No
        Entropy Label     : -
        Associated Tunnel Group ID: -              Associated Tunnel Group Type: -
        Auto BW Remain Time   : -                  Reopt Remain Time     : - 
        Segment-Routing Remote Label   : -
        Metric Inherit IGP : None
        Binding Sid       : 2002                  Reverse Binding Sid : 2001 
        FRR Attr Source   : -                     Is FRR degrade down : -
                                 
        Primary LSP ID      : 4.4.4.4:4
        LSP State           : UP                    LSP Type     : Primary
        Setup Priority      : 7                     Hold Priority: 7
        IncludeAll          : 0x0
        IncludeAny          : 0x0
        ExcludeAny          : 0x0
        Affinity Prop/Mask  : 0x0/0x0               Resv Style   :  SE
        Configured Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Actual Bandwidth Information:
        CT0 Bandwidth(Kbit/sec): 0               CT1 Bandwidth(Kbit/sec): 0
        CT2 Bandwidth(Kbit/sec): 0               CT3 Bandwidth(Kbit/sec): 0
        CT4 Bandwidth(Kbit/sec): 0               CT5 Bandwidth(Kbit/sec): 0
        CT6 Bandwidth(Kbit/sec): 0               CT7 Bandwidth(Kbit/sec): 0
        Explicit Path Name  : path2pe1                         Hop Limit: -
        Record Route        : -                            Record Label : -
        Route Pinning       : -
        FRR Flag            : -
        IdleTime Remain     : -
        BFD Status          : -
        Soft Preemption     : -
        Reroute Flag        : -
        Pce Flag            : Normal
        Path Setup Type     : EXPLICIT
        Create Modify LSP Reason: -

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    mpls lsr-id 1.1.1.1
    #
    mpls
     mpls te
    #
    explicit-path path2asbr1
     next sid label 330102 type adjacency index 1
    #
    explicit-path path2pe2
     next sid label 1000 type binding-sid index 1
     next sid label 32770 type adjacency index 2
     next sid label 2000 type binding-sid index 3
    #               
    segment-routing 
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0001.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.0.1.1 255.255.255.0
     isis enable 1  
     mpls
     mpls te
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 16100
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 2.2.2.2
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
     mpls te path explicit-path path2asbr1
     mpls te binding-sid label 1000
    #
    interface Tunnel3
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 4.4.4.4
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 100
     mpls te path explicit-path path2pe2
    #
    return
  • ASBR1 configuration file

    #
    sysname ASBR1
    #
    mpls lsr-id 2.2.2.2
    #
    mpls
     mpls te
    #
    explicit-path path2pe1
     next sid label 330201 type adjacency index 1
    #               
    segment-routing 
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.1.1.1 255.255.255.0
     mpls
     mpls te
    #
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.2.1.1 255.255.255.0
     mpls
     mpls te
    #
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.0.1.2 255.255.255.0
     isis enable 1  
     mpls
     mpls te
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 16200
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 1.1.1.1
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
     mpls te path explicit-path path2pe1
     mpls te binding-sid label 4000
    #
    bgp 100
     peer 3.3.3.3 as-number 200
     peer 3.3.3.3 ebgp-max-hop 2
     peer 3.3.3.3 connect-interface LoopBack1
     peer 3.3.3.3 egress-engineering
     #
     ipv4-family unicast
      network 2.2.2.2 255.255.255.255
      network 10.1.1.0 255.255.255.0
      network 10.2.1.0 255.255.255.0
      import-route isis 1
     #
     link-state-family unicast
    #
    ip route-static 3.3.3.3 255.255.255.255 GigabitEthernet1/0/0 10.1.1.2 description asbr1toasbr2
    ip route-static 3.3.3.3 255.255.255.255 GigabitEthernet2/0/0 10.2.1.2 description asbr1toasbr2
    #
    return
  • ASBR2 configuration file

    #
    sysname ASBR2
    #
    mpls lsr-id 3.3.3.3
    #
    mpls
     mpls te
    #
    explicit-path path2pe2
     next sid label 330304 type adjacency index 1
    #               
    segment-routing 
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0003.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.1.1.2 255.255.255.0
     mpls
     mpls te
    #
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.2.1.2 255.255.255.0
     mpls
     mpls te
    #
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.9.1.1 255.255.255.0
     isis enable 1  
     mpls
     mpls te
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 16300
    #
    interface Tunnel2
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 4.4.4.4
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
     mpls te path explicit-path path2pe2
     mpls te binding-sid label 2000
    #
    bgp 200
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 ebgp-max-hop 2
     peer 2.2.2.2 connect-interface LoopBack1
     peer 2.2.2.2 egress-engineering
     #
     ipv4-family unicast
      network 3.3.3.3 255.255.255.255
      network 10.1.1.0 255.255.255.0
      network 10.2.1.0 255.255.255.0
      import-route isis 1
     #
     link-state-family unicast
    #
    ip route-static 2.2.2.2 255.255.255.255 GigabitEthernet1/0/0 10.1.1.1 description asbr2toasbr1
    ip route-static 2.2.2.2 255.255.255.255 GigabitEthernet2/0/0 10.2.1.1 description asbr2toasbr1
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    mpls lsr-id 4.4.4.4
    #
    mpls
     mpls te
    #
    explicit-path path2asbr2
     next sid label 330403 type adjacency index 1
    #
    explicit-path path2pe1
     next sid label 3000 type binding-sid index 1
     next sid label 31770 type adjacency index 2
     next sid label 4000 type binding-sid index 3
    #               
    segment-routing 
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0004.00
     traffic-eng level-2
     segment-routing mpls
    #
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.9.1.2 255.255.255.0
     isis enable 1  
     mpls
     mpls te
    #
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
     isis enable 1
     isis prefix-sid absolute 16400
    #
    interface Tunnel2
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 3.3.3.3
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
     mpls te path explicit-path path2asbr2
     mpls te binding-sid label 3000
    #
    interface Tunnel3
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 1.1.1.1
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 400
     mpls te path explicit-path path2pe1
    #
    return

Example for Configuring CBTS in an L3VPN over SR-MPLS TE Scenario

This section provides an example for configuring class-of-service-based tunnel selection (CBTS) in an L3VPN over SR-MPLS TE scenario.

Networking Requirements

On the network shown in Figure 1-2678, CE1 and CE2 belong to the same L3VPN and access the public network through PE1 and PE2, respectively. Various types of services are transmitted between CE1 and CE2. If a large volume of common traffic shares tunnels with important services, the transmission quality of the important services degrades. To prevent this problem, configure the CBTS function, which steers traffic of a specified service class to a designated tunnel.

In this example, Tunnel1 and Tunnel2 on PE1 transmit important services, and Tunnel3 transmits other services.
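
The CBTS idea can be summarized as a mapping from service class to tunnel. The following Python sketch is purely illustrative (the class names are hypothetical placeholders, not the device's actual priority values) and models the tunnel assignment described above:

```python
# Illustrative model of CBTS tunnel selection. Service-class names here
# ("important", "default") are hypothetical labels, not device keywords.
tunnel_by_service_class = {
    "important-1": "Tunnel1",   # important services
    "important-2": "Tunnel2",   # important services
    "default":     "Tunnel3",   # all other services
}

def select_tunnel(service_class):
    """Steer traffic of a given class to its tunnel; unmatched classes
    fall back to the default tunnel."""
    return tunnel_by_service_class.get(service_class,
                                       tunnel_by_service_class["default"])

print(select_tunnel("important-1"))  # Tunnel1
print(select_tunnel("web-browsing"))  # Tunnel3 (fallback)
```

On the device, this mapping is realized by multi-field classification marking the service class and by the tunnel policy applied to the VPN instance, as configured in the procedure below.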

If the CBTS function is configured, you are advised not to configure any of the following functions:

  • Mixed load balancing
  • Seamless MPLS
  • Dynamic load balancing
Figure 1-2678 CBTS networking in an L3VPN over SR-MPLS TE scenario

Interfaces 1 and 2 in this example represent GE 1/0/0 and GE 2/0/0, respectively.
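Conceptually, CBTS tags packets with a service class (through multi-field classification) and then selects, among the tunnels to the same destination, the one configured to carry that class; classes with no matching tunnel fall back to the tunnel configured with the default service class. The following Python sketch illustrates this selection logic only; the TUNNELS mapping and function name are hypothetical, not device code:

```python
# Illustrative sketch of CBTS tunnel selection (not device code).
# The mapping mirrors this example: Tunnel1/Tunnel2 carry important
# services (AF1/AF2) and Tunnel3 carries everything else ("default").
TUNNELS = {
    "Tunnel1": {"af1"},
    "Tunnel2": {"af2"},
    "Tunnel3": {"default"},
}

def select_tunnel(service_class: str) -> str:
    """Return the tunnel configured to carry the given service class."""
    # First look for a tunnel explicitly configured with this class.
    for tunnel, classes in TUNNELS.items():
        if service_class in classes:
            return tunnel
    # Otherwise fall back to the tunnel configured with "default".
    for tunnel, classes in TUNNELS.items():
        if "default" in classes:
            return tunnel
    raise ValueError("no default tunnel configured")
```

With this mapping, AF1 and AF2 traffic is pinned to Tunnel1 and Tunnel2, while any other class (for example, best effort) falls through to Tunnel3.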


Configuration Roadmap

The configuration roadmap is as follows:

  1. Assign an IP address and its mask to each interface and configure a loopback interface address as an LSR ID on each node.

  2. Enable IS-IS globally, configure network entity titles (NETs), specify the cost type, and enable IS-IS TE. Enable IS-IS on interfaces, including loopback interfaces.

  3. Set MPLS LSR IDs and enable MPLS and MPLS TE globally.

  4. Enable MPLS and MPLS TE on each interface.

  5. Configure the maximum reservable link bandwidth and BC0 for the outbound interface of each involved tunnel.

  6. Create a tunnel interface on the ingress and configure the IP address, tunnel protocol, destination IP address, and tunnel bandwidth.

  7. Configure multi-field classification on PE1.

  8. Configure a VPN instance and apply a tunnel policy on PE1.

Data Preparation

To complete the configuration, you need the following data:

  • IS-IS area ID, originating system ID, and IS-IS level on each node

  • Maximum reservable link bandwidth of each tunnel

  • Tunnel interface number, IP address, destination IP address, tunnel ID, and tunnel bandwidth

  • Traffic classifier name, traffic behavior name, and traffic policy name

Procedure

  1. Assign an IP address and its mask to each interface.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.2.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 3.3.3.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 10.2.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 10.3.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 4.4.4.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.3.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

  2. Configure IS-IS to advertise routes.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] network-entity 00.0005.0000.0000.0001.00
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] quit
    [*PE1] interface gigabitethernet 1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] commit
    [~PE1-LoopBack1] quit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] network-entity 00.0005.0000.0000.0002.00
    [*P1-isis-1] is-level level-2
    [*P1-isis-1] quit
    [*P1] interface gigabitethernet 1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet 2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] commit
    [~P1-LoopBack1] quit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] network-entity 00.0005.0000.0000.0003.00
    [*P2-isis-1] is-level level-2
    [*P2-isis-1] quit
    [*P2] interface gigabitethernet 1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet 2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] commit
    [~P2-LoopBack1] quit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] network-entity 00.0005.0000.0000.0004.00
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] quit
    [*PE2] interface gigabitethernet 1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] commit
    [~PE2-LoopBack1] quit

    After completing the preceding configurations, run the display ip routing-table command on each node to check that both PEs have learned routes from each other. The following example uses the command output on PE1.

    [~PE1] display ip routing-table
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : _public_
             Destinations : 13       Routes : 13        
    
    Destination/Mask    Proto     Pre  Cost        Flags NextHop         Interface
    
            1.1.1.9/32  Direct    0    0             D  127.0.0.1       LoopBack1
            2.2.2.9/32  ISIS-L2   15   10            D  10.1.1.2        GigabitEthernet1/0/0
            3.3.3.9/32  ISIS-L2   15   20            D  10.1.1.2        GigabitEthernet1/0/0
            4.4.4.9/32  ISIS-L2   15   30            D  10.1.1.2        GigabitEthernet1/0/0
           10.1.1.0/24  Direct    0    0             D  10.1.1.1        GigabitEthernet1/0/0
           10.1.1.1/32  Direct    0    0             D  127.0.0.1       GigabitEthernet1/0/0
         10.1.1.255/32  Direct    0    0             D  127.0.0.1       GigabitEthernet1/0/0
           10.2.1.0/24  ISIS-L2   15   20            D  10.1.1.2        GigabitEthernet1/0/0
           10.3.1.0/24  ISIS-L2   15   30            D  10.1.1.2        GigabitEthernet1/0/0
          127.0.0.0/8   Direct    0    0             D  127.0.0.1       InLoopBack0
          127.0.0.1/32  Direct    0    0             D  127.0.0.1       InLoopBack0
    127.255.255.255/32  Direct    0    0             D  127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct    0    0             D  127.0.0.1       InLoopBack0 

  3. Establish an MP-IBGP peer relationship between PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 4.4.4.9 as-number 100
    [*PE1-bgp] peer 4.4.4.9 connect-interface loopback 1
    [*PE1-bgp] ipv4-family vpnv4
    [*PE1-bgp-af-vpnv4] peer 4.4.4.9 enable
    [*PE1-bgp-af-vpnv4] commit
    [~PE1-bgp-af-vpnv4] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv4-family vpnv4
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
    [*PE2-bgp-af-vpnv4] commit
    [~PE2-bgp-af-vpnv4] quit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp peer or display bgp vpnv4 all peer command on the PEs to check the BGP peer relationship between the PEs. If Established is displayed in the State field of the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      4.4.4.9         4   100        2        6     0     00:11:25   Established   0
    [~PE1] display bgp vpnv4 all peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      4.4.4.9         4   100   19      21         0     00:19:43   Established   0

  4. Configure basic MPLS functions and enable MPLS TE.

    # Enable MPLS and MPLS TE both globally and on specific interfaces for nodes along each tunnel.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] mpls te
    [*PE1-mpls] quit
    [*PE1] interface gigabitethernet 1/0/0
    [*PE1-GigabitEthernet1/0/0] mpls
    [*PE1-GigabitEthernet1/0/0] mpls te
    [*PE1-GigabitEthernet1/0/0] commit
    [~PE1-GigabitEthernet1/0/0] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] mpls te
    [*P1-mpls] quit
    [*P1] interface gigabitethernet 1/0/0
    [*P1-GigabitEthernet1/0/0] mpls
    [*P1-GigabitEthernet1/0/0] mpls te
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet 2/0/0
    [*P1-GigabitEthernet2/0/0] mpls
    [*P1-GigabitEthernet2/0/0] mpls te
    [*P1-GigabitEthernet2/0/0] commit
    [~P1-GigabitEthernet2/0/0] quit

    # Configure P2.

    [~P2] mpls lsr-id 3.3.3.9
    [*P2] mpls
    [*P2-mpls] mpls te
    [*P2-mpls] quit
    [*P2] interface gigabitethernet 1/0/0
    [*P2-GigabitEthernet1/0/0] mpls
    [*P2-GigabitEthernet1/0/0] mpls te
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet 2/0/0
    [*P2-GigabitEthernet2/0/0] mpls
    [*P2-GigabitEthernet2/0/0] mpls te
    [*P2-GigabitEthernet2/0/0] commit
    [~P2-GigabitEthernet2/0/0] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 4.4.4.9
    [*PE2] mpls
    [*PE2-mpls] mpls te
    [*PE2-mpls] quit
    [*PE2] interface gigabitethernet 1/0/0
    [*PE2-GigabitEthernet1/0/0] mpls
    [*PE2-GigabitEthernet1/0/0] mpls te
    [*PE2-GigabitEthernet1/0/0] commit
    [~PE2-GigabitEthernet1/0/0] quit

  5. Configure MPLS TE bandwidth attributes for links.

    # Configure the maximum reservable link bandwidth and BC0 for the outbound interfaces of each tunnel.

    # Configure PE1.

    [~PE1] interface gigabitethernet 1/0/0
    [~PE1-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
    [*PE1-GigabitEthernet1/0/0] mpls te bandwidth bc0 100000
    [*PE1-GigabitEthernet1/0/0] commit
    [~PE1-GigabitEthernet1/0/0] quit

    # Configure P1.

    [~P1] interface gigabitethernet 1/0/0
    [~P1-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
    [*P1-GigabitEthernet1/0/0] mpls te bandwidth bc0 100000
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet 2/0/0
    [*P1-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 100000
    [*P1-GigabitEthernet2/0/0] mpls te bandwidth bc0 100000
    [*P1-GigabitEthernet2/0/0] commit
    [~P1-GigabitEthernet2/0/0] quit

    # Configure P2.

    [~P2] interface gigabitethernet 1/0/0
    [~P2-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
    [*P2-GigabitEthernet1/0/0] mpls te bandwidth bc0 100000
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet 2/0/0
    [*P2-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 100000
    [*P2-GigabitEthernet2/0/0] mpls te bandwidth bc0 100000
    [*P2-GigabitEthernet2/0/0] commit
    [~P2-GigabitEthernet2/0/0] quit

    # Configure PE2.

    [~PE2] interface gigabitethernet 1/0/0
    [~PE2-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
    [*PE2-GigabitEthernet1/0/0] mpls te bandwidth bc0 100000
    [*PE2-GigabitEthernet1/0/0] commit
    [~PE2-GigabitEthernet1/0/0] quit

  6. Configure QoS on each PE.

    # Configure multi-field classification and set a service class for each type of service packet on PE1.

    [~PE1] acl 2001
    [*PE1-acl4-basic-2001] rule 10 permit source 10.40.0.0 0.0.255.255
    [*PE1-acl4-basic-2001] quit
    [*PE1] acl 2002
    [*PE1-acl4-basic-2002] rule 20 permit source 10.50.0.0 0.0.255.255
    [*PE1-acl4-basic-2002] quit
    [*PE1] traffic classifier service1
    [*PE1-classifier-service1] if-match acl 2001
    [*PE1-classifier-service1] commit
    [~PE1-classifier-service1] quit
    [~PE1] traffic behavior behavior1
    [*PE1-behavior-behavior1] service-class af1 color green
    [*PE1-behavior-behavior1] commit
    [~PE1-behavior-behavior1] quit
    [~PE1] traffic classifier service2
    [*PE1-classifier-service2] if-match acl 2002
    [*PE1-classifier-service2] commit
    [~PE1-classifier-service2] quit
    [~PE1] traffic behavior behavior2
    [*PE1-behavior-behavior2] service-class af2 color green
    [*PE1-behavior-behavior2] commit
    [~PE1-behavior-behavior2] quit
    [~PE1] traffic policy policy1
    [*PE1-trafficpolicy-policy1] classifier service1 behavior behavior1
    [*PE1-trafficpolicy-policy1] classifier service2 behavior behavior2
    [*PE1-trafficpolicy-policy1] commit
    [~PE1-trafficpolicy-policy1] quit
    [~PE1] interface gigabitethernet 2/0/0
    [~PE1-GigabitEthernet2/0/0] traffic-policy policy1 inbound
    [*PE1-GigabitEthernet2/0/0] commit
    [~PE1-GigabitEthernet2/0/0] quit
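    The basic ACLs above match source addresses using wildcard masks, in which a 1 bit means "ignore this bit" (the inverse of a netmask). The following Python sketch (illustrative only; the function names are hypothetical, not device code) shows how rule 10 and rule 20 map source addresses to the AF1 and AF2 service classes:

```python
import ipaddress

def acl_match(addr: str, base: str, wildcard: str) -> bool:
    """Basic-ACL style match: bits set to 1 in the wildcard are ignored."""
    a = int(ipaddress.IPv4Address(addr))
    b = int(ipaddress.IPv4Address(base))
    w = int(ipaddress.IPv4Address(wildcard))
    return (a & ~w) & 0xFFFFFFFF == (b & ~w) & 0xFFFFFFFF

def classify(src: str) -> str:
    """Mirror PE1's multi-field classification (illustrative only)."""
    if acl_match(src, "10.40.0.0", "0.0.255.255"):
        return "af1"   # ACL 2001 -> behavior1 (service-class af1)
    if acl_match(src, "10.50.0.0", "0.0.255.255"):
        return "af2"   # ACL 2002 -> behavior2 (service-class af2)
    return "be"        # unclassified traffic keeps best effort
```

    For example, any source in 10.40.0.0/16 is marked AF1 and therefore carried by Tunnel1, while sources in 10.50.0.0/16 are marked AF2 and carried by Tunnel2.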

  7. Enable SR and configure an explicit path.

    In this example, the explicit path is used to establish an SR-MPLS TE tunnel.

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-2
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 16000 19000
    [*PE1-isis-1] commit
    [~PE1-isis-1] quit
    [~PE1] interface LoopBack 1
    [~PE1-LoopBack1] isis prefix-sid index 10
    [*PE1-LoopBack1] commit
    [~PE1-LoopBack1] quit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] traffic-eng level-2
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] segment-routing global-block 16000 19000
    [*P1-isis-1] commit
    [~P1-isis-1] quit
    [~P1] interface LoopBack 1
    [~P1-LoopBack1] isis prefix-sid index 20
    [*P1-LoopBack1] commit
    [~P1-LoopBack1] quit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] traffic-eng level-2
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] segment-routing global-block 16000 19000
    [*P2-isis-1] commit
    [~P2-isis-1] quit
    [~P2] interface LoopBack 1
    [~P2-LoopBack1] isis prefix-sid index 30
    [*P2-LoopBack1] commit
    [~P2-LoopBack1] quit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-2
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 16000 19000
    [*PE2-isis-1] commit
    [~PE2-isis-1] quit
    [~PE2] interface LoopBack 1
    [~PE2-LoopBack1] isis prefix-sid index 40
    [*PE2-LoopBack1] commit
    [~PE2-LoopBack1] quit

    # Display the node SID of each node. The following example uses the command output on PE1.

    [~PE1] display segment-routing prefix mpls forwarding 
    
                       Segment Routing Prefix MPLS Forwarding Information
                 --------------------------------------------------------------
                 Role : I-Ingress, T-Transit, E-Egress, I&T-Ingress And Transit
    
    Prefix             Label      OutLabel   Interface         NextHop          Role  MPLSMtu   Mtu     State          
    -----------------------------------------------------------------------------------------------------------------
    1.1.1.9/32         16010      NULL       Loop1             127.0.0.1        E     ---       1500    Active          
    2.2.2.9/32         16020      3          GE1/0/0           10.1.1.2         I&T   ---       1500    Active          
    3.3.3.9/32         16030      16030      GE1/0/0           10.1.1.2         I&T   ---       1500    Active          
    4.4.4.9/32         16040      16040      GE1/0/0           10.1.1.2         I&T   ---       1500    Active
    
    Total information(s): 4          
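    The Label column follows from the SRGB and each node's prefix SID index: the label equals the SRGB base plus the index, so PE1's index 10 yields 16000 + 10 = 16010 and PE2's index 40 yields 16040. A minimal Python sketch of this computation (illustrative, not device code):

```python
# SRGB from "segment-routing global-block 16000 19000" in this example.
SRGB_BASE = 16000
SRGB_END = 19000

def prefix_sid_label(index: int) -> int:
    """Prefix SID label = SRGB base + SID index; must stay within the SRGB."""
    label = SRGB_BASE + index
    if not SRGB_BASE <= label <= SRGB_END:
        raise ValueError("SID index falls outside the SRGB")
    return label
```

    Because every node in the IS-IS domain uses the same SRGB here, each node derives identical labels for the same prefix SID indexes, which is what allows the explicit paths below to list the labels 16010 through 16040 directly.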

    # Configure an explicit path from PE1 to PE2.

    [~PE1] explicit-path pe1_pe2
    [*PE1-explicit-path-pe1_pe2] next sid label 16020 type prefix
    [*PE1-explicit-path-pe1_pe2] next sid label 16030 type prefix
    [*PE1-explicit-path-pe1_pe2] next sid label 16040 type prefix
    [*PE1-explicit-path-pe1_pe2] commit
    [~PE1-explicit-path-pe1_pe2] quit

    # Configure an explicit path from PE2 to PE1.

    [~PE2] explicit-path pe2_pe1
    [*PE2-explicit-path-pe2_pe1] next sid label 16030 type prefix
    [*PE2-explicit-path-pe2_pe1] next sid label 16020 type prefix
    [*PE2-explicit-path-pe2_pe1] next sid label 16010 type prefix
    [*PE2-explicit-path-pe2_pe1] commit
    [~PE2-explicit-path-pe2_pe1] quit

  8. Configure MPLS TE tunnel interfaces.

    # On the ingress of each tunnel, create a tunnel interface and set the IP address, tunnel protocol, destination IP address, tunnel ID, dynamic signaling protocol, tunnel bandwidth, and service classes for packets transmitted on the tunnel.

    Run the mpls te service-class { service-class & <1-8> | default } command to set a service class for packets carried by each tunnel.

    # Configure PE1.

    [~PE1] interface Tunnel1
    [*PE1-Tunnel1] ip address unnumbered interface loopback 1
    [*PE1-Tunnel1] tunnel-protocol mpls te
    [*PE1-Tunnel1] destination 4.4.4.9
    [*PE1-Tunnel1] mpls te tunnel-id 1
    [*PE1-Tunnel1] mpls te bandwidth ct0 20000
    [*PE1-Tunnel1] mpls te signal-protocol segment-routing
    [*PE1-Tunnel1] mpls te path explicit-path pe1_pe2
    [*PE1-Tunnel1] mpls te service-class af1
    [*PE1-Tunnel1] commit
    [~PE1-Tunnel1] quit
    [~PE1] interface Tunnel2
    [*PE1-Tunnel2] ip address unnumbered interface loopback 1
    [*PE1-Tunnel2] tunnel-protocol mpls te
    [*PE1-Tunnel2] destination 4.4.4.9
    [*PE1-Tunnel2] mpls te tunnel-id 2
    [*PE1-Tunnel2] mpls te bandwidth ct0 20000
    [*PE1-Tunnel2] mpls te signal-protocol segment-routing
    [*PE1-Tunnel2] mpls te path explicit-path pe1_pe2
    [*PE1-Tunnel2] mpls te service-class af2
    [*PE1-Tunnel2] commit
    [~PE1-Tunnel2] quit
    [~PE1] interface Tunnel3
    [*PE1-Tunnel3] ip address unnumbered interface loopback 1
    [*PE1-Tunnel3] tunnel-protocol mpls te
    [*PE1-Tunnel3] destination 4.4.4.9
    [*PE1-Tunnel3] mpls te tunnel-id 3
    [*PE1-Tunnel3] mpls te bandwidth ct0 20000
    [*PE1-Tunnel3] mpls te signal-protocol segment-routing
    [*PE1-Tunnel3] mpls te path explicit-path pe1_pe2
    [*PE1-Tunnel3] mpls te service-class default
    [*PE1-Tunnel3] commit
    [~PE1-Tunnel3] quit
    [~PE1] tunnel-policy policy1
    [*PE1-tunnel-policy-policy1] tunnel select-seq sr-te load-balance-number 3
    [*PE1-tunnel-policy-policy1] commit
    [~PE1-tunnel-policy-policy1] quit

    # Configure PE2.

    [~PE2] interface Tunnel1
    [*PE2-Tunnel1] ip address unnumbered interface loopback 1
    [*PE2-Tunnel1] tunnel-protocol mpls te
    [*PE2-Tunnel1] destination 1.1.1.9
    [*PE2-Tunnel1] mpls te tunnel-id 1
    [*PE2-Tunnel1] mpls te bandwidth ct0 20000
    [*PE2-Tunnel1] mpls te signal-protocol segment-routing
    [*PE2-Tunnel1] mpls te path explicit-path pe2_pe1
    [*PE2-Tunnel1] commit
    [~PE2-Tunnel1] quit
    [~PE2] tunnel-policy policy1
    [*PE2-tunnel-policy-policy1] tunnel select-seq sr-te load-balance-number 3
    [*PE2-tunnel-policy-policy1] commit
    [~PE2-tunnel-policy-policy1] quit

  9. Configure L3VPN access on each PE.

    # Configure PE1.

    [~PE1] ip vpn-instance vpn1
    [*PE1-vpn-instance-vpn1] ipv4-family
    [*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 111:1 both
    [*PE1-vpn-instance-vpn1-af-ipv4] tnl-policy policy1
    [*PE1-vpn-instance-vpn1-af-ipv4] commit
    [~PE1-vpn-instance-vpn1-af-ipv4] quit
    [~PE1-vpn-instance-vpn1] quit
    [~PE1] interface gigabitethernet 2/0/0
    [~PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpn1
    [*PE1-GigabitEthernet2/0/0] ip address 10.10.1.1 24
    [*PE1-GigabitEthernet2/0/0] commit
    [~PE1-GigabitEthernet2/0/0] quit

    # Configure PE2.

    [~PE2] ip vpn-instance vpn2
    [*PE2-vpn-instance-vpn2] ipv4-family
    [*PE2-vpn-instance-vpn2-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-vpn2-af-ipv4] vpn-target 111:1 both
    [*PE2-vpn-instance-vpn2-af-ipv4] tnl-policy policy1
    [*PE2-vpn-instance-vpn2-af-ipv4] commit
    [~PE2-vpn-instance-vpn2-af-ipv4] quit
    [~PE2-vpn-instance-vpn2] quit
    [~PE2] interface gigabitethernet 2/0/0
    [~PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpn2
    [*PE2-GigabitEthernet2/0/0] ip address 10.11.1.1 24
    [*PE2-GigabitEthernet2/0/0] commit
    [~PE2-GigabitEthernet2/0/0] quit

  10. Establish an EBGP peer relationship between each PE and its connected CE.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface gigabitethernet1/0/0
    [~CE1-GigabitEthernet1/0/0] ip address 10.10.1.2 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.10.1.1 as-number 100
    [*CE1-bgp] import-route direct
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see Configuration Files in this section.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv4-family vpn-instance vpn1
    [*PE1-bgp-vpn1] peer 10.10.1.2 as-number 65410
    [*PE1-bgp-vpn1] commit
    [~PE1-bgp-vpn1] quit
    [~PE1-bgp] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

    After the configuration is complete, run the display bgp vpnv4 vpn-instance peer command on the PEs to check the BGP peer relationships between the PEs and CEs. If Established is displayed in the State field of the command output, the BGP peer relationships have been established successfully.

  11. Verify the configuration.

    # Run the ping command on PE1 to check the connectivity of each SR-MPLS TE tunnel. For example:

    [~PE1] ping lsp segment-routing te Tunnel 1
      LSP PING FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 : 100  data bytes, press CTRL_C to break
        Reply from 4.4.4.9: bytes=100 Sequence=1 time=17 ms
        Reply from 4.4.4.9: bytes=100 Sequence=2 time=19 ms
        Reply from 4.4.4.9: bytes=100 Sequence=3 time=16 ms
        Reply from 4.4.4.9: bytes=100 Sequence=4 time=13 ms
        Reply from 4.4.4.9: bytes=100 Sequence=5 time=20 ms
    
      --- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 13/17/20 ms
    [~PE1] ping lsp segment-routing te Tunnel 2
      LSP PING FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel2 : 100  data bytes, press CTRL_C to break
        Reply from 4.4.4.9: bytes=100 Sequence=1 time=20 ms
        Reply from 4.4.4.9: bytes=100 Sequence=2 time=18 ms
        Reply from 4.4.4.9: bytes=100 Sequence=3 time=14 ms
        Reply from 4.4.4.9: bytes=100 Sequence=4 time=20 ms
        Reply from 4.4.4.9: bytes=100 Sequence=5 time=21 ms
    
      --- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 14/18/21 ms
    [~PE1] ping lsp segment-routing te Tunnel 3
      LSP PING FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel3 : 100  data bytes, press CTRL_C to break
        Reply from 4.4.4.9: bytes=100 Sequence=1 time=14 ms
        Reply from 4.4.4.9: bytes=100 Sequence=2 time=16 ms
        Reply from 4.4.4.9: bytes=100 Sequence=3 time=13 ms
        Reply from 4.4.4.9: bytes=100 Sequence=4 time=12 ms
        Reply from 4.4.4.9: bytes=100 Sequence=5 time=16 ms
    
      --- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel3 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 12/14/16 ms

Configuration Files

  • PE1 configuration file
    #
    sysname PE1
    #
    ip vpn-instance vpn1
     ipv4-family 
      route-distinguisher 100:1
      tnl-policy policy1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.9
    #
    mpls
     mpls te
    #
    explicit-path pe1_pe2
     next sid label 16020 type prefix index 1
     next sid label 16030 type prefix index 2
     next sid label 16040 type prefix index 3
    #
    acl number 2001
     rule 10 permit source 10.40.0.0 0.0.255.255
    #
    acl number 2002
     rule 20 permit source 10.50.0.0 0.0.255.255
    #
    traffic classifier service1 operator or
     if-match acl 2001
    #
    traffic classifier service2 operator or 
     if-match acl 2002
    #
    traffic behavior behavior1
     service-class af1 color green
    #
    traffic behavior behavior2
     service-class af2 color green
    #
    traffic policy policy1
     classifier service1 behavior behavior1 precedence 1
     classifier service2 behavior behavior2 precedence 2 
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.0005.0000.0000.0001.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 16000 19000
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 100000
     mpls te bandwidth bc0 100000
    #
    interface GigabitEthernet2/0/0
     undo shutdown 
     ip binding vpn-instance vpn1
     ip address 10.10.1.1 255.255.255.0
     traffic-policy policy1 inbound
    #
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1
     isis prefix-sid index 10
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 4.4.4.9
     mpls te signal-protocol segment-routing
     mpls te bandwidth ct0 20000
     mpls te tunnel-id 1
     mpls te path explicit-path pe1_pe2
     mpls te service-class af1
    #
    interface Tunnel2
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 4.4.4.9
     mpls te signal-protocol segment-routing
     mpls te bandwidth ct0 20000
     mpls te tunnel-id 2
     mpls te path explicit-path pe1_pe2
     mpls te service-class af2
    #
    interface Tunnel3
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 4.4.4.9
     mpls te signal-protocol segment-routing
     mpls te bandwidth ct0 20000
     mpls te tunnel-id 3
     mpls te path explicit-path pe1_pe2
     mpls te service-class default
    #
    bgp 100
     peer 4.4.4.9 as-number 100
     peer 4.4.4.9 connect-interface LoopBack1
     #
     ipv4-family unicast 
      undo synchronization 
      peer 4.4.4.9 enable
     #
     ipv4-family vpnv4 
      policy vpn-target 
      peer 4.4.4.9 enable
     #
     ipv4-family vpn-instance vpn1
      peer 10.10.1.2 as-number 65410
    #
    tunnel-policy policy1
     tunnel select-seq sr-te load-balance-number 3
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.0005.0000.0000.0002.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 16000 19000
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 100000
     mpls te bandwidth bc0 100000
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
     isis enable 1
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 100000
     mpls te bandwidth bc0 100000
    #
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1
     isis prefix-sid index 20
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 3.3.3.9
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.0005.0000.0000.0003.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 16000 19000
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
     isis enable 1
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 100000
     mpls te bandwidth bc0 100000
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.3.1.1 255.255.255.0
     isis enable 1
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 100000
     mpls te bandwidth bc0 100000
    #
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1
     isis prefix-sid index 30
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpn2
     ipv4-family 
      route-distinguisher 200:1
      tnl-policy policy1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 4.4.4.9
    #
    mpls
     mpls te
    #
    explicit-path pe2_pe1
     next sid label 16030 type prefix index 1
     next sid label 16020 type prefix index 2
     next sid label 16010 type prefix index 3
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 00.0005.0000.0000.0004.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 16000 19000
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.3.1.2 255.255.255.0
     isis enable 1
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 100000
     mpls te bandwidth bc0 100000
    #
    interface GigabitEthernet2/0/0
     undo shutdown 
     ip binding vpn-instance vpn2
     ip address 10.11.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1
     isis prefix-sid index 40
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 1.1.1.9
     mpls te signal-protocol segment-routing
     mpls te bandwidth ct0 20000
     mpls te tunnel-id 1
     mpls te path explicit-path pe2_pe1
    #
    bgp 100
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #
     ipv4-family unicast 
      undo synchronization 
      peer 1.1.1.9 enable
     #
     ipv4-family vpnv4 
      policy vpn-target 
      peer 1.1.1.9 enable
     #
     ipv4-family vpn-instance vpn2
      peer 10.11.1.2 as-number 65420
    #
    tunnel-policy policy1
     tunnel select-seq sr-te load-balance-number 3
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.10.1.2 255.255.255.0
    #
    bgp 65410
     peer 10.10.1.1 as-number 100
     #
     ipv4-family unicast
      undo synchronization
      import-route direct
      peer 10.10.1.1 enable
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.11.1.2 255.255.255.0
    #
    bgp 65420
     peer 10.11.1.1 as-number 100
     #
     ipv4-family unicast
      undo synchronization
      import-route direct
      peer 10.11.1.1 enable
    #
    return

Example for Configuring CBTS in an L3VPN over LDP over SR-MPLS TE Scenario

This section provides an example for configuring class-of-service-based tunnel selection (CBTS) in an L3VPN over LDP over SR-MPLS TE scenario.

Networking Requirements

On the network shown in Figure 1-2679, CE1 and CE2 belong to the same L3VPN. They access the public network through PE1 and PE2, respectively. Various types of services are transmitted between CE1 and CE2. If a large volume of common services is transmitted, the transmission of important services deteriorates. To prevent this problem, configure the CBTS function, which allows traffic of a specific service class to be transmitted along a specified tunnel.

This example requires Tunnel1 to transmit important services and Tunnel2 to transmit other services.

If the CBTS function is configured, you are advised not to configure any of the following functions:

  • Mixed load balancing
  • Seamless MPLS
  • Dynamic load balancing
Figure 1-2679 CBTS networking in an L3VPN over LDP over SR-MPLS TE scenario

Interfaces 1 and 2 in this example represent GE 1/0/0 and GE 2/0/0, respectively.


Precautions

The destination IP address of a tunnel must be the same as the LSR ID of the tunnel's egress node.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Assign an IP address and its mask to each interface and configure a loopback interface address as an LSR ID on each node. In addition, configure an IGP to advertise routes.

  2. Create an SR-MPLS TE tunnel in each TE-capable area, and specify a service class for packets that can be transmitted by the tunnel.

  3. Enable MPLS LDP in the TE-incapable area, and configure a remote LDP peer for each edge node in the TE-capable area.

  4. Configure the IGP shortcut function on the tunnel interfaces.

  5. Configure multi-field classification on nodes connected to the L3VPN as well as behavior aggregate classification on LDP over SR-MPLS TE links.

Data Preparation

To complete the configuration, you need the following data:

  • IS-IS process ID, level, network entity ID, and cost type

  • Policy that is used for triggering LSP establishment

  • Name and IP address of each remote LDP peer of P1 and P2

  • Link bandwidth attributes of each tunnel

  • Tunnel interface number, IP address, destination IP address, tunnel ID, tunnel signaling protocol, tunnel bandwidth, TE metric value, and link cost on P1 and P2

  • Multi-field classifier name and traffic policy name

Procedure

  1. Assign an IP address and its mask to each interface.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.1 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.2 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.2.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.4 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 10.3.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 10.4.1.2 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

    # Configure P3.

    <HUAWEI> system-view
    [~HUAWEI] sysname P3
    [*HUAWEI] commit
    [~P3] interface loopback 1
    [*P3-LoopBack1] ip address 3.3.3.3 32
    [*P3-LoopBack1] quit
    [*P3] interface gigabitethernet1/0/0
    [*P3-GigabitEthernet1/0/0] ip address 10.2.1.2 24
    [*P3-GigabitEthernet1/0/0] quit
    [*P3] interface gigabitethernet2/0/0
    [*P3-GigabitEthernet2/0/0] ip address 10.3.1.1 24
    [*P3-GigabitEthernet2/0/0] quit
    [*P3] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 5.5.5.5 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.4.1.1 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

  2. Enable IS-IS to advertise the route of the network segment connected to each interface and the host route destined for each LSR ID.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] is-level level-2
    [*PE1-isis-1] quit
    [*PE1] interface gigabitethernet 1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] commit
    [~PE1-LoopBack1] quit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] is-level level-2
    [*P1-isis-1] quit
    [*P1] interface gigabitethernet 1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet 2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] commit
    [~P1-LoopBack1] quit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] network-entity 10.0000.0000.0003.00
    [*P2-isis-1] is-level level-2
    [*P2-isis-1] quit
    [*P2] interface gigabitethernet 1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet 2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] commit
    [~P2-LoopBack1] quit

    # Configure P3.

    [~P3] isis 1
    [*P3-isis-1] cost-style wide
    [*P3-isis-1] network-entity 10.0000.0000.0004.00
    [*P3-isis-1] is-level level-2
    [*P3-isis-1] quit
    [*P3] interface gigabitethernet 1/0/0
    [*P3-GigabitEthernet1/0/0] isis enable 1
    [*P3-GigabitEthernet1/0/0] quit
    [*P3] interface gigabitethernet 2/0/0
    [*P3-GigabitEthernet2/0/0] isis enable 1
    [*P3-GigabitEthernet2/0/0] quit
    [*P3] interface loopback 1
    [*P3-LoopBack1] isis enable 1
    [*P3-LoopBack1] commit
    [~P3-LoopBack1] quit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] network-entity 10.0000.0000.0005.00
    [*PE2-isis-1] is-level level-2
    [*PE2-isis-1] quit
    [*PE2] interface gigabitethernet 1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] commit
    [~PE2-LoopBack1] quit

    After completing the preceding configurations, run the display ip routing-table command on each node to check that both PEs have learned routes from each other.

  3. Establish an MP-IBGP peer relationship between PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] peer 5.5.5.5 as-number 100
    [*PE1-bgp] peer 5.5.5.5 connect-interface loopback 1
    [*PE1-bgp] ipv4-family vpnv4
    [*PE1-bgp-af-vpnv4] peer 5.5.5.5 enable
    [*PE1-bgp-af-vpnv4] commit
    [~PE1-bgp-af-vpnv4] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [~PE2-bgp] peer 1.1.1.1 as-number 100
    [*PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
    [*PE2-bgp] ipv4-family vpnv4
    [*PE2-bgp-af-vpnv4] peer 1.1.1.1 enable
    [*PE2-bgp-af-vpnv4] commit
    [~PE2-bgp-af-vpnv4] quit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp peer or display bgp vpnv4 all peer command on the PEs to check whether a BGP peer relationship has been established between the PEs. If the Established state is displayed in the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
     BGP local router ID : 1.1.1.1
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      5.5.5.5         4   100        2        6     0     00:00:12   Established   0
    [~PE1] display bgp vpnv4 all peer
     BGP local router ID : 1.1.1.1
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      5.5.5.5         4   100   12      18         0     00:09:38   Established   0

  4. Configure basic MPLS functions, and enable LDP between PE1 and P1 and between PE2 and P2.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.1
    [*PE1] mpls
    [*PE1-mpls] lsp-trigger all
    [*PE1-mpls] quit
    [*PE1] mpls ldp
    [*PE1-mpls-ldp] quit
    [*PE1] interface gigabitethernet 1/0/0
    [*PE1-GigabitEthernet1/0/0] mpls
    [*PE1-GigabitEthernet1/0/0] mpls ldp
    [*PE1-GigabitEthernet1/0/0] commit
    [~PE1-GigabitEthernet1/0/0] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.2
    [*P1] mpls
    [*P1-mpls] mpls te
    [*P1-mpls] lsp-trigger all
    [*P1-mpls] quit
    [*P1] mpls ldp
    [*P1-mpls-ldp] quit
    [*P1] interface gigabitethernet 1/0/0
    [*P1-GigabitEthernet1/0/0] mpls
    [*P1-GigabitEthernet1/0/0] mpls ldp
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet 2/0/0
    [*P1-GigabitEthernet2/0/0] mpls
    [*P1-GigabitEthernet2/0/0] mpls te
    [*P1-GigabitEthernet2/0/0] commit
    [~P1-GigabitEthernet2/0/0] quit

    # Configure P3.

    [~P3] mpls lsr-id 3.3.3.3
    [*P3] mpls
    [*P3-mpls] mpls te
    [*P3-mpls] quit
    [*P3] interface gigabitethernet 1/0/0
    [*P3-GigabitEthernet1/0/0] mpls
    [*P3-GigabitEthernet1/0/0] mpls te
    [*P3-GigabitEthernet1/0/0] quit
    [*P3] interface gigabitethernet 2/0/0
    [*P3-GigabitEthernet2/0/0] mpls
    [*P3-GigabitEthernet2/0/0] mpls te
    [*P3-GigabitEthernet2/0/0] commit
    [~P3-GigabitEthernet2/0/0] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.4
    [*P2] mpls
    [*P2-mpls] mpls te
    [*P2-mpls] lsp-trigger all
    [*P2-mpls] quit
    [*P2] mpls ldp
    [*P2-mpls-ldp] quit
    [*P2] interface gigabitethernet 1/0/0
    [*P2-GigabitEthernet1/0/0] mpls
    [*P2-GigabitEthernet1/0/0] mpls te
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet 2/0/0
    [*P2-GigabitEthernet2/0/0] mpls
    [*P2-GigabitEthernet2/0/0] mpls ldp
    [*P2-GigabitEthernet2/0/0] commit
    [~P2-GigabitEthernet2/0/0] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 5.5.5.5
    [*PE2] mpls
    [*PE2-mpls] lsp-trigger all
    [*PE2-mpls] quit
    [*PE2] mpls ldp
    [*PE2-mpls-ldp] quit
    [*PE2] interface gigabitethernet 1/0/0
    [*PE2-GigabitEthernet1/0/0] mpls
    [*PE2-GigabitEthernet1/0/0] mpls ldp
    [*PE2-GigabitEthernet1/0/0] commit
    [~PE2-GigabitEthernet1/0/0] quit

    After you complete the preceding configurations, local LDP sessions are successfully established between PE1 and P1 and between PE2 and P2.

    # Run the display mpls ldp session command on PE1, P1, P2, or PE2 to view LDP session information.

    [~PE1] display mpls ldp session
     LDP Session(s) in Public Network
     Codes: LAM(Label Advertisement Mode), SsnAge Unit(DDDD:HH:MM)
     An asterisk (*) before a session means the session is being deleted.
    --------------------------------------------------------------------------
     PeerID             Status      LAM  SsnRole  SsnAge       KASent/Rcv
    --------------------------------------------------------------------------
     2.2.2.2:0          Operational DU   Passive  0000:00:05   23/23
    --------------------------------------------------------------------------
    TOTAL: 1 Session(s) Found.

    # Run the display mpls ldp peer command to view LDP peer information. The following example uses the command output on PE1.

    [~PE1] display mpls ldp peer
     LDP Peer Information in Public network
     An asterisk (*) before a peer means the peer is being deleted.
     -------------------------------------------------------------------------
     PeerID                 TransportAddress   DiscoverySource
     -------------------------------------------------------------------------
     2.2.2.2:0              2.2.2.2            GigabitEthernet1/0/0       
     -------------------------------------------------------------------------
    TOTAL: 1 Peer(s) Found.

    # Run the display mpls lsp command to view LDP LSP information. The following example uses the command output on PE1.

    [~PE1] display mpls lsp
    Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP 
    Flag after LDP FRR: (L) - Logic FRR LSP
    ----------------------------------------------------------------------
                     LSP Information: LDP LSP
    ----------------------------------------------------------------------
    FEC                In/Out Label  In/Out IF                      Vrf Name
    1.1.1.1/32         3/NULL        GE1/0/0/-
    2.2.2.2/32         NULL/3        -/GE1/0/0
    2.2.2.2/32         1024/3        -/GE1/0/0
10.1.1.0/24        3/NULL        GE1/0/0/-
    10.2.1.0/24        NULL/3        -/GE1/0/0
    10.2.1.0/24        1025/3        -/GE1/0/0

  5. Set up a remote LDP session between P1 and P2.

    # Configure P1.

    [~P1] mpls ldp remote-peer lsrd
    [*P1-mpls-ldp-remote-lsrd] remote-ip 4.4.4.4
    [*P1-mpls-ldp-remote-lsrd] commit
    [~P1-mpls-ldp-remote-lsrd] quit

    # Configure P2.

    [~P2] mpls ldp remote-peer lsrb
    [*P2-mpls-ldp-remote-lsrb] remote-ip 2.2.2.2
    [*P2-mpls-ldp-remote-lsrb] commit
    [~P2-mpls-ldp-remote-lsrb] quit

    After you complete the preceding configurations, a remote LDP session is set up between P1 and P2. Run the display mpls ldp remote-peer command on P1 or P2 to view information about the remote session entity. The following example uses the command output on P1.

    [~P1] display mpls ldp remote-peer lsrd
                             LDP Remote Entity Information
     ------------------------------------------------------------------------------
     Remote Peer Name  : lsrd
     Description       : ----
     Remote Peer IP    : 4.4.4.4              LDP ID        : 2.2.2.2:0
     Transport Address : 2.2.2.2              Entity Status : Active
    
     Configured Keepalive Hold Timer : 45 Sec
     Configured Keepalive Send Timer : ----
     Configured Hello Hold Timer     : 45 Sec
     Negotiated Hello Hold Timer     : 45 Sec
     Configured Hello Send Timer     : ----
     Configured Delay Timer          : ----
     Hello Packet sent/received      : 425/382
     Label Advertisement Mode        : Downstream Unsolicited
     Auto-config                     : ---- 
     Manual-config                   : effective
     Session-Protect effect          : NO 
     Session-Protect Duration        : ---- 
     Session-Protect Remain          : ----
     ------------------------------------------------------------------------------
     TOTAL: 1 Remote-Peer(s) Found.

  6. Configure bandwidth attributes for the outbound interface of each TE tunnel.

    # Configure P1.

    [~P1] interface gigabitethernet 2/0/0
    [~P1-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 20000
    [*P1-GigabitEthernet2/0/0] mpls te bandwidth bc0 20000
    [*P1-GigabitEthernet2/0/0] commit
    [~P1-GigabitEthernet2/0/0] quit

    # Configure P3.

    [~P3] interface gigabitethernet 1/0/0
    [~P3-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 20000
    [*P3-GigabitEthernet1/0/0] mpls te bandwidth bc0 20000
    [*P3-GigabitEthernet1/0/0] quit
    [*P3] interface gigabitethernet 2/0/0
    [*P3-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 20000
    [*P3-GigabitEthernet2/0/0] mpls te bandwidth bc0 20000
    [*P3-GigabitEthernet2/0/0] commit
    [~P3-GigabitEthernet2/0/0] quit

    # Configure P2.

    [~P2] interface gigabitethernet 1/0/0
    [~P2-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 20000
    [*P2-GigabitEthernet1/0/0] mpls te bandwidth bc0 20000
    [*P2-GigabitEthernet1/0/0] commit
    [~P2-GigabitEthernet1/0/0] quit

  7. Configure L3VPN access on PE1 and PE2 and multi-field classification on the inbound interface of PE1.

    # Configure PE1.

    [~PE1] ip vpn-instance VPNA
    [*PE1-vpn-instance-VPNA] ipv4-family
    [*PE1-vpn-instance-VPNA-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-VPNA-af-ipv4] vpn-target 111:1 both
    [*PE1-vpn-instance-VPNA-af-ipv4] quit
    [*PE1-vpn-instance-VPNA] quit
    [*PE1] interface gigabitethernet 2/0/0
    [*PE1-GigabitEthernet2/0/0] ip binding vpn-instance VPNA
    [*PE1-GigabitEthernet2/0/0] ip address 10.10.1.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] acl 2001
    [*PE1-acl4-basic-2001] rule 10 permit source 10.40.0.0 0.0.255.255
    [*PE1-acl4-basic-2001] quit
    [*PE1] acl 2002
    [*PE1-acl4-basic-2002] rule 20 permit source 10.50.0.0 0.0.255.255
    [*PE1-acl4-basic-2002] quit
    [*PE1] traffic classifier service1 operator or 
    [*PE1-classifier-service1] if-match acl 2001
    [*PE1-classifier-service1] commit
    [~PE1-classifier-service1] quit
    [~PE1] traffic behavior behavior1
    [*PE1-behavior-behavior1] service-class af1 color green
    [*PE1-behavior-behavior1] commit
    [~PE1-behavior-behavior1] quit
    [~PE1] traffic classifier service2 operator or
    [*PE1-classifier-service2] if-match acl 2002
    [*PE1-classifier-service2] commit
    [~PE1-classifier-service2] quit
    [~PE1] traffic behavior behavior2
    [*PE1-behavior-behavior2] service-class af2 color green
    [*PE1-behavior-behavior2] commit
    [~PE1-behavior-behavior2] quit
    [~PE1] traffic policy test
    [*PE1-trafficpolicy-test] classifier service1 behavior behavior1 precedence 1 
    [*PE1-trafficpolicy-test] classifier service2 behavior behavior2 precedence 2
    [*PE1-trafficpolicy-test] commit
    [~PE1-trafficpolicy-test] quit
    [~PE1] interface gigabitethernet 2/0/0
    [~PE1-GigabitEthernet2/0/0] traffic-policy test inbound
    [*PE1-GigabitEthernet2/0/0] commit
    [~PE1-GigabitEthernet2/0/0] quit

    # Configure PE2.

    [~PE2] ip vpn-instance VPNB
    [*PE2-vpn-instance-VPNB] ipv4-family
    [*PE2-vpn-instance-VPNB-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-VPNB-af-ipv4] vpn-target 111:1 both
    [*PE2-vpn-instance-VPNB-af-ipv4] quit
    [*PE2-vpn-instance-VPNB] quit
    [*PE2] interface gigabitethernet 2/0/0
    [*PE2-GigabitEthernet2/0/0] ip binding vpn-instance VPNB
    [*PE2-GigabitEthernet2/0/0] ip address 10.11.1.1 24
    [*PE2-GigabitEthernet2/0/0] commit
    [~PE2-GigabitEthernet2/0/0] quit
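
    The multi-field classifier on PE1 matches packets against basic ACLs that use wildcard masks, in which 1 bits mean "don't care". The following Python sketch (hypothetical helper names, not device code) illustrates that matching and the resulting service-class assignment, assuming first-match semantics in precedence order:

```python
import ipaddress

def wildcard_match(ip: str, network: str, wildcard: str) -> bool:
    # A wildcard mask is the inverse of a subnet mask: 1 bits are "don't care".
    ip_i = int(ipaddress.IPv4Address(ip))
    net_i = int(ipaddress.IPv4Address(network))
    wild_i = int(ipaddress.IPv4Address(wildcard))
    care = wild_i ^ 0xFFFFFFFF          # bits that must match exactly
    return (ip_i & care) == (net_i & care)

# Hypothetical classifier mirroring ACLs 2001/2002 and traffic policy
# "test" on PE1 (precedence 1 before precedence 2):
RULES = [
    ("10.40.0.0", "0.0.255.255", "af1"),   # acl 2001 -> behavior1
    ("10.50.0.0", "0.0.255.255", "af2"),   # acl 2002 -> behavior2
]

def classify(src_ip: str) -> str:
    for net, wild, service_class in RULES:
        if wildcard_match(src_ip, net, wild):
            return service_class
    return "default"                        # unmatched traffic keeps its default class
```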

  8. Configure behavior aggregate classification on interfaces connecting PE1 and P1.

    # Configure PE1.

    [~PE1] interface gigabitethernet 1/0/0
    [~PE1-GigabitEthernet1/0/0] trust upstream default
    [*PE1-GigabitEthernet1/0/0] commit
    [~PE1-GigabitEthernet1/0/0] quit

    # Configure P1.

    [~P1] interface gigabitethernet 1/0/0
    [~P1-GigabitEthernet1/0/0] trust upstream default
    [*P1-GigabitEthernet1/0/0] commit
    [~P1-GigabitEthernet1/0/0] quit

  9. Enable SR and configure an explicit path.

    In this example, the explicit path is used to establish an SR-MPLS TE tunnel.

    The SRGB range varies according to the device. The range specified in this example is for reference only.
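
    A prefix-SID label is taken from the SRGB configured below. As a hedged illustration, assuming the standard mapping of label = SRGB base + prefix-SID index (for example, index 40 with SRGB 16000-19000 yields label 16040), a minimal Python sketch:

```python
# Illustrative sketch, not device code. Assumes the standard SR-MPLS
# mapping: advertised label = SRGB base + prefix-SID index.
SRGB_BASE, SRGB_END = 16000, 19000  # values from this example's configs

def prefix_sid_label(index: int) -> int:
    """Map a prefix-SID index into the SRGB: label = base + index."""
    label = SRGB_BASE + index
    if not SRGB_BASE <= label <= SRGB_END:
        raise ValueError("prefix-SID index falls outside the SRGB")
    return label
```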

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] traffic-eng level-2
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] segment-routing global-block 16000 19000
    [*P1-isis-1] commit
    [~P1-isis-1] quit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] traffic-eng level-2
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] segment-routing global-block 16000 19000
    [*P2-isis-1] commit
    [~P2-isis-1] quit

    # Configure P3.

    [~P3] segment-routing
    [*P3-segment-routing] quit
    [*P3] isis 1
    [*P3-isis-1] traffic-eng level-2
    [*P3-isis-1] segment-routing mpls
    [*P3-isis-1] segment-routing global-block 16000 19000
    [*P3-isis-1] commit
    [~P3-isis-1] quit

    # Display the adjacency SID of each node.

    [~P1] display segment-routing adjacency mpls forwarding 
    
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------
    48091     GE1/0/0           10.1.1.1         ISIS-V4     ---       1500      _public_  
    48092     GE2/0/0           10.2.1.2         ISIS-V4     ---       1500      _public_          
    
    Total information(s): 2
    [~P3] display segment-routing adjacency mpls forwarding
    
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------
    48090     GE1/0/0           10.2.1.1         ISIS-V4     ---       1500      _public_      
    48091     GE2/0/0           10.3.1.2         ISIS-V4     ---       1500      _public_      
    
    Total information(s): 2
    [~P2] display segment-routing adjacency mpls forwarding
    
                Segment Routing Adjacency MPLS Forwarding Information
    
    Label     Interface         NextHop          Type        MPLSMtu   Mtu       VPN-Name       
    -------------------------------------------------------------------------------------
    48090     GE2/0/0           10.4.1.1         ISIS-V4     ---       1500      _public_      
    48091     GE1/0/0           10.3.1.1         ISIS-V4     ---       1500      _public_      
    
    Total information(s): 2

    # Configure an explicit path from P1 to P2.

    [~P1] explicit-path p1_p2
    [*P1-explicit-path-p1_p2] next sid label 48092 type adjacency
    [*P1-explicit-path-p1_p2] next sid label 48091 type adjacency
    [*P1-explicit-path-p1_p2] commit
    [~P1-explicit-path-p1_p2] quit

    # Configure an explicit path from P2 to P1.

    [~P2] explicit-path p2_p1
    [*P2-explicit-path-p2_p1] next sid label 48091 type adjacency
    [*P2-explicit-path-p2_p1] next sid label 48090 type adjacency
    [*P2-explicit-path-p2_p1] commit
    [~P2-explicit-path-p2_p1] quit
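
    An explicit path built from adjacency SIDs is a label stack: each node pops the top SID and forwards the packet out of the interface bound to that SID. The following Python sketch traces this hop by hop (node names and SID tables are transcribed from the display outputs above; the helper is hypothetical):

```python
# Adjacency-SID tables transcribed from the
# "display segment-routing adjacency mpls forwarding" outputs above.
ADJ_SID = {
    "P1": {48091: ("GE1/0/0", "PE1"), 48092: ("GE2/0/0", "P3")},
    "P3": {48090: ("GE1/0/0", "P1"), 48091: ("GE2/0/0", "P2")},
    "P2": {48090: ("GE2/0/0", "PE2"), 48091: ("GE1/0/0", "P3")},
}

def walk_explicit_path(ingress: str, label_stack: list) -> list:
    """Follow an SR-MPLS TE label stack: each node pops the top
    adjacency SID and forwards out of the interface bound to it."""
    node, hops = ingress, [ingress]
    for label in label_stack:
        _iface, next_hop = ADJ_SID[node][label]
        node = next_hop
        hops.append(node)
    return hops

# Explicit path p1_p2 pushes [48092, 48091] at P1 -> P1, P3, P2
```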

  10. Configure TE tunnels from P1 to P2 and set a service class for each type of packet that can be transmitted by the tunnels.

    Run the mpls te service-class { service-class & <1-8> | default } command to set a service class for packets that are allowed to pass through tunnels.

    # On P1, enable the IGP shortcut function on each tunnel interface and set an absolute IGP metric for each tunnel to ensure that traffic destined for P2 or PE2 passes through the corresponding tunnel.

    [~P1] interface Tunnel1
    [*P1-Tunnel1] ip address unnumbered interface LoopBack1
    [*P1-Tunnel1] tunnel-protocol mpls te
    [*P1-Tunnel1] destination 4.4.4.4
    [*P1-Tunnel1] mpls te tunnel-id 100
    [*P1-Tunnel1] mpls te bandwidth ct0 10000
    [*P1-Tunnel1] mpls te igp shortcut
    [*P1-Tunnel1] mpls te igp metric absolute 1
    [*P1-Tunnel1] isis enable 1
    [*P1-Tunnel1] mpls te signal-protocol segment-routing
    [*P1-Tunnel1] mpls te path explicit-path p1_p2
    [*P1-Tunnel1] mpls te service-class af1 af2
    [*P1-Tunnel1] quit
    [*P1] interface Tunnel2
    [*P1-Tunnel2] ip address unnumbered interface LoopBack1
    [*P1-Tunnel2] tunnel-protocol mpls te
    [*P1-Tunnel2] destination 4.4.4.4
    [*P1-Tunnel2] mpls te tunnel-id 200
    [*P1-Tunnel2] mpls te bandwidth ct0 10000
    [*P1-Tunnel2] mpls te igp shortcut
    [*P1-Tunnel2] mpls te igp metric absolute 1
    [*P1-Tunnel2] isis enable 1
    [*P1-Tunnel2] mpls te signal-protocol segment-routing
    [*P1-Tunnel2] mpls te path explicit-path p1_p2
    [*P1-Tunnel2] mpls te service-class default
    [*P1-Tunnel2] quit
    [*P1] commit
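
    Conceptually, CBTS maps a packet's service class to the tunnel whose mpls te service-class configuration covers that class, falling back to the tunnel configured with default. A minimal Python sketch of this selection logic (hypothetical names; the real selection is internal to the device):

```python
# Hypothetical CBTS selection mirroring the Tunnel1/Tunnel2 setup on P1:
# Tunnel1 carries af1 and af2; Tunnel2 carries the default class.
TUNNEL_SERVICE_CLASS = {
    "Tunnel1": {"af1", "af2"},
    "Tunnel2": {"default"},
}

def select_tunnel(service_class: str) -> str:
    """Pick the tunnel whose service-class list covers the packet's
    class; otherwise fall back to the tunnel configured with 'default'."""
    for tunnel, classes in TUNNEL_SERVICE_CLASS.items():
        if service_class in classes:
            return tunnel
    for tunnel, classes in TUNNEL_SERVICE_CLASS.items():
        if "default" in classes:
            return tunnel
    raise ValueError("no eligible tunnel for this service class")
```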

  11. Configure a tunnel from P2 to P1.

    # On P2, enable the IGP shortcut function on the tunnel interface and set an absolute IGP metric for the tunnel to ensure that traffic destined for PE1 or P1 passes through the tunnel.

    [~P2] interface Tunnel1
    [*P2-Tunnel1] ip address unnumbered interface LoopBack1
    [*P2-Tunnel1] tunnel-protocol mpls te
    [*P2-Tunnel1] destination 2.2.2.2
    [*P2-Tunnel1] mpls te tunnel-id 101
    [*P2-Tunnel1] mpls te bandwidth ct0 10000
    [*P2-Tunnel1] mpls te igp shortcut
    [*P2-Tunnel1] mpls te igp metric absolute 1
    [*P2-Tunnel1] isis enable 1
    [*P2-Tunnel1] mpls te signal-protocol segment-routing
    [*P2-Tunnel1] mpls te path explicit-path p2_p1
    [*P2-Tunnel1] quit
    [*P2] commit

  12. Establish an EBGP peer relationship between each PE and its connected CE.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface gigabitethernet1/0/0
    [~CE1-GigabitEthernet1/0/0] ip address 10.10.1.2 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.10.1.1 as-number 100
    [*CE1-bgp] import-route direct
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see Configuration Files in this section.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] ipv4-family vpn-instance VPNA
    [*PE1-bgp-VPNA] peer 10.10.1.2 as-number 65410
    [*PE1-bgp-VPNA] commit
    [~PE1-bgp-VPNA] quit
    [~PE1-bgp] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

    After the configuration is complete, run the display bgp vpnv4 vpn-instance peer command on the PEs to check whether BGP peer relationships have been established between the PEs and CEs. If the Established state is displayed in the command output, the BGP peer relationships have been established successfully.

  13. Verify the configuration.

    # Run the ping command on P1 to check the connectivity of each SR-MPLS TE tunnel. For example:

    [~P1] ping lsp segment-routing te Tunnel 1
      LSP PING FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 : 100  data bytes, press CTRL_C to break
        Reply from 4.4.4.4: bytes=100 Sequence=1 time=20 ms
        Reply from 4.4.4.4: bytes=100 Sequence=2 time=16 ms
        Reply from 4.4.4.4: bytes=100 Sequence=3 time=16 ms
        Reply from 4.4.4.4: bytes=100 Sequence=4 time=12 ms
        Reply from 4.4.4.4: bytes=100 Sequence=5 time=12 ms
    
      --- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 12/15/20 ms
    [~P1] ping lsp segment-routing te Tunnel 2
      LSP PING FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel2 : 100  data bytes, press CTRL_C to break
        Reply from 4.4.4.4: bytes=100 Sequence=1 time=17 ms
        Reply from 4.4.4.4: bytes=100 Sequence=2 time=17 ms
        Reply from 4.4.4.4: bytes=100 Sequence=3 time=12 ms
        Reply from 4.4.4.4: bytes=100 Sequence=4 time=12 ms
        Reply from 4.4.4.4: bytes=100 Sequence=5 time=14 ms
    
      --- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 12/14/17 ms

    # Check information about LDP LSP establishment on each PE. The following example uses the command output on PE1.

    [~PE1] display mpls lsp
    Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
    Flag after LDP FRR: (L) - Logic FRR LSP 
    -------------------------------------------------------------------------------
                     LSP Information: LDP LSP
    -------------------------------------------------------------------------------
    FEC                In/Out Label    In/Out IF                      Vrf Name
    1.1.1.1/32         3/NULL          -/-                            
    2.2.2.2/32         NULL/3          -/GE2/0/0                     
    2.2.2.2/32         48151/3         -/GE2/0/0                     
    3.3.3.3/32         NULL/48150      -/GE2/0/0                     
    3.3.3.3/32         48152/48150     -/GE2/0/0                     
    4.4.4.4/32         NULL/48153      -/GE2/0/0                     
    4.4.4.4/32         48155/48153     -/GE2/0/0                     
    5.5.5.5/32         NULL/48155      -/GE2/0/0                     
    5.5.5.5/32         48157/48155     -/GE2/0/0                     
    10.1.1.0/24        3/NULL          -/-                            
    10.2.1.0/24        NULL/3          -/GE2/0/0                     
    10.2.1.0/24        48153/3         -/GE2/0/0                     
    10.3.1.0/24        NULL/48151      -/GE2/0/0                     
    10.3.1.0/24        48154/48151     -/GE2/0/0                     
    10.4.1.0/24        NULL/48154      -/GE2/0/0                     
    10.4.1.0/24        48156/48154     -/GE2/0/0                     
    -------------------------------------------------------------------------------
                     LSP Information: BGP LSP
    -------------------------------------------------------------------------------
    FEC                In/Out Label    In/Out IF                      Vrf Name
    -/32               48090/NULL      -/-                             VPNA

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance VPNA
     ipv4-family 
      route-distinguisher 100:1
      apply-label per-instance
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.1
    #
    mpls
     lsp-trigger all
    #
    mpls ldp
     #              
     ipv4-family
    #
    acl number 2001
     rule 10 permit source 10.40.0.0 0.0.255.255
    #
    acl number 2002
     rule 20 permit source 10.50.0.0 0.0.255.255
    #
    traffic classifier service1 operator or
     if-match acl 2001
    #
    traffic classifier service2 operator or 
     if-match acl 2002
    #
    traffic behavior behavior1
     service-class af1 color green
    #
    traffic behavior behavior2
     service-class af2 color green
    #
    traffic policy test
     share-mode
     classifier service1 behavior behavior1 precedence 1 
     classifier service2 behavior behavior2 precedence 2
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0001.00
    #
    interface GigabitEthernet1/0/0
     undo shutdown 
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
     mpls
     mpls ldp
     trust upstream default
    #
    interface GigabitEthernet2/0/0
     undo shutdown 
     ip binding vpn-instance VPNA
     ip address 10.10.1.1 255.255.255.0
     traffic-policy test inbound
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
    #
    bgp 100
     peer 5.5.5.5 as-number 100
     peer 5.5.5.5 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization 
      peer 5.5.5.5 enable
     #
     ipv4-family vpnv4 
      policy vpn-target 
      peer 5.5.5.5 enable
     #
     ipv4-family vpn-instance VPNA
      peer 10.10.1.2 as-number 65410
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.2
    #
    mpls
     mpls te
     lsp-trigger all
    #
    explicit-path p1_p2
     next sid label 48092 type adjacency index 1
     next sid label 48091 type adjacency index 2
    #
    mpls ldp
     #
     ipv4-family
    #
    mpls ldp remote-peer lsrd
     remote-ip 4.4.4.4
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 16000 19000
    #
    interface GigabitEthernet1/0/0
     undo shutdown 
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
     mpls
     mpls ldp
     trust upstream default
    #
    interface GigabitEthernet2/0/0
     undo shutdown 
     ip address 10.2.1.1 255.255.255.0
     isis enable 1
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 20000
     mpls te bandwidth bc0 20000
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     isis enable 1  
     destination 4.4.4.4
     mpls te signal-protocol segment-routing
     mpls te igp shortcut
     mpls te igp metric absolute 1
     mpls te bandwidth ct0 10000
     mpls te tunnel-id 100
     mpls te path explicit-path p1_p2
     mpls te service-class af1 af2
    #               
    interface Tunnel2
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     isis enable 1            
     destination 4.4.4.4
     mpls te signal-protocol segment-routing
     mpls te igp shortcut
     mpls te igp metric absolute 1
     mpls te bandwidth ct0 10000
     mpls te tunnel-id 200
     mpls te path explicit-path p1_p2
     mpls te service-class default
    #
    return
  • P3 configuration file

    #
    sysname P3
    #
    mpls lsr-id 3.3.3.3
    #
    mpls
     mpls te
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0004.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 16000 19000
    #
    interface GigabitEthernet1/0/0
     undo shutdown 
     ip address 10.2.1.2 255.255.255.0
     isis enable 1
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 20000
     mpls te bandwidth bc0 20000
    #
    interface GigabitEthernet2/0/0
     undo shutdown 
     ip address 10.3.1.1 255.255.255.0
     isis enable 1
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 20000
     mpls te bandwidth bc0 20000
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.4
    #
    mpls
     mpls te
     lsp-trigger all
    #
    explicit-path p2_p1
     next sid label 48091 type adjacency index 1
     next sid label 48090 type adjacency index 2
    #
    mpls ldp
     #
     ipv4-family
    #
    mpls ldp remote-peer lsrb
     remote-ip 2.2.2.2
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0003.00
     traffic-eng level-2
     segment-routing mpls
     segment-routing global-block 16000 19000
    #
    interface GigabitEthernet1/0/0
     undo shutdown 
     ip address 10.3.1.2 255.255.255.0
     isis enable 1
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 20000
     mpls te bandwidth bc0 20000
    #
    interface GigabitEthernet2/0/0
     undo shutdown 
     ip address 10.4.1.2 255.255.255.0
     isis enable 1
     mpls
     mpls ldp
    #
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
     isis enable 1
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     isis enable 1
     destination 2.2.2.2
     mpls te signal-protocol segment-routing
     mpls te igp shortcut
     mpls te igp metric absolute 1
     mpls te bandwidth ct0 10000
     mpls te tunnel-id 101
     mpls te path explicit-path p2_p1
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance VPNB
     ipv4-family 
      route-distinguisher 200:1
      apply-label per-instance
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 5.5.5.5
    #
    mpls
     lsp-trigger all
    #
    mpls ldp
    #
    segment-routing
    #
    isis 1
     is-level level-2
     cost-style wide
     network-entity 10.0000.0000.0005.00
    #
    interface GigabitEthernet1/0/0
     undo shutdown 
     ip address 10.4.1.1 255.255.255.0
     isis enable 1
     mpls
     mpls ldp
    #
    interface GigabitEthernet2/0/0
     undo shutdown 
     ip binding vpn-instance VPNB
     ip address 10.11.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 5.5.5.5 255.255.255.255
     isis enable 1
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     #
     ipv4-family unicast 
      undo synchronization
      peer 1.1.1.1 enable
     #
     ipv4-family vpnv4
      policy vpn-target
      peer 1.1.1.1 enable
     #
     ipv4-family vpn-instance VPNB
      peer 10.11.1.2 as-number 65420
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.10.1.2 255.255.255.0
    #
    bgp 65410
     peer 10.10.1.1 as-number 100
     #
     ipv4-family unicast
      undo synchronization
      import-route direct
      peer 10.10.1.1 enable
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.11.1.2 255.255.255.0
    #
    bgp 65420
     peer 10.11.1.1 as-number 100
     #
     ipv4-family unicast
      undo synchronization
      import-route direct
      peer 10.11.1.1 enable
    #
    return

Example for Configuring DSCP-Based IP Service Recursion to SR-MPLS TE Tunnels

This section provides an example for configuring IP service recursion to SR-MPLS TE tunnels based on the differentiated services code point (DSCP) values of IP packets.

Networking Requirements

In SR-MPLS TE scenarios, you can configure the class-of-service-based tunnel selection (CBTS) function to allow traffic of a specific service class to be transmitted along a specified tunnel. However, as the number of service classes increases, CBTS gradually becomes unable to meet service requirements. Similar to CBTS, DSCP-based IP service recursion steers IPv4 or IPv6 packets into SR-MPLS TE tunnels based on the DSCP values of the packets.
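
The key mechanism is the match dscp command configured under each tunnel interface. As a minimal sketch (using the tunnel names and DSCP range of this example; the complete configuration is given in the procedure that follows):

    #
    interface Tunnel1
     match dscp ipv4 0 to 10
    #
    interface Tunnel2
     match dscp ipv4 default
    #

Packets whose DSCP values match a tunnel's configured range recurse to that tunnel; packets matching no configured range recurse to the tunnel configured with match dscp ipv4 default.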

On the network shown in Figure 1-2680:
  • CE1 and CE2 belong to vpna. The VPN target used by vpna is 111:1.

  • Two SR-MPLS TE tunnels (tunnel 1 and tunnel 2) are created between PE1 and PE2.

There are multiple types of services between CE1 and CE2. To ensure the forwarding quality of important services, you can configure the services to recurse to SR-MPLS TE tunnels based on the DSCP values of IP packets.

This example requires Tunnel1 to transmit services with DSCP values ranging from 0 to 10 and Tunnel2 to transmit other services by default.

Figure 1-2680 Networking for DSCP-based IP service recursion to SR-MPLS TE tunnels

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Precautions

If a VPN instance is bound to a PE interface connected to a CE, Layer 3 configurations on this interface, such as the IP address and routing protocol configurations, are automatically deleted. Reconfigure them if they are required.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the backbone network to ensure that PEs can interwork with each other.

  2. Enable MPLS on the backbone network and configure SR and static adjacency labels.

  3. Configure SR-MPLS TE tunnels between the PEs.

  4. Establish an MP-IBGP peer relationship between the PEs.

  5. Configure a VPN instance on each PE, enable the IPv4 address family for the instance, and bind the instance to the PE interface connecting the PE to a CE.

  6. Establish an EBGP peer relationship between each pair of a CE and a PE.

  7. Configure a tunnel selection policy on each PE and set the number of tunnels participating in load balancing to 2, so that Tunnel1 and Tunnel2 load-balance service traffic.

  8. On each PE, configure tunnel 1 to transmit services with DSCP values ranging from 0 to 10 and tunnel 2 to transmit other services.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs on the PEs and Ps

  • VPN target and RD of vpna

Procedure

  1. Configure interface IP addresses.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.14.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 10.12.1.2 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP on the backbone network for the PEs and Ps to communicate. The following example uses IS-IS.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] isis enable 1
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] isis enable 1
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] mpls te
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] mpls te
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] mpls te
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] mpls te
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Configure SR on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] quit
    [*P2] commit

  5. Configure SR-MPLS TE tunnels.

    # Configure PE1.

    [~PE1] explicit-path path1
    [*PE1-explicit-path-path1] next sid label 330000 type adjacency
    [*PE1-explicit-path-path1] next sid label 330002 type adjacency
    [*PE1-explicit-path-path1] quit
    [*PE1] explicit-path path2
    [*PE1-explicit-path-path2] next sid label 330001 type adjacency
    [*PE1-explicit-path-path2] next sid label 330003 type adjacency
    [*PE1-explicit-path-path2] quit
    [*PE1] interface Tunnel1
    [*PE1-Tunnel1] ip address unnumbered interface LoopBack1
    [*PE1-Tunnel1] tunnel-protocol mpls te
    [*PE1-Tunnel1] destination 3.3.3.9
    [*PE1-Tunnel1] mpls te signal-protocol segment-routing
    [*PE1-Tunnel1] mpls te tunnel-id 1
    [*PE1-Tunnel1] mpls te path explicit-path path1
    [*PE1-Tunnel1] quit
    [*PE1] interface Tunnel2
    [*PE1-Tunnel2] ip address unnumbered interface LoopBack1
    [*PE1-Tunnel2] tunnel-protocol mpls te
    [*PE1-Tunnel2] destination 3.3.3.9
    [*PE1-Tunnel2] mpls te signal-protocol segment-routing
    [*PE1-Tunnel2] mpls te tunnel-id 2
    [*PE1-Tunnel2] mpls te path explicit-path path2
    [*PE1-Tunnel2] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] explicit-path path1
    [*PE2-explicit-path-path1] next sid label 330000 type adjacency
    [*PE2-explicit-path-path1] next sid label 330003 type adjacency
    [*PE2-explicit-path-path1] quit
    [*PE2] explicit-path path2
    [*PE2-explicit-path-path2] next sid label 330001 type adjacency
    [*PE2-explicit-path-path2] next sid label 330002 type adjacency
    [*PE2-explicit-path-path2] quit
    [*PE2] interface Tunnel1
    [*PE2-Tunnel1] ip address unnumbered interface LoopBack1
    [*PE2-Tunnel1] tunnel-protocol mpls te
    [*PE2-Tunnel1] destination 1.1.1.9
    [*PE2-Tunnel1] mpls te signal-protocol segment-routing
    [*PE2-Tunnel1] mpls te tunnel-id 1
    [*PE2-Tunnel1] mpls te path explicit-path path1
    [*PE2-Tunnel1] quit
    [*PE2] interface Tunnel2
    [*PE2-Tunnel2] ip address unnumbered interface LoopBack1
    [*PE2-Tunnel2] tunnel-protocol mpls te
    [*PE2-Tunnel2] destination 1.1.1.9
    [*PE2-Tunnel2] mpls te signal-protocol segment-routing
    [*PE2-Tunnel2] mpls te tunnel-id 2
    [*PE2-Tunnel2] mpls te path explicit-path path2
    [*PE2-Tunnel2] quit
    [*PE2] commit

  6. Establish an MP-IBGP peer relationship between the PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] ipv4-family vpnv4
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable
    [*PE1-bgp-af-vpnv4] commit
    [~PE1-bgp-af-vpnv4] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv4-family vpnv4
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
    [*PE2-bgp-af-vpnv4] commit
    [~PE2-bgp-af-vpnv4] quit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp peer or display bgp vpnv4 all peer command on the PEs and check whether a BGP peer relationship has been established between the PEs. If the Established state is displayed in the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        2        6     0     00:00:12   Established   0
    [~PE1] display bgp vpnv4 all peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100   12      18         0     00:09:38   Established   0

  7. Configure a VPN instance on each PE, enable the IPv4 address family for the instance, and bind the instance to the PE interface connecting the PE to a CE.

    # Configure PE1.

    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  8. Configure a tunnel policy on each PE so that SR-MPLS TE tunnels are preferentially selected.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-te load-balance-number 2 unmix
    [*PE1-tunnel-policy-p1] quit
    [*PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq sr-te load-balance-number 2 unmix
    [*PE2-tunnel-policy-p1] quit
    [*PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] commit

  9. Establish an EBGP peer relationship between each PE and its connected CE.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ip address 10.11.1.1 32
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.1.1.2 as-number 100
    [*CE1-bgp] network 10.11.1.1 32
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see the configuration file.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv4-family vpn-instance vpna
    [*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
    [*PE1-bgp-vpna] commit
    [~PE1-bgp-vpna] quit
    [~PE1-bgp] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see the configuration file.

    After the configuration is complete, run the display ip vpn-instance verbose command on the PEs to check VPN instance configurations. Then run the ping command to verify that each PE can successfully ping its connected CE.

    If a PE has multiple interfaces bound to the same VPN instance, use the -a source-ip-address parameter to specify a source IP address when running the ping -vpn-instance vpn-instance-name -a source-ip-address dest-ip-address command to ping the CE that is connected to the remote PE. If the source IP address is not specified, the ping operation may fail.
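
    For example (a sketch based on this example's addressing, where PE1's vpna-bound interface uses 10.1.1.2 and CE2's loopback is 10.22.2.2):

    [~PE1] ping -vpn-instance vpna -a 10.1.1.2 10.22.2.2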

    After the configuration is complete, run the display bgp vpnv4 vpn-instance peer command on the PEs to check whether BGP peer relationships have been established between the PEs and CEs. If the Established state is displayed in the command output, the BGP peer relationships have been established successfully.

    The following example uses the command output on PE1 to show that a BGP peer relationship has been established between PE1 and CE1.

    [~PE1] display bgp vpnv4 vpn-instance vpna peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
    
     VPN-Instance vpna, Router ID 1.1.1.9:
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      10.1.1.1        4       65410       91       90     0 01:15:39 Established        1

    Run the display ip routing-table vpn-instance command on each PE. The command output shows the routes to CE loopback interfaces.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table: vpna
             Destinations : 7        Routes : 7
    Destination/Mask    Proto  Pre  Cost     Flags NextHop         Interface
         10.1.1.0/24    Direct 0    0        D     10.1.1.2        GigabitEthernet2/0/0
         10.1.1.2/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.1.1.255/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
        10.11.1.1/32    EBGP   255  0        RD    10.1.1.1        GigabitEthernet2/0/0
        10.22.2.2/32    IBGP   255  0        RD    3.3.3.9         Tunnel1
         127.0.0.0/8    Direct 0    0        D     127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct 0    0        D     127.0.0.1       InLoopBack0

    Run the display ip routing-table vpn-instance vpna verbose command on each PE. The command output shows details about the routes to CE loopback interfaces.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna 10.22.2.2 verbose
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route                                              
    ------------------------------------------------------------------------------                                                      
    Routing Table : vpna                                                                                                                
    Summary Count : 1                                                                                                                   
    
    Destination: 10.22.2.2/32                                                                                                            
         Protocol: IBGP               Process ID: 0                                                                                     
       Preference: 255                      Cost: 0                                                                                     
          NextHop: 3.3.3.9             Neighbour: 3.3.3.9                                                                               
            State: Active Adv Relied         Age: 00h49m58s                                                                             
              Tag: 0                    Priority: low                                                                                   
            Label: 48120                 QoSInfo: 0x0                                                                                   
       IndirectID: 0x10000DA            Instance:                                                                                       
     RelayNextHop: 0.0.0.0             Interface: Tunnel1                                                                               
         TunnelID: 0x000000000300000001 Flags: RD 

    The preceding command output shows that the corresponding VPN route has successfully recursed to an SR-MPLS TE tunnel.

    Run the ping command. The command output shows that CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at 10.22.2.2.

    [~CE1] ping -a 10.11.1.1 10.22.2.2
      PING 10.22.2.2: 56  data bytes, press CTRL_C to break
        Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
        Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=34 ms
        Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=34 ms
      --- 10.22.2.2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 34/48/72 ms 

  10. Configure the SR-MPLS TE tunnels to transmit services with different DSCP values.

    # Configure PE1.

    [*PE1] interface Tunnel1
    [*PE1-Tunnel1] match dscp ipv4 0 to 10
    [*PE1-Tunnel1] quit
    [*PE1] interface Tunnel2
    [*PE1-Tunnel2] match dscp ipv4 default
    [*PE1-Tunnel2] quit
    [*PE1] commit

    # Configure PE2.

    [*PE2] interface Tunnel1
    [*PE2-Tunnel1] match dscp ipv4 0 to 10
    [*PE2-Tunnel1] quit
    [*PE2] interface Tunnel2
    [*PE2-Tunnel2] match dscp ipv4 default
    [*PE2-Tunnel2] quit
    [*PE2] commit

    IP packets carry a default DSCP value. You can also use multi-field classification to set desired DSCP values for IP packets. After the configuration is complete, Tunnel1 transmits services with DSCP values in the range of 0 to 10, and Tunnel2 transmits all other services by default.
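
    As a sketch of such multi-field classification (the ACL number, source range, classifier/behavior/policy names, and DSCP value are illustrative; the pattern follows the MQC configuration style used elsewhere in this guide):

    #
    acl number 2001
     rule 10 permit source 10.11.1.0 0.0.0.255
    #
    traffic classifier c1 operator or
     if-match acl 2001
    #
    traffic behavior b1
     remark dscp 10
    #
    traffic policy mark-dscp
     classifier c1 behavior b1
    #
    interface GigabitEthernet2/0/0
     traffic-policy mark-dscp inbound
    #

    With this policy applied to PE1's CE-facing interface, traffic from 10.11.1.0/24 is re-marked with DSCP 10 and therefore recurses to Tunnel1.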

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 100:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls 
     mpls te 
    #                                                                               
    explicit-path path1                                                             
     next sid label 330000 type adjacency index 1                                          
     next sid label 330002 type adjacency index 2                                          
    #                                                                               
    explicit-path path2                                                             
     next sid label 330001 type adjacency index 1                                           
     next sid label 330003 type adjacency index 2           
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
     ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.1.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.11.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 3.3.3.9
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
     mpls te path explicit-path path1
     match dscp ipv4 0 to 10
    #
    interface Tunnel2
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 3.3.3.9
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 2
     mpls te path explicit-path path2
     match dscp ipv4 default
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 3.3.3.9 enable
     #              
     ipv4-family vpn-instance vpna
      peer 10.1.1.1 as-number 65410
    #               
    tunnel-policy p1
     tunnel select-seq sr-te load-balance-number 2 unmix
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls    
     mpls te         
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
     ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.11.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.12.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 200:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls  
     mpls te  
    #                                                                               
    explicit-path path1                                                             
     next sid label 330000 type adjacency index 1                                          
     next sid label 330003 type adjacency index 2                                          
    #                                                                               
    explicit-path path2                                                             
     next sid label 330001 type adjacency index 1                                          
     next sid label 330002 type adjacency index 2 
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
     ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.14.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.2.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.12.1.2 255.255.255.0
     isis enable 1  
    #
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
    #
    interface Tunnel1
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 1.1.1.9
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 1
     mpls te path explicit-path path1
     match dscp ipv4 0 to 10
    #
    interface Tunnel2
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 1.1.1.9
     mpls te signal-protocol segment-routing
     mpls te tunnel-id 2
     mpls te path explicit-path path2
     match dscp ipv4 default
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 1.1.1.9 enable
     #              
     ipv4-family vpn-instance vpna
      peer 10.2.1.1 as-number 65420
    #               
    tunnel-policy p1
     tunnel select-seq sr-te load-balance-number 2 unmix
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls
     mpls te            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
     ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.14.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.11.1.1 255.255.255.255
    #
    bgp 65410
     peer 10.1.1.2 as-number 100
     #
     ipv4-family unicast
      network 10.11.1.1 255.255.255.255
      peer 10.1.1.2 enable
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.22.2.2 255.255.255.255
    #
    bgp 65420
     peer 10.2.1.2 as-number 100
     #
     ipv4-family unicast
      network 10.22.2.2 255.255.255.255
      peer 10.2.1.2 enable
    #
    return

Configuration Examples for SR-MPLS TE Policy

This section provides several configuration examples of SR-MPLS TE Policy.

Example for Configuring L3VPN Routes to Be Recursed to Manually Configured SR-MPLS TE Policies (Color-Based)

This section provides an example for configuring L3VPN routes to be recursed to manually configured SR-MPLS TE Policies based on the Color Extended Community to ensure secure communication between users of the same VPN.

Networking Requirements

On the network shown in Figure 1-2681:
  • CE1 and CE2 belong to a VPN instance named vpna.

  • The VPN target used by vpna is 111:1.

Configure L3VPN routes to be recursed to SR-MPLS TE Policies to ensure secure communication between users of the same VPN. Because multiple links exist between PEs on the public network, other links must be able to provide protection for the primary link.

Figure 1-2681 L3VPN route recursion to manually configured SR-MPLS TE Policies

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Precautions

If an interface connecting a PE to a CE is bound to a VPN instance, Layer 3 configurations, such as the IP address and routing protocol configuration, on the interface will be deleted. Reconfigure them if needed.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the backbone network for the PEs to communicate.

  2. Enable MPLS and SR for each device on the backbone network, and configure static adjacency SIDs.

  3. Configure an SR-MPLS TE Policy with primary and backup paths on each PE.

  4. Configure SBFD and HSB on each PE to enhance SR-MPLS TE Policy reliability.

  5. Apply an import or export route-policy to a specified VPNv4 peer on each PE, and set the Color Extended Community. In this example, an import route-policy with the Color Extended Community is applied.

  6. Establish an MP-IBGP peer relationship between PEs for them to exchange routing information.

  7. Create a VPN instance and enable the IPv4 address family on each PE. Then, bind each PE's interface connecting the PE to a CE to the corresponding VPN instance.

  8. Configure a tunnel selection policy on each PE.

  9. Establish an EBGP peer relationship between each CE-PE pair for the CE and PE to exchange routing information.
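    The core of the color-based recursion in steps 5 and 6 can be sketched with the values used in this example: a route-policy attaches a Color Extended Community to the received VPNv4 routes, and recursion succeeds when an SR-MPLS TE Policy exists whose color and endpoint match that community and the route's BGP next hop. On PE1:

     route-policy color100 permit node 1
      apply extcommunity color 0:100
     #
     segment-routing
      sr-te policy policy100 endpoint 3.3.3.9 color 100

    A VPN route recurses to policy100 only when both attributes match: the route's color (0:100) equals the policy's color (100), and the route's BGP next hop (3.3.3.9, PE2's loopback address) equals the policy's endpoint.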

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of PEs and Ps

  • VPN target and RD of vpna

Procedure

  1. Configure interface IP addresses for each device on the backbone network.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.14.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 10.12.1.2 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP for each device on the backbone network to implement interworking between PEs and Ps. In this example, the IGP is IS-IS.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] isis enable 1
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] isis enable 1
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions for each device on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Enable SR for each device on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-1
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] traffic-eng level-1
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-1
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] traffic-eng level-1
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] quit
    [*P2] commit

  5. Configure an SR-MPLS TE Policy on each PE.

    # Configure PE1.

    [~PE1] segment-routing
    [~PE1-segment-routing] segment-list pe1
    [*PE1-segment-routing-segment-list-pe1] index 10 sid label 330000
    [*PE1-segment-routing-segment-list-pe1] index 20 sid label 330002
    [*PE1-segment-routing-segment-list-pe1] quit
    [*PE1-segment-routing] segment-list pe1backup
    [*PE1-segment-routing-segment-list-pe1backup] index 10 sid label 330001
    [*PE1-segment-routing-segment-list-pe1backup] index 20 sid label 330003
    [*PE1-segment-routing-segment-list-pe1backup] quit
    [*PE1-segment-routing] sr-te policy policy100 endpoint 3.3.3.9 color 100
    [*PE1-segment-routing-te-policy-policy100] binding-sid 115
    [*PE1-segment-routing-te-policy-policy100] mtu 1000
    [*PE1-segment-routing-te-policy-policy100] candidate-path preference 100
    [*PE1-segment-routing-te-policy-policy100-path] segment-list pe1backup
    [*PE1-segment-routing-te-policy-policy100-path] quit
    [*PE1-segment-routing-te-policy-policy100] candidate-path preference 200
    [*PE1-segment-routing-te-policy-policy100-path] segment-list pe1
    [*PE1-segment-routing-te-policy-policy100-path] quit
    [*PE1-segment-routing-te-policy-policy100] quit
    [*PE1-segment-routing] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [~PE2-segment-routing] segment-list pe2
    [*PE2-segment-routing-segment-list-pe2] index 10 sid label 330000
    [*PE2-segment-routing-segment-list-pe2] index 20 sid label 330003
    [*PE2-segment-routing-segment-list-pe2] quit
    [*PE2-segment-routing] segment-list pe2backup
    [*PE2-segment-routing-segment-list-pe2backup] index 10 sid label 330001
    [*PE2-segment-routing-segment-list-pe2backup] index 20 sid label 330002
    [*PE2-segment-routing-segment-list-pe2backup] quit
    [*PE2-segment-routing] sr-te policy policy200 endpoint 1.1.1.9 color 200
    [*PE2-segment-routing-te-policy-policy200] binding-sid 115
    [*PE2-segment-routing-te-policy-policy200] mtu 1000
    [*PE2-segment-routing-te-policy-policy200] candidate-path preference 100
    [*PE2-segment-routing-te-policy-policy200-path] segment-list pe2backup
    [*PE2-segment-routing-te-policy-policy200-path] quit
    [*PE2-segment-routing-te-policy-policy200] candidate-path preference 200
    [*PE2-segment-routing-te-policy-policy200-path] segment-list pe2
    [*PE2-segment-routing-te-policy-policy200-path] quit
    [*PE2-segment-routing-te-policy-policy200] quit
    [*PE2-segment-routing] quit
    [*PE2] commit

    After completing the configuration, run the display sr-te policy command to check SR-MPLS TE Policy information. The following example uses the command output on PE1.

    [~PE1] display sr-te policy
    PolicyName : policy100
    Endpoint             : 3.3.3.9              Color                : 100
    TunnelId             : 1                    TunnelType           : SR-TE Policy
    Binding SID          : 115                  MTU                  : 1000
    Policy State         : Up                   State Change Time    : 2020-05-18 11:23:39
    Admin State          : Up                   Traffic Statistics   : Disable
    BFD                  : Disable              Backup Hot-Standby   : Disable
    DiffServ-Mode        : -
    Active IGP Metric    : -
    Candidate-path Count : 2                    
    
    Candidate-path Preference: 200
    Path State           : Active               Path Type            : Primary
    Protocol-Origin      : Configuration(30)    Originator           : 0, 0.0.0.0
    Discriminator        : 200                  Binding SID          : -
    GroupId              : 2                    Policy Name          : policy100
    Template ID          : -
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1
      Segment-List ID    : 129                  XcIndex              : 68
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330000, 330002
    
    Candidate-path Preference: 100
    Path State           : Inactive (Valid)     Path Type            : -
    Protocol-Origin      : Configuration(30)    Originator           : 0, 0.0.0.0
    Discriminator        : 100                  Binding SID          : -
    GroupId              : 1                    Policy Name          : policy100
    Template ID          : -
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1backup
      Segment-List ID    : 194                  XcIndex              : -
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330001, 330003

  6. Configure SBFD and HSB to enhance SR-MPLS TE Policy reliability.

    # Configure PE1.

    [~PE1] bfd
    [*PE1-bfd] quit
    [*PE1] sbfd
    [*PE1-sbfd] reflector discriminator 1.1.1.9
    [*PE1-sbfd] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] sr-te-policy seamless-bfd enable
    [*PE1-segment-routing] sr-te-policy backup hot-standby enable
    [*PE1-segment-routing] commit
    [~PE1-segment-routing] quit

    # Configure PE2.

    [~PE2] bfd
    [*PE2-bfd] quit
    [*PE2] sbfd
    [*PE2-sbfd] reflector discriminator 3.3.3.9
    [*PE2-sbfd] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] sr-te-policy seamless-bfd enable
    [*PE2-segment-routing] sr-te-policy backup hot-standby enable
    [*PE2-segment-routing] commit
    [~PE2-segment-routing] quit

  7. Configure a route-policy for setting the Color Extended Community.

    # Configure PE1.

    [~PE1] route-policy color100 permit node 1
    [*PE1-route-policy] apply extcommunity color 0:100
    [*PE1-route-policy] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] route-policy color200 permit node 1
    [*PE2-route-policy] apply extcommunity color 0:200
    [*PE2-route-policy] quit
    [*PE2] commit

  8. Establish an MP-IBGP peer relationship between PEs, apply an import route-policy to a specified VPNv4 peer on each PE, and set the Color Extended Community.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] ipv4-family vpnv4
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 route-policy color100 import
    [*PE1-bgp-af-vpnv4] commit
    [~PE1-bgp-af-vpnv4] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv4-family vpnv4
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 route-policy color200 import
    [*PE2-bgp-af-vpnv4] commit
    [~PE2-bgp-af-vpnv4] quit
    [~PE2-bgp] quit

    After completing the configuration, run the display bgp peer or display bgp vpnv4 all peer command on each PE. The following example uses the command output on PE1. The command output shows that a BGP peer relationship has been established between the PEs and is in the Established state.

    [~PE1] display bgp peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        2        6     0     00:00:12   Established   0
    [~PE1] display bgp vpnv4 all peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100   12      18         0     00:09:38   Established   0

  9. Create a VPN instance and enable the IPv4 address family on each PE. Then, bind each PE's interface connecting the PE to a CE to the corresponding VPN instance.

    # Configure PE1.

    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  10. Configure a tunnel selection policy on each PE so that the specified SR-MPLS TE Policy is preferentially selected.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE1-tunnel-policy-p1] quit
    [*PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE2-tunnel-policy-p1] quit
    [*PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] commit

  11. Establish an EBGP peer relationship between each CE-PE pair.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ip address 10.11.1.1 32
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.1.1.2 as-number 100
    [*CE1-bgp] network 10.11.1.1 32
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see "Configuration Files" in this section.

    After the configuration is complete, run the display ip vpn-instance verbose command on the PEs to check VPN instance configurations. Check that each PE can successfully ping its connected CE.

    If a PE has multiple interfaces bound to the same VPN instance, use the -a source-ip-address parameter to specify a source IP address when running the ping -vpn-instance vpn-instance-name -a source-ip-address dest-ip-address command to ping the CE that is connected to the remote PE. If the source IP address is not specified, the ping operation may fail.
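    For example, on PE1 in this topology, such a ping might be issued as follows (a sketch only; 10.1.1.2 is PE1's vpna-bound interface address and 10.2.1.1 is CE2's interface address):

     <PE1> ping -vpn-instance vpna -a 10.1.1.2 10.2.1.1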

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv4-family vpn-instance vpna
    [*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
    [*PE1-bgp-vpna] commit
    [~PE1-bgp-vpna] quit
    [~PE1-bgp] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see "Configuration Files" in this section.

    After completing the configuration, run the display bgp vpnv4 vpn-instance peer command on each PE. The following example uses the command output on PE1, which shows that a BGP peer relationship in the Established state has been set up between PE1 and CE1.

    [~PE1] display bgp vpnv4 vpn-instance vpna peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
    
     VPN-Instance vpna, Router ID 1.1.1.9:
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      10.1.1.1        4       65410       91       90     0 01:15:39 Established        1

  12. Verify the configuration.

    After completing the configuration, run the display ip routing-table vpn-instance command on each PE to check information about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table: vpna
             Destinations : 7        Routes : 7
    Destination/Mask    Proto  Pre  Cost     Flags NextHop         Interface
         10.1.1.0/24    Direct 0    0        D     10.1.1.2        GigabitEthernet2/0/0
         10.1.1.2/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.1.1.255/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
        10.11.1.1/32    EBGP   255  0        RD    10.1.1.1        GigabitEthernet2/0/0
        10.22.2.2/32    IBGP   255  0        RD    3.3.3.9         policy100
          127.0.0.0/8   Direct 0    0        D     127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct 0    0        D     127.0.0.1       InLoopBack0

    Run the display ip routing-table vpn-instance vpna verbose command on each PE to check details about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna 10.22.2.2 verbose
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : vpna
    Summary Count : 1
    
    Destination: 10.22.2.2/32         
         Protocol: IBGP               Process ID: 0              
       Preference: 255                      Cost: 0              
          NextHop: 3.3.3.9             Neighbour: 3.3.3.9
            State: Active Adv Relied         Age: 01h18m38s           
              Tag: 0                    Priority: low            
            Label: 48180                 QoSInfo: 0x0           
       IndirectID: 0x10000B9            Instance:                                 
     RelayNextHop: 0.0.0.0             Interface: policy100
         TunnelID: 0x000000003200000041 Flags: RD

    The command output shows that the VPN route has been successfully recursed to the specified SR-MPLS TE Policy.

    CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at 10.22.2.2.

    [~CE1] ping -a 10.11.1.1 10.22.2.2
      PING 10.22.2.2: 56  data bytes, press CTRL_C to break
        Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
        Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=34 ms
        Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=34 ms
      --- 10.22.2.2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 34/48/72 ms  
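    To further examine the forwarding path hop by hop, a tracert can be run from CE1 in the same way (a sketch; the -a source-address option of tracert is assumed here, and the actual output depends on the live path):

     <CE1> tracert -a 10.11.1.1 10.22.2.2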

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 100:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    bfd
    #
    sbfd
     reflector discriminator 1.1.1.9
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
     ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
     sr-te-policy backup hot-standby enable
     sr-te-policy seamless-bfd enable
     segment-list pe1
      index 10 sid label 330000
      index 20 sid label 330002
     segment-list pe1backup
      index 10 sid label 330001
      index 20 sid label 330003
     sr-te policy policy100 endpoint 3.3.3.9 color 100
      binding-sid 115
      mtu 1000   
      candidate-path preference 200
       segment-list pe1
      candidate-path preference 100
       segment-list pe1backup
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.1.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.11.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 3.3.3.9 enable
      peer 3.3.3.9 route-policy color100 import
     #              
     ipv4-family vpn-instance vpna
      peer 10.1.1.1 as-number 65410
    #
    route-policy color100 permit node 1
     apply extcommunity color 0:100
    #               
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
     ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.11.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.12.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 200:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    bfd
    #
    sbfd
     reflector discriminator 3.3.3.9
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
     ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
     sr-te-policy backup hot-standby enable
     sr-te-policy seamless-bfd enable
     segment-list pe2
      index 10 sid label 330000
      index 20 sid label 330003
     segment-list pe2backup
      index 10 sid label 330001
      index 20 sid label 330002
     sr-te policy policy200 endpoint 1.1.1.9 color 200
      binding-sid 115
      mtu 1000 
      candidate-path preference 200
       segment-list pe2
      candidate-path preference 100
       segment-list pe2backup
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.14.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.2.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.12.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 1.1.1.9 enable
      peer 1.1.1.9 route-policy color200 import
     #              
     ipv4-family vpn-instance vpna
      peer 10.2.1.1 as-number 65420
    #
    route-policy color200 permit node 1
     apply extcommunity color 0:200
    #               
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
     ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.14.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.11.1.1 255.255.255.255
    #
    bgp 65410
     peer 10.1.1.2 as-number 100
     #
     ipv4-family unicast
      network 10.11.1.1 255.255.255.255
      peer 10.1.1.2 enable
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.22.2.2 255.255.255.255
    #
    bgp 65420
     peer 10.2.1.2 as-number 100
     #
     ipv4-family unicast
      network 10.22.2.2 255.255.255.255
      peer 10.2.1.2 enable
    #
    return

Example for Configuring L3VPN Routes to Be Recursed to Manually Configured SR-MPLS TE Policies (DSCP-Based)

This section provides an example for configuring L3VPN routes to be recursed to manually configured SR-MPLS TE Policies based on the DSCP value to ensure secure communication between users of the same VPN.

Networking Requirements

On the network shown in Figure 1-2682:
  • CE1 and CE2 belong to a VPN instance named vpna.

  • The VPN target used by vpna is 111:1.

Configure L3VPN routes to be recursed to SR-MPLS TE Policies to ensure secure communication between users of the same VPN. Because multiple links exist between the PEs on the public network, traffic with different DSCP values must be steered over different paths.

Figure 1-2682 L3VPN route recursion to manually configured SR-MPLS TE Policies

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Precautions

If an interface connecting a PE to a CE is bound to a VPN instance, Layer 3 configurations, such as the IP address and routing protocol configuration, on the interface will be deleted. Reconfigure them if needed.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the backbone network for the PEs to communicate.

  2. Enable MPLS and SR for each device on the backbone network, and configure static adjacency SIDs.

  3. Configure two SR-MPLS TE Policies with different color values on each PE to carry traffic with different DSCP values.

  4. Establish an MP-IBGP peer relationship between PEs for them to exchange routing information.

  5. Create a VPN instance and enable the IPv4 address family on each PE. Then, bind each PE's interface connecting the PE to a CE to the corresponding VPN instance.

  6. Configure an SR-MPLS TE Policy group on each PE and define mappings between the color and DSCP values.

  7. Configure a tunnel selection policy on each PE for the specified SR-MPLS TE Policy group to be preferentially selected.

  8. Establish an EBGP peer relationship between each CE-PE pair for the CE and PE to exchange routing information.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of PEs and Ps

  • VPN target and RD of vpna

Procedure

  1. Configure interface IP addresses for each device on the backbone network.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.14.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 10.12.1.2 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP for each device on the backbone network to implement interworking between PEs and Ps. In this example, the IGP is IS-IS.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] isis enable 1
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] isis enable 1
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions for each device on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Enable SR for each device on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-1
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] traffic-eng level-1
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-1
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] traffic-eng level-1
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] quit
    [*P2] commit
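    The static adjacency SIDs configured above form a simple forwarding table: when an adjacency SID sits at the top of the label stack, the node pops it and forwards the packet over that one specific link. The following Python sketch (an illustrative model only, not device behavior; the node names and the walk function are assumptions) shows how a label stack built from these SIDs encodes an explicit path across this example's topology.

```python
# (node, adjacency SID) -> downstream node, per the static SIDs above.
ADJ = {
    ("PE1", 330000): "P1",  ("PE1", 330001): "P2",
    ("P1",  330003): "PE1", ("P1",  330002): "PE2",
    ("PE2", 330000): "P1",  ("PE2", 330001): "P2",
    ("P2",  330002): "PE1", ("P2",  330003): "PE2",
}

def walk(head, label_stack):
    """Return the node path taken when 'head' pushes 'label_stack'."""
    node, path = head, [head]
    for sid in label_stack:          # each hop pops one adjacency SID
        node = ADJ[(node, sid)]      # and forwards over the matching link
        path.append(node)
    return path

print(walk("PE1", [330000, 330002]))  # PE1 -> P1 -> PE2
print(walk("PE1", [330001, 330003]))  # PE1 -> P2 -> PE2
```

    For example, the stack [330000, 330002] is exactly the label stack that segment lists built from these SIDs produce, taking the PE1-P1-PE2 path.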

  5. Configure SR-MPLS TE Policies.

    # Configure PE1. Deploy two SR-MPLS TE Policies (policy100 and policy200) from PE1 to PE2 to carry traffic with different DSCP values.

    [~PE1] segment-routing
    [~PE1-segment-routing] segment-list pe1
    [*PE1-segment-routing-segment-list-pe1] index 10 sid label 330000
    [*PE1-segment-routing-segment-list-pe1] index 20 sid label 330002
    [*PE1-segment-routing-segment-list-pe1] quit
    [*PE1-segment-routing] segment-list pe1backup
    [*PE1-segment-routing-segment-list-pe1backup] index 10 sid label 330001
    [*PE1-segment-routing-segment-list-pe1backup] index 20 sid label 330003
    [*PE1-segment-routing-segment-list-pe1backup] quit
    [*PE1-segment-routing] sr-te policy policy100 endpoint 3.3.3.9 color 100
    [*PE1-segment-routing-te-policy-policy100] binding-sid 115
    [*PE1-segment-routing-te-policy-policy100] mtu 1000
    [*PE1-segment-routing-te-policy-policy100] diffserv-mode pipe af1 green
    [*PE1-segment-routing-te-policy-policy100] candidate-path preference 100
    [*PE1-segment-routing-te-policy-policy100-path] segment-list pe1
    [*PE1-segment-routing-te-policy-policy100-path] quit
    [*PE1-segment-routing-te-policy-policy100] quit
    [*PE1-segment-routing] sr-te policy policy200 endpoint 3.3.3.9 color 200
    [*PE1-segment-routing-te-policy-policy200] binding-sid 125
    [*PE1-segment-routing-te-policy-policy200] mtu 1000
    [*PE1-segment-routing-te-policy-policy200] diffserv-mode pipe cs7 red
    [*PE1-segment-routing-te-policy-policy200] candidate-path preference 100
    [*PE1-segment-routing-te-policy-policy200-path] segment-list pe1backup
    [*PE1-segment-routing-te-policy-policy200-path] quit
    [*PE1-segment-routing-te-policy-policy200] quit
    [*PE1-segment-routing] quit
    [*PE1] commit

    # Configure PE2. Deploy two SR-MPLS TE Policies (policy100 and policy200) from PE2 to PE1 to carry traffic with different DSCP values.

    [~PE2] segment-routing
    [~PE2-segment-routing] segment-list pe2
    [*PE2-segment-routing-segment-list-pe2] index 10 sid label 330000
    [*PE2-segment-routing-segment-list-pe2] index 20 sid label 330003
    [*PE2-segment-routing-segment-list-pe2] quit
    [*PE2-segment-routing] segment-list pe2backup
    [*PE2-segment-routing-segment-list-pe2backup] index 10 sid label 330001
    [*PE2-segment-routing-segment-list-pe2backup] index 20 sid label 330002
    [*PE2-segment-routing-segment-list-pe2backup] quit
    [*PE2-segment-routing] sr-te policy policy100 endpoint 1.1.1.9 color 100
    [*PE2-segment-routing-te-policy-policy100] binding-sid 115
    [*PE2-segment-routing-te-policy-policy100] mtu 1000
    [*PE2-segment-routing-te-policy-policy100] diffserv-mode pipe af1 green
    [*PE2-segment-routing-te-policy-policy100] candidate-path preference 100
    [*PE2-segment-routing-te-policy-policy100-path] segment-list pe2
    [*PE2-segment-routing-te-policy-policy100-path] quit
    [*PE2-segment-routing-te-policy-policy100] quit
    [*PE2-segment-routing] sr-te policy policy200 endpoint 1.1.1.9 color 200
    [*PE2-segment-routing-te-policy-policy200] binding-sid 125
    [*PE2-segment-routing-te-policy-policy200] mtu 1000
    [*PE2-segment-routing-te-policy-policy200] diffserv-mode pipe cs7 red
    [*PE2-segment-routing-te-policy-policy200] candidate-path preference 100
    [*PE2-segment-routing-te-policy-policy200-path] segment-list pe2backup
    [*PE2-segment-routing-te-policy-policy200-path] quit
    [*PE2-segment-routing-te-policy-policy200] quit
    [*PE2-segment-routing] quit
    [*PE2] commit

    After the configuration is complete, run the display sr-te policy command to check SR-MPLS TE Policy information. The following example uses the command output on PE1.

    [~PE1] display sr-te policy policy-name policy100
    PolicyName : policy100
    Endpoint             : 3.3.3.9              Color                : 100
    TunnelId             : 1                    TunnelType           : SR-TE Policy
    Binding SID          : 115                  MTU                  : 1000
    Policy State         : Up                   State Change Time    : 2020-04-27 09:23:13
    Admin State          : Up                   Traffic Statistics   : Disable
    BFD                  : Disable              Backup Hot-Standby   : Disable
    DiffServ-Mode        : Pipe, AF1, Green
    Active IGP Metric    : -
    Candidate-path Count : 1                    
    
    Candidate-path Preference: 100
    Path State           : Active               Path Type            : Primary
    Protocol-Origin      : Configuration(30)    Originator           : 0, 0.0.0.0
    Discriminator        : 100                  Binding SID          : 115
    GroupId              : 2                    Policy Name          : policy100
    Template ID          : -
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1
      Segment-List ID    : 129                  XcIndex              : 68
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330000, 330002

  6. Establish an MP-IBGP peer relationship between PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] ipv4-family vpnv4
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable
    [*PE1-bgp-af-vpnv4] commit
    [~PE1-bgp-af-vpnv4] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv4-family vpnv4
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
    [*PE2-bgp-af-vpnv4] commit
    [~PE2-bgp-af-vpnv4] quit
    [~PE2-bgp] quit

    After completing the configuration, run the display bgp peer or display bgp vpnv4 all peer command on each PE. The following example uses the command output on PE1. The command output shows that a BGP peer relationship has been established between the PEs and is in the Established state.

    [~PE1] display bgp peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        2        6     0     00:00:12   Established   0
    [~PE1] display bgp vpnv4 all peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
  3.3.3.9         4   100       12       18     0     00:09:38   Established   0

  7. Create a VPN instance and enable the IPv4 address family on each PE. Then, bind each PE's interface connecting the PE to a CE to the corresponding VPN instance.

    # Configure PE1.

    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  8. Configure a tunnel selection policy on each PE for the specified SR-MPLS TE Policy group to be preferentially selected.

    Configure packets to enter different SR-MPLS TE Policies in the SR-MPLS TE Policy group based on DSCP values. In this example, packets carrying the DSCP value 20 enter policy100 with the color value 100, and those carrying the DSCP value 40 enter policy200 with the color value 200.

    # Configure PE1.

    [~PE1] segment-routing
    [~PE1-segment-routing] sr-te-policy group 1
    [*PE1-segment-routing-te-policy-group-1] endpoint 3.3.3.9
    [*PE1-segment-routing-te-policy-group-1] color 100 match dscp ipv4 20
    [*PE1-segment-routing-te-policy-group-1] color 200 match dscp ipv4 40
    [*PE1-segment-routing-te-policy-group-1] quit
    [*PE1-segment-routing] quit
    [*PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel binding destination 3.3.3.9 sr-te-policy group 1
    [*PE1-tunnel-policy-p1] quit
    [*PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [~PE2-segment-routing] sr-te-policy group 1
    [*PE2-segment-routing-te-policy-group-1] endpoint 1.1.1.9
    [*PE2-segment-routing-te-policy-group-1] color 100 match dscp ipv4 20
    [*PE2-segment-routing-te-policy-group-1] color 200 match dscp ipv4 40
    [*PE2-segment-routing-te-policy-group-1] quit
    [*PE2-segment-routing] quit
    [*PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel binding destination 1.1.1.9 sr-te-policy group 1
    [*PE2-tunnel-policy-p1] quit
    [*PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] commit

    After the configuration is complete, the headend first recurses packets based on their destination address to find the associated SR-MPLS TE Policy group. It then maps the DSCP value of the packets to a color value and selects the SR-MPLS TE Policy configured with that color in the group, thereby achieving DSCP-based traffic steering.
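    This two-stage lookup (destination to policy group, then DSCP to color to policy) can be sketched in Python. This is a simplified model of the selection logic, not the device's internal implementation; the data-structure names and the behavior for unmatched DSCP values are assumptions.

```python
# dest endpoint -> {dscp: color}, per the 'color ... match dscp ipv4 ...' rules
GROUPS = {"3.3.3.9": {20: 100, 40: 200}}
# (endpoint, color) -> SR-MPLS TE Policy selected in the group
POLICIES = {("3.3.3.9", 100): "policy100",
            ("3.3.3.9", 200): "policy200"}

def steer(dest, dscp):
    """Pick the SR-MPLS TE Policy for a packet (illustrative model)."""
    color = GROUPS[dest].get(dscp)
    if color is None:
        return None   # simplification: unmatched DSCP selects no policy here
    return POLICIES[(dest, color)]

print(steer("3.3.3.9", 20))   # policy100
print(steer("3.3.3.9", 40))   # policy200
```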

    Run the display sr-te policy group 1 command on each PE to check the SR-MPLS TE Policy group status. PE1 is used as an example.

    [~PE1] display sr-te policy group 1
                        SR-TE Policy Group Information
    -------------------------------------------------------------------------------
    GroupID   : 1                     GroupState  : UP
    GTunnelID : 67                    GTunnelType : SR-TE Policy Group
    Endpoint  : 3.3.3.9               UP/ALL Num  : 1/1
    -------------------------------------------------------------------------------
    TunnelId   AfType   Color    State    Dscp
    -------------------------------------------------------------------------------
    65         IPV4     100      UP       20
    66         IPV4     200      UP       40
    -------------------------------------------------------------------------------

  9. Establish an EBGP peer relationship between each CE-PE pair.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ip address 10.11.1.1 32
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.1.1.2 as-number 100
    [*CE1-bgp] network 10.11.1.1 32
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see "Configuration Files" in this section.

    After the configuration is complete, run the display ip vpn-instance verbose command on the PEs to check VPN instance configurations. Check that each PE can successfully ping its connected CE.

    If a PE has multiple interfaces bound to the same VPN instance, use the -a source-ip-address parameter to specify a source IP address when running the ping -vpn-instance vpn-instance-name -a source-ip-address dest-ip-address command to ping the CE that is connected to the remote PE. If the source IP address is not specified, the ping operation may fail.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv4-family vpn-instance vpna
    [*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
    [*PE1-bgp-vpna] commit
    [~PE1-bgp-vpna] quit
    [~PE1-bgp] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see "Configuration Files" in this section.

    After the configuration is complete, run the display bgp vpnv4 vpn-instance peer command on the PEs to check whether BGP peer relationships have been established between the PEs and CEs. If the Established state is displayed in the command output, the BGP peer relationships have been established successfully.

    The following example uses the command output on PE1 to show that a BGP peer relationship has been established between PE1 and CE1.

    [~PE1] display bgp vpnv4 vpn-instance vpna peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
    
     VPN-Instance vpna, Router ID 1.1.1.9:
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      10.1.1.1        4       65410       91       90     0 01:15:39 Established        1

  10. Verify the configuration.

    After completing the configuration, run the display ip routing-table vpn-instance vpn-instance-name command on each PE to check information about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table: vpna
             Destinations : 7        Routes : 7
    Destination/Mask    Proto  Pre  Cost     Flags NextHop         Interface
         10.1.1.0/24    Direct 0    0        D     10.1.1.2        GigabitEthernet2/0/0
         10.1.1.2/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.1.1.255/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
        10.11.1.1/32    EBGP   255  0        RD    10.1.1.1        GigabitEthernet2/0/0
        10.22.2.2/32    IBGP   255  0        RD    3.3.3.9         SR-TE Policy Group
          127.0.0.0/8   Direct 0    0        D     127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct 0    0        D     127.0.0.1       InLoopBack0

    Run the display ip routing-table vpn-instance vpna verbose command on each PE to check details about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna 10.22.2.2 verbose
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : vpna
    Summary Count : 1
    
    Destination: 10.22.2.2/32         
         Protocol: IBGP               Process ID: 0              
       Preference: 255                      Cost: 0              
          NextHop: 3.3.3.9             Neighbour: 3.3.3.9
            State: Active Adv Relied         Age: 01h18m38s           
              Tag: 0                    Priority: low            
            Label: 48180                 QoSInfo: 0x0           
       IndirectID: 0x10000B9            Instance:                                 
     RelayNextHop: 0.0.0.0             Interface: SR-TE Policy Group
         TunnelID: 0x000000003300000041 Flags: RD

    The command output shows that the VPN route has been successfully recursed to the specified SR-MPLS TE Policy.

    CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at 10.22.2.2.

    [~CE1] ping -a 10.11.1.1 10.22.2.2
      PING 10.22.2.2: 56  data bytes, press CTRL_C to break
        Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
        Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=34 ms
        Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=34 ms
      --- 10.22.2.2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 34/48/72 ms  

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 100:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
     ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
     segment-list pe1
      index 10 sid label 330000
      index 20 sid label 330002
     segment-list pe1backup
      index 10 sid label 330001
      index 20 sid label 330003
     sr-te-policy group 1
      endpoint 3.3.3.9
      color 100 match dscp ipv4 20
      color 200 match dscp ipv4 40
     sr-te policy policy100 endpoint 3.3.3.9 color 100
      diffserv-mode pipe af1 green
      binding-sid 115
      mtu 1000 
      candidate-path preference 100
       segment-list pe1
     sr-te policy policy200 endpoint 3.3.3.9 color 200
      diffserv-mode pipe cs7 red
      binding-sid 125
      mtu 1000
      candidate-path preference 100
       segment-list pe1backup
    #
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.1.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.11.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 3.3.3.9 enable
     #              
     ipv4-family vpn-instance vpna
      peer 10.1.1.1 as-number 65410
    #               
    tunnel-policy p1
     tunnel binding destination 3.3.3.9 sr-te-policy group 1
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
     ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.11.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.12.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 200:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
     ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
     segment-list pe2
      index 10 sid label 330000
      index 20 sid label 330003
     segment-list pe2backup
      index 10 sid label 330001
      index 20 sid label 330002
     sr-te-policy group 1
      endpoint 1.1.1.9
      color 100 match dscp ipv4 20
      color 200 match dscp ipv4 40
     sr-te policy policy100 endpoint 1.1.1.9 color 100
      diffserv-mode pipe af1 green
      binding-sid 115
      mtu 1000  
      candidate-path preference 100
       segment-list pe2
     sr-te policy policy200 endpoint 1.1.1.9 color 200
      diffserv-mode pipe cs7 red
      binding-sid 125
      mtu 1000
      candidate-path preference 100
       segment-list pe2backup
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.14.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.2.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.12.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 1.1.1.9 enable
     #              
     ipv4-family vpn-instance vpna
      peer 10.2.1.1 as-number 65420
    #               
    tunnel-policy p1
     tunnel binding destination 1.1.1.9 sr-te-policy group 1
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
     ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.14.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.11.1.1 255.255.255.255
    #
    bgp 65410
     peer 10.1.1.2 as-number 100
     #
     ipv4-family unicast
      network 10.11.1.1 255.255.255.255
      peer 10.1.1.2 enable
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.22.2.2 255.255.255.255
    #
    bgp 65420
     peer 10.2.1.2 as-number 100
     #
     ipv4-family unicast
      network 10.22.2.2 255.255.255.255
      peer 10.2.1.2 enable
    #
    return
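
The SR-TE policy group configuration above steers traffic by DSCP: packets with DSCP 20 follow the color 100 policy and packets with DSCP 40 follow the color 200 policy. A minimal sketch of that mapping (illustrative Python, not device behavior):

```python
# Illustrative DSCP-to-color steering for an SR-TE policy group:
# a packet is mapped to the policy whose configured DSCP value matches
# (per "color 100 match dscp ipv4 20" / "color 200 match dscp ipv4 40").
dscp_to_color = {20: 100, 40: 200}

def steer(dscp, default_color=None):
    """Return the policy color for a packet's DSCP, or the default."""
    return dscp_to_color.get(dscp, default_color)

print(steer(20))  # -> 100
print(steer(40))  # -> 200
```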

Example for Configuring L3VPN Routes to Be Recursed to Dynamically Delivered SR-MPLS TE Policies (Color-Based)

This section provides an example for configuring L3VPN routes to be recursed to dynamically delivered SR-MPLS TE Policies based on the Color Extended Community to ensure secure communication between users of the same VPN.

Networking Requirements

On the network shown in Figure 1-2683:
  • CE1 and CE2 belong to a VPN instance named vpna.

  • The VPN target used by vpna is 111:1.

Configure L3VPN routes to be recursed to SR-MPLS TE Policies to ensure secure communication between users of the same VPN. Because multiple links exist between PEs on the public network, other links must be able to provide protection for the primary link.

Manually configuring SR-MPLS TE Policies is complex and cannot adapt to dynamic network changes. In this example, a controller is deployed and used to dynamically deliver SR-MPLS TE Policies. Because the controller can detect adjacency SID changes through BGP-LS, it is recommended that an IGP be used to dynamically generate adjacency SIDs.

Figure 1-2683 L3VPN route recursion to dynamically delivered SR-MPLS TE Policies

Interfaces 1 through 4 in this example represent GE 1/0/1, GE 1/0/2, GE 1/0/3, and GE 1/0/4, respectively.


Configuration Notes

If an interface connecting a PE to a CE is bound to a VPN instance, Layer 3 configurations, such as the IP address and routing protocol configuration, on the interface will be deleted. Reconfigure them if needed.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the backbone network to ensure that PEs can interwork with each other.

  2. Enable MPLS and SR for each device on the backbone network. In addition, configure IS-IS SR and use IS-IS to dynamically generate adjacency SIDs and advertise the SIDs to neighbors.

  3. Establish a BGP IPv4 SR-Policy address family peer relationship between each PE and the controller, and configure the PEs to report network topology and label information to the controller through BGP-LS. After completing path computation, the controller delivers SR-MPLS TE Policy routes to the PEs through BGP.

  4. Configure SBFD and HSB on each PE to enhance SR-MPLS TE Policy reliability.

  5. Apply an import or export route-policy to a specified VPNv4 peer on each PE, and set the Color Extended Community. In this example, an import route-policy with the Color Extended Community is applied.

  6. Establish an MP-IBGP peer relationship between PEs for them to exchange routing information.

  7. Create a VPN instance and enable the IPv4 address family on each PE. Then, bind each PE interface that connects to a CE to the corresponding VPN instance.

  8. Configure a tunnel selection policy on each PE.

  9. Establish an EBGP peer relationship between each CE-PE pair for the CE and PE to exchange routing information.
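
The color-based recursion at the heart of this roadmap can be modeled as a lookup: a VPN route is steered into the SR-MPLS TE Policy whose (color, endpoint) pair matches the route's Color Extended Community and BGP next hop. A minimal illustrative sketch (function and data names are hypothetical, not the device's actual implementation):

```python
# Illustrative model of color-based route recursion: a VPN route recurses
# to the SR-MPLS TE Policy whose (color, endpoint) matches the route's
# Color Extended Community and BGP next hop.

def resolve_policy(route, policies):
    """Return the name of the matching policy, or None if there is none."""
    key = (route["color"], route["next_hop"])
    return policies.get(key)

# Policies delivered by the controller, keyed by (color, endpoint).
policies = {
    (100, "3.3.3.9"): "policy100",
    (200, "3.3.3.9"): "policy200",
}

# A VPNv4 route from PE2 whose color was set by an import route-policy.
route = {"prefix": "10.22.2.2/32", "next_hop": "3.3.3.9", "color": 100}

print(resolve_policy(route, policies))  # -> policy100
```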

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of PEs and Ps

  • VPN target and RD of vpna

  • SRGB ranges on PEs and Ps

Procedure

  1. Configure interface IP addresses for each device on the backbone network.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/1
    [*PE1-GigabitEthernet1/0/1] ip address 10.13.1.1 24
    [*PE1-GigabitEthernet1/0/1] quit
    [*PE1] interface gigabitethernet1/0/3
    [*PE1-GigabitEthernet1/0/3] ip address 10.11.1.1 24
    [*PE1-GigabitEthernet1/0/3] quit
    [*PE1] interface gigabitethernet1/0/4
    [*PE1-GigabitEthernet1/0/4] ip address 10.3.1.1 24
    [*PE1-GigabitEthernet1/0/4] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/1
    [*P1-GigabitEthernet1/0/1] ip address 10.11.1.2 24
    [*P1-GigabitEthernet1/0/1] quit
    [*P1] interface gigabitethernet1/0/2
    [*P1-GigabitEthernet1/0/2] ip address 10.12.1.1 24
    [*P1-GigabitEthernet1/0/2] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/1
    [*PE2-GigabitEthernet1/0/1] ip address 10.14.1.2 24
    [*PE2-GigabitEthernet1/0/1] quit
    [*PE2] interface gigabitethernet1/0/3
    [*PE2-GigabitEthernet1/0/3] ip address 10.12.1.2 24
    [*PE2-GigabitEthernet1/0/3] quit
    [*PE2] interface gigabitethernet1/0/4
    [*PE2-GigabitEthernet1/0/4] ip address 10.4.1.1 24
    [*PE2-GigabitEthernet1/0/4] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/1
    [*P2-GigabitEthernet1/0/1] ip address 10.13.1.2 24
    [*P2-GigabitEthernet1/0/1] quit
    [*P2] interface gigabitethernet1/0/2
    [*P2-GigabitEthernet1/0/2] ip address 10.14.1.1 24
    [*P2-GigabitEthernet1/0/2] quit
    [*P2] commit

    # Configure a controller.

    <HUAWEI> system-view
    [~HUAWEI] sysname Controller
    [*HUAWEI] commit
    [~Controller] interface gigabitethernet1/0/1
    [~Controller-GigabitEthernet1/0/1] ip address 10.3.1.2 24
    [*Controller-GigabitEthernet1/0/1] quit
    [*Controller] interface gigabitethernet1/0/2
    [*Controller-GigabitEthernet1/0/2] ip address 10.4.1.2 24
    [*Controller-GigabitEthernet1/0/2] quit
    [*Controller] commit

  2. Configure an IGP for each device on the backbone network to implement interworking between PEs and Ps. IS-IS is used as an example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/1
    [*PE1-GigabitEthernet1/0/1] isis enable 1
    [*PE1-GigabitEthernet1/0/1] quit
    [*PE1] interface gigabitethernet1/0/3
    [*PE1-GigabitEthernet1/0/3] isis enable 1
    [*PE1-GigabitEthernet1/0/3] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/1
    [*P1-GigabitEthernet1/0/1] isis enable 1
    [*P1-GigabitEthernet1/0/1] quit
    [*P1] interface gigabitethernet1/0/2
    [*P1-GigabitEthernet1/0/2] isis enable 1
    [*P1-GigabitEthernet1/0/2] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/3
    [*PE2-GigabitEthernet1/0/3] isis enable 1
    [*PE2-GigabitEthernet1/0/3] quit
    [*PE2] interface gigabitethernet1/0/1
    [*PE2-GigabitEthernet1/0/1] isis enable 1
    [*PE2-GigabitEthernet1/0/1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/1
    [*P2-GigabitEthernet1/0/1] isis enable 1
    [*P2-GigabitEthernet1/0/1] quit
    [*P2] interface gigabitethernet1/0/2
    [*P2-GigabitEthernet1/0/2] isis enable 1
    [*P2-GigabitEthernet1/0/2] quit
    [*P2] commit

  3. Configure basic MPLS functions for each device on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Enable SR for each device on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-1
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid index 10
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] traffic-eng level-1
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis prefix-sid index 20
    [*P1-LoopBack1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-1
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid index 30
    [*PE2-LoopBack1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] traffic-eng level-1
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The configuration in this example is for reference only.

    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis prefix-sid index 40
    [*P2-LoopBack1] quit
    [*P2] commit
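
In SR-MPLS, the label a node installs for a prefix SID is the SRGB base plus the advertised index. With the SRGB 16000-23999 configured above, PE1's index 10 corresponds to label 16010 and PE2's index 30 to label 16030. A sketch of that arithmetic (hypothetical helper function):

```python
def prefix_sid_label(srgb_base, srgb_end, index):
    """Compute the MPLS label for a prefix SID: SRGB base + SID index."""
    label = srgb_base + index
    if label > srgb_end:
        raise ValueError("SID index falls outside the SRGB")
    return label

print(prefix_sid_label(16000, 23999, 10))  # PE1 -> 16010
print(prefix_sid_label(16000, 23999, 30))  # PE2 -> 16030
```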

  5. Configure BGP-LS and BGP IPv4 SR-Policy address family peer relationships.

    To allow a PE to report topology information to a controller through BGP-LS, you must enable IS-IS-based topology advertisement on the PE. Generally, in an IGP domain, only one device requires this function to be enabled. In this example, this function is enabled on both PE1 and PE2 to improve reliability.

    # Enable IS-IS-based topology advertisement (BGP-LS) on PE1.

    [~PE1] isis 1
    [~PE1-isis-1] bgp-ls enable level-1
    [*PE1-isis-1] commit
    [~PE1-isis-1] quit

    # Enable IS-IS-based topology advertisement (BGP-LS) on PE2.

    [~PE2] isis 1
    [~PE2-isis-1] bgp-ls enable level-1
    [*PE2-isis-1] commit
    [~PE2-isis-1] quit

    # Configure BGP-LS and BGP IPv4 SR-Policy address family peer relationships on PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 10.3.1.2 as-number 100
    [*PE1-bgp] link-state-family unicast
    [*PE1-bgp-af-ls] peer 10.3.1.2 enable
    [*PE1-bgp-af-ls] quit
    [*PE1-bgp] ipv4-family sr-policy
    [*PE1-bgp-af-ipv4-srpolicy] peer 10.3.1.2 enable
    [*PE1-bgp-af-ipv4-srpolicy] commit
    [~PE1-bgp-af-ipv4-srpolicy] quit
    [~PE1-bgp] quit

    # Configure BGP-LS and BGP IPv4 SR-Policy address family peer relationships on PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 10.4.1.2 as-number 100
    [*PE2-bgp] link-state-family unicast
    [*PE2-bgp-af-ls] peer 10.4.1.2 enable
    [*PE2-bgp-af-ls] quit
    [*PE2-bgp] ipv4-family sr-policy
    [*PE2-bgp-af-ipv4-srpolicy] peer 10.4.1.2 enable
    [*PE2-bgp-af-ipv4-srpolicy] commit
    [~PE2-bgp-af-ipv4-srpolicy] quit
    [~PE2-bgp] quit

    # Configure BGP-LS and BGP IPv4 SR-Policy address family peer relationships on the controller.

    [~Controller] bgp 100
    [*Controller-bgp] peer 10.3.1.1 as-number 100
    [*Controller-bgp] peer 10.4.1.1 as-number 100
    [*Controller-bgp] link-state-family unicast
    [*Controller-bgp-af-ls] peer 10.3.1.1 enable
    [*Controller-bgp-af-ls] peer 10.4.1.1 enable
    [*Controller-bgp-af-ls] quit
    [*Controller-bgp] ipv4-family sr-policy
    [*Controller-bgp-af-ipv4-srpolicy] peer 10.3.1.1 enable
    [*Controller-bgp-af-ipv4-srpolicy] peer 10.4.1.1 enable
    [*Controller-bgp-af-ipv4-srpolicy] commit
    [~Controller-bgp-af-ipv4-srpolicy] quit
    [~Controller-bgp] quit

    After the configuration is complete, the PEs can receive the SR-MPLS TE Policy routes delivered by the controller and then generate SR-MPLS TE Policies. You can run the display sr-te policy command to check SR-MPLS TE Policy information. The following example uses the command output on PE1.

    [~PE1] display sr-te policy
    PolicyName : policy100
    Endpoint             : 3.3.3.9              Color                : 100
    TunnelId             : 1                    TunnelType           : SR-TE Policy
    Binding SID          : 115                  MTU                  : 1000
    Policy State         : Up                   State Change Time    : 2019-11-18 10:08:24
    Admin State          : Up                   Traffic Statistics   : Disable
    BFD                  : Disable              Backup Hot-Standby   : Disable
    DiffServ-Mode        : -
    Active IGP Metric    : -
    Candidate-path Count : 2                    
    
    Candidate-path Preference: 200
    Path State           : Active               Path Type            : Primary
    Protocol-Origin      : BGP(20)              Originator           : 0, 0.0.0.0
    Discriminator        : 200                  Binding SID          : -
    GroupId              : 2                    Policy Name          : policy100
    Template ID          : -
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1
      Segment-List ID    : 129                  XcIndex              : 68
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330000, 330002
    
    Candidate-path Preference: 100
    Path State           : Inactive (Valid)     Path Type            : -
    Protocol-Origin      : BGP(20)              Originator           : 0, 0.0.0.0
    Discriminator        : 100                  Binding SID          : -
    GroupId              : 1                    Policy Name          : policy100
    Template ID          : -
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1backup
      Segment-List ID    : 194                  XcIndex              : -
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330001, 330003
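
The output above shows two candidate paths: the valid path with the higher preference (200) is active, while the preference-100 path remains a valid backup. The selection rule can be sketched as follows (simplified model, not the device implementation):

```python
def select_active_path(candidate_paths):
    """Pick the valid candidate path with the highest preference."""
    valid = [p for p in candidate_paths if p["valid"]]
    return max(valid, key=lambda p: p["preference"]) if valid else None

paths = [
    {"preference": 200, "segment_list": "pe1", "valid": True},
    {"preference": 100, "segment_list": "pe1backup", "valid": True},
]
print(select_active_path(paths)["segment_list"])  # -> pe1
```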

  6. Configure SBFD and HSB.

    # Configure PE1.

    [~PE1] bfd
    [*PE1-bfd] quit
    [*PE1] sbfd
    [*PE1-sbfd] reflector discriminator 1.1.1.9
    [*PE1-sbfd] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] sr-te-policy seamless-bfd enable
    [*PE1-segment-routing] sr-te-policy backup hot-standby enable
    [*PE1-segment-routing] commit
    [~PE1-segment-routing] quit

    # Configure PE2.

    [~PE2] bfd
    [*PE2-bfd] quit
    [*PE2] sbfd
    [*PE2-sbfd] reflector discriminator 3.3.3.9
    [*PE2-sbfd] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] sr-te-policy seamless-bfd enable
    [*PE2-segment-routing] sr-te-policy backup hot-standby enable
    [*PE2-segment-routing] commit
    [~PE2-segment-routing] quit
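
An SBFD discriminator is a 32-bit value; writing it in dotted-decimal form (here, each PE's LSR ID) is simply a readable encoding of that integer. For illustration, assuming the dotted quad is read as a big-endian 32-bit value:

```python
import socket
import struct

def discriminator(dotted_quad):
    """Interpret a dotted-quad string as its 32-bit integer value."""
    return struct.unpack("!I", socket.inet_aton(dotted_quad))[0]

print(discriminator("1.1.1.9"))  # PE1 reflector discriminator
print(discriminator("3.3.3.9"))  # PE2 reflector discriminator
```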

  7. Configure a route-policy.

    # Configure PE1.

    [~PE1] route-policy color100 permit node 1
    [*PE1-route-policy] apply extcommunity color 0:100
    [*PE1-route-policy] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] route-policy color200 permit node 1
    [*PE2-route-policy] apply extcommunity color 0:200
    [*PE2-route-policy] quit
    [*PE2] commit
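
The value 0:100 applied by the route-policy consists of two colon-separated fields; the second field is the color (100 or 200) that route recursion matches against in later steps. A hypothetical parser, just to show the encoding:

```python
def parse_color_extcommunity(value):
    """Split a 'flags:color' string such as '0:100' into integer fields."""
    flags, color = value.split(":")
    return int(flags), int(color)

print(parse_color_extcommunity("0:100"))  # -> (0, 100)
print(parse_color_extcommunity("0:200"))  # -> (0, 200)
```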

  8. Establish an MP-IBGP peer relationship between the PEs, apply import route-policies to the VPNv4 peers, and set the Color Extended Community for routes.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] ipv4-family vpnv4
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable
    [*PE1-bgp-af-vpnv4] peer 3.3.3.9 route-policy color100 import
    [*PE1-bgp-af-vpnv4] commit
    [~PE1-bgp-af-vpnv4] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [~PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv4-family vpnv4
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
    [*PE2-bgp-af-vpnv4] peer 1.1.1.9 route-policy color200 import
    [*PE2-bgp-af-vpnv4] commit
    [~PE2-bgp-af-vpnv4] quit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp peer or display bgp vpnv4 all peer command on each PE to check whether a BGP peer relationship has been established between the PEs. If the Established state is displayed in the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        2        6     0     00:00:12   Established   0
    [~PE1] display bgp vpnv4 all peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100   12      18         0     00:09:38   Established   0

  9. Create a VPN instance and enable the IPv4 address family on each PE. Then, bind each PE interface that connects to a CE to the corresponding VPN instance.

    # Configure PE1.

    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] interface gigabitethernet1/0/2
    [*PE1-GigabitEthernet1/0/2] ip binding vpn-instance vpna
    [*PE1-GigabitEthernet1/0/2] ip address 10.1.1.2 24
    [*PE1-GigabitEthernet1/0/2] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] interface gigabitethernet1/0/2
    [*PE2-GigabitEthernet1/0/2] ip binding vpn-instance vpna
    [*PE2-GigabitEthernet1/0/2] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet1/0/2] quit
    [*PE2] commit

  10. Configure a tunnel selection policy on each PE so that the specified SR-MPLS TE Policy is preferentially selected.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE1-tunnel-policy-p1] quit
    [*PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE2-tunnel-policy-p1] quit
    [*PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] commit
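
The tunnel policy above tells each PE to consider only SR-TE Policy tunnels and to select one tunnel for load balancing. A simplified model of how tunnel select-seq evaluation works (illustrative only):

```python
def select_tunnels(tunnels, select_seq, count):
    """Pick up to `count` up-state tunnels of the first type in select_seq
    that has any available tunnel (simplified tunnel select-seq model)."""
    for ttype in select_seq:
        matching = [t for t in tunnels if t["type"] == ttype and t["up"]]
        if matching:
            return matching[:count]
    return []

tunnels = [{"name": "policy100", "type": "sr-te-policy", "up": True}]
print([t["name"] for t in select_tunnels(tunnels, ["sr-te-policy"], 1)])
```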

  11. Establish an EBGP peer relationship between each CE-PE pair.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ip address 10.11.1.1 32
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/1
    [*CE1-GigabitEthernet1/0/1] ip address 10.1.1.1 24
    [*CE1-GigabitEthernet1/0/1] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.1.1.2 as-number 100
    [*CE1-bgp] network 10.11.1.1 32
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see "Configuration Files" in this section.

    After the configuration is complete, run the display ip vpn-instance verbose command on the PEs to check VPN instance configurations. Check that each PE can successfully ping its connected CE.

    If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address using the -a source-ip-address parameter when running the ping -vpn-instance vpn-instance-name -a source-ip-address dest-ip-address command to ping a CE connected to the remote PE. Otherwise, the ping may fail.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv4-family vpn-instance vpna
    [*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
    [*PE1-bgp-vpna] commit
    [~PE1-bgp-vpna] quit
    [~PE1-bgp] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see "Configuration Files" in this section.

    After the configuration is complete, run the display bgp vpnv4 vpn-instance vpn-instance-name peer command on the PEs to check whether BGP peer relationships have been established between the PEs and CEs. If the Established state is displayed in the command output, the BGP peer relationships have been established successfully.

    The following example output on PE1 shows that a BGP peer relationship has been established between PE1 and CE1.

    [~PE1] display bgp vpnv4 vpn-instance vpna peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
    
     VPN-Instance vpna, Router ID 1.1.1.9:
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      10.1.1.1        4       65410       91       90     0 01:15:39 Established        1

  12. Verify the configuration.

    After completing the configuration, run the display ip routing-table vpn-instance command on each PE to check information about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table: vpna
             Destinations : 7        Routes : 7
    Destination/Mask    Proto  Pre  Cost     Flags NextHop         Interface
         10.1.1.0/24    Direct 0    0        D     10.1.1.2        GigabitEthernet1/0/2
         10.1.1.2/32    Direct 0    0        D     127.0.0.1       GigabitEthernet1/0/2
       10.1.1.255/32    Direct 0    0        D     127.0.0.1       GigabitEthernet1/0/2
        10.11.1.1/32    EBGP   255  0        RD    10.1.1.1        GigabitEthernet1/0/2
        10.22.2.2/32    IBGP   255  0        RD    3.3.3.9         policy100
          127.0.0.0/8   Direct 0    0        D     127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct 0    0        D     127.0.0.1       InLoopBack0

    Run the display ip routing-table vpn-instance vpna verbose command on each PE to check details about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna 10.22.2.2 verbose
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : vpna
    Summary Count : 1
    
    Destination: 10.22.2.2/32         
         Protocol: IBGP               Process ID: 0              
       Preference: 255                      Cost: 0              
          NextHop: 3.3.3.9             Neighbour: 3.3.3.9
            State: Active Adv Relied         Age: 01h18m38s           
              Tag: 0                    Priority: low            
            Label: 48180                 QoSInfo: 0x0           
       IndirectID: 0x10000B9            Instance:                                 
     RelayNextHop: 0.0.0.0             Interface: policy100
         TunnelID: 0x000000003200000041 Flags: RD

    The command output shows that the VPN route has been successfully recursed to the specified SR-MPLS TE Policy.
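    The recursion rule itself can be summarized: a VPN route recurses to the SR-MPLS TE Policy whose color matches the route's Color Extended Community and whose endpoint matches the route's BGP next hop. The following is a minimal illustrative sketch of that lookup (the dictionary and function are hypothetical, not a device API); the data matches the policies in this example.

```python
# Policies as configured in this example: (color, endpoint) -> policy name.
# Illustrative only; this is not a device API.
policies = {
    (100, "3.3.3.9"): "policy100",
    (200, "1.1.1.9"): "policy200",
}

def recurse(route_color, next_hop):
    """Return the matching SR-MPLS TE Policy, or None to fall back."""
    return policies.get((route_color, next_hop))

# Route 10.22.2.2/32 carries color 0:100 and next hop 3.3.3.9,
# so it recurses to policy100.
print(recurse(100, "3.3.3.9"))
```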

    CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at 10.22.2.2.

    [~CE1] ping -a 10.11.1.1 10.22.2.2
      PING 10.22.2.2: 56  data bytes, press CTRL_C to break
        Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
        Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=34 ms
        Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=34 ms
      --- 10.22.2.2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 34/48/72 ms  

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 100:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    bfd
    #
    sbfd
     reflector discriminator 1.1.1.9
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
     sr-te-policy backup hot-standby enable
     sr-te-policy seamless-bfd enable
    #               
    isis 1          
     is-level level-1
     cost-style wide
     bgp-ls enable level-1
     network-entity 10.0000.0000.0001.00
     traffic-eng level-1 
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/1
     undo shutdown  
     ip address 10.13.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet1/0/2
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.1.1.2 255.255.255.0
    #               
    interface GigabitEthernet1/0/3
     undo shutdown  
     ip address 10.11.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet1/0/4
     undo shutdown  
     ip address 10.3.1.1 255.255.255.0
     isis enable 1
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 10
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     peer 10.3.1.2 as-number 100
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
      peer 10.3.1.2 enable
     #
     link-state-family unicast
      peer 10.3.1.2 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 3.3.3.9 enable
      peer 3.3.3.9 route-policy color100 import
     #              
     ipv4-family vpn-instance vpna
      peer 10.1.1.1 as-number 65410
     #
     ipv4-family sr-policy
      peer 10.3.1.2 enable
    #
    route-policy color100 permit node 1
     apply extcommunity color 0:100
    #               
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-1 
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/1
     undo shutdown  
     ip address 10.11.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet1/0/2
     undo shutdown  
     ip address 10.12.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 20
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 200:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    bfd
    #
    sbfd
     reflector discriminator 3.3.3.9
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    segment-routing 
     sr-te-policy backup hot-standby enable
     sr-te-policy seamless-bfd enable
    #               
    isis 1          
     is-level level-1
     cost-style wide
     bgp-ls enable level-1
     network-entity 10.0000.0000.0003.00
     traffic-eng level-1 
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/1
     undo shutdown  
     ip address 10.14.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet1/0/2
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.2.1.2 255.255.255.0
    #               
    interface GigabitEthernet1/0/3
     undo shutdown  
     ip address 10.12.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet1/0/4
     undo shutdown  
     ip address 10.4.1.1 255.255.255.0
     isis enable 1 
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 30
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     peer 10.4.1.2 as-number 100 
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
      peer 10.4.1.2 enable
     #
     link-state-family unicast
      peer 10.4.1.2 enable
     #              
     ipv4-family vpnv4
      policy vpn-target
      peer 1.1.1.9 enable
      peer 1.1.1.9 route-policy color200 import
     #              
     ipv4-family vpn-instance vpna
      peer 10.2.1.1 as-number 65420
     #
     ipv4-family sr-policy
      peer 10.4.1.2 enable
    #
    route-policy color200 permit node 1
     apply extcommunity color 0:200
    #               
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     traffic-eng level-1 
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/1
     undo shutdown  
     ip address 10.13.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet1/0/2
     undo shutdown  
     ip address 10.14.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 40
    #
    return
  • Controller configuration file

    #
    sysname Controller
    #
    interface GigabitEthernet1/0/1
     undo shutdown
     ip address 10.3.1.2 255.255.255.0
    #
    interface GigabitEthernet1/0/2
     undo shutdown
     ip address 10.4.1.2 255.255.255.0
    #
    bgp 100
     peer 10.3.1.1 as-number 100
     peer 10.4.1.1 as-number 100
     #
     ipv4-family unicast
      undo synchronization
      peer 10.3.1.1 enable
      peer 10.4.1.1 enable
     #
     link-state-family unicast
      peer 10.3.1.1 enable
      peer 10.4.1.1 enable
     #
     ipv4-family sr-policy
      peer 10.3.1.1 enable
      peer 10.4.1.1 enable
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/1
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.11.1.1 255.255.255.255
    #
    bgp 65410
     peer 10.1.1.2 as-number 100
     #
     ipv4-family unicast
      network 10.11.1.1 255.255.255.255
      peer 10.1.1.2 enable
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/1
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.22.2.2 255.255.255.255
    #
    bgp 65420
     peer 10.2.1.2 as-number 100
     #
     ipv4-family unicast
      network 10.22.2.2 255.255.255.255
      peer 10.2.1.2 enable
    #
    return

Example for Configuring L3VPNv6 Routes to Be Recursed to Manually Configured SR-MPLS TE Policies

This section provides an example for configuring L3VPNv6 routes to be recursed to manually configured SR-MPLS TE Policies to ensure secure communication between users of the same VPN.

Networking Requirements

On the network shown in Figure 1-2684:
  • CE1 and CE2 belong to a VPN instance named vpna.

  • The VPN target used by vpna is 111:1.

To ensure secure communication between CE1 and CE2, configure L3VPNv6 routes to be recursed to SR-MPLS TE Policies. Because multiple links exist between the PEs on the public network, the other links must be able to protect the primary link.

Figure 1-2684 L3VPNv6 route recursion to manually configured SR-MPLS TE Policies

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Precautions

If an interface connecting a PE to a CE is bound to a VPN instance, Layer 3 configurations, such as the IPv6 address and routing protocol configuration, on the interface will be deleted. Reconfigure them if needed.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the backbone network to ensure that PEs can interwork with each other.

  2. Enable MPLS and SR for each device on the backbone network, and configure static adjacency SIDs.

  3. Configure an SR-MPLS TE Policy with primary and backup paths on each PE.

  4. Configure SBFD and HSB on each PE to enhance SR-MPLS TE Policy reliability.

  5. Apply an import or export route-policy to a specified VPNv6 peer on each PE, and set the Color Extended Community. In this example, an import route-policy with the Color Extended Community is applied.

  6. Establish an MP-IBGP peer relationship between PEs for them to exchange routing information.

  7. Create a VPN instance and enable the IPv6 address family on each PE. Then, bind each PE's interface connecting the PE to a CE to the corresponding VPN instance.

  8. Configure a tunnel selection policy on each PE.

  9. Establish an EBGP peer relationship between each CE-PE pair for the CE and PE to exchange routing information.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of PEs and Ps

  • VPN target and RD of vpna

Procedure

  1. Configure interface IP addresses.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.14.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 10.12.1.2 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP for each device on the backbone network to implement interworking between PEs and Ps. The following example uses IS-IS.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] isis enable 1
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] isis enable 1
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions for each device on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Enable SR for each device on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-1
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] traffic-eng level-1
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-1
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] traffic-eng level-1
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] quit
    [*P2] commit

  5. Configure SR-MPLS TE Policies.

    # Configure PE1.

    [~PE1] segment-routing
    [~PE1-segment-routing] segment-list pe1
    [*PE1-segment-routing-segment-list-pe1] index 10 sid label 330000
    [*PE1-segment-routing-segment-list-pe1] index 20 sid label 330002
    [*PE1-segment-routing-segment-list-pe1] quit
    [*PE1-segment-routing] segment-list pe1backup
    [*PE1-segment-routing-segment-list-pe1backup] index 10 sid label 330001
    [*PE1-segment-routing-segment-list-pe1backup] index 20 sid label 330003
    [*PE1-segment-routing-segment-list-pe1backup] quit
    [*PE1-segment-routing] sr-te policy policy100 endpoint 3.3.3.9 color 100
    [*PE1-segment-routing-te-policy-policy100] binding-sid 115
    [*PE1-segment-routing-te-policy-policy100] mtu 1000
    [*PE1-segment-routing-te-policy-policy100] candidate-path preference 100
    [*PE1-segment-routing-te-policy-policy100-path] segment-list pe1backup
    [*PE1-segment-routing-te-policy-policy100-path] quit
    [*PE1-segment-routing-te-policy-policy100] candidate-path preference 200
    [*PE1-segment-routing-te-policy-policy100-path] segment-list pe1
    [*PE1-segment-routing-te-policy-policy100-path] quit
    [*PE1-segment-routing-te-policy-policy100] quit
    [*PE1-segment-routing] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [~PE2-segment-routing] segment-list pe2
    [*PE2-segment-routing-segment-list-pe2] index 10 sid label 330000
    [*PE2-segment-routing-segment-list-pe2] index 20 sid label 330003
    [*PE2-segment-routing-segment-list-pe2] quit
    [*PE2-segment-routing] segment-list pe2backup
    [*PE2-segment-routing-segment-list-pe2backup] index 10 sid label 330001
    [*PE2-segment-routing-segment-list-pe2backup] index 20 sid label 330002
    [*PE2-segment-routing-segment-list-pe2backup] quit
    [*PE2-segment-routing] sr-te policy policy200 endpoint 1.1.1.9 color 200
    [*PE2-segment-routing-te-policy-policy200] binding-sid 115
    [*PE2-segment-routing-te-policy-policy200] mtu 1000
    [*PE2-segment-routing-te-policy-policy200] candidate-path preference 100
    [*PE2-segment-routing-te-policy-policy200-path] segment-list pe2backup
    [*PE2-segment-routing-te-policy-policy200-path] quit
    [*PE2-segment-routing-te-policy-policy200] candidate-path preference 200
    [*PE2-segment-routing-te-policy-policy200-path] segment-list pe2
    [*PE2-segment-routing-te-policy-policy200-path] quit
    [*PE2-segment-routing-te-policy-policy200] quit
    [*PE2-segment-routing] quit
    [*PE2] commit
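    Each segment list configured above is encoded as an MPLS label stack, and each adjacency SID is locally significant: the node that processes the top label pops it and forwards the packet over the adjacency that the label identifies on that node. The following sketch walks the two label stacks on PE1 hop by hop (the table and function are illustrative, not a device API; the SID values are the static adjacency SIDs configured in step 4).

```python
# (node, adjacency SID) -> next-hop node, per the static SIDs in step 4.
# Illustrative only; adjacency SIDs are local labels on each node.
adj = {
    ("PE1", 330000): "P1",   # 10.11.1.1 -> 10.11.1.2
    ("PE1", 330001): "P2",   # 10.13.1.1 -> 10.13.1.2
    ("P1",  330002): "PE2",  # 10.12.1.1 -> 10.12.1.2
    ("P2",  330003): "PE2",  # 10.14.1.1 -> 10.14.1.2
}

def forward(ingress, label_stack):
    """Walk the label stack from the ingress node; return the nodes visited."""
    node, path = ingress, [ingress]
    for label in label_stack:   # each hop pops its own top label
        node = adj[(node, label)]
        path.append(node)
    return path

print(forward("PE1", [330000, 330002]))  # segment-list pe1:       PE1 -> P1 -> PE2
print(forward("PE1", [330001, 330003]))  # segment-list pe1backup: PE1 -> P2 -> PE2
```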

    After the configuration is complete, run the display sr-te policy command to check SR-MPLS TE Policy information. The following example uses the command output on PE1.

    [~PE1] display sr-te policy
    PolicyName : policy100
    Endpoint             : 3.3.3.9              Color                : 100
    TunnelId             : 1                    TunnelType           : SR-TE Policy
    Binding SID          : 115                  MTU                  : 1000
    Policy State         : Up                   State Change Time    : 2019-12-20 09:17:45
    Admin State          : Up                   Traffic Statistics   : Disable
    BFD                  : Disable              Backup Hot-Standby   : Disable
    DiffServ-Mode        : -
    Active IGP Metric    : -
    Candidate-path Count : 2                    
    
    Candidate-path Preference: 200
    Path State           : Active               Path Type            : Primary
    Protocol-Origin      : Configuration(30)    Originator           : 0, 0.0.0.0
    Discriminator        : 200                  Binding SID          : -
    GroupId              : 2                    Policy Name          : policy100
    Template ID          : -
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1
      Segment-List ID    : 129                  XcIndex              : 68
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330000, 330002
    
    Candidate-path Preference: 100
    Path State           : Inactive (Valid)     Path Type            : -
    Protocol-Origin      : Configuration(30)    Originator           : 0, 0.0.0.0
    Discriminator        : 100                  Binding SID          : -
    GroupId              : 1                    Policy Name          : policy100
    Template ID          : -
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1backup
      Segment-List ID    : 194                  XcIndex              : -
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330001, 330003
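    As the display output above suggests, among the valid candidate paths of a policy, the path with the highest preference is selected as the primary (active) path; with hot-standby enabled, the next-best valid path stays programmed as the backup. A minimal sketch of this selection rule, using the data from the output above (the structures are hypothetical, not a device API):

```python
# Candidate paths of policy100 as shown in the display output above.
# Illustrative data structures only.
candidate_paths = [
    {"preference": 200, "segment_list": "pe1",       "valid": True},
    {"preference": 100, "segment_list": "pe1backup", "valid": True},
]

def select(paths):
    """Pick primary and hot-standby backup by descending preference."""
    valid = sorted((p for p in paths if p["valid"]),
                   key=lambda p: p["preference"], reverse=True)
    primary = valid[0] if valid else None
    backup = valid[1] if len(valid) > 1 else None
    return primary, backup

primary, backup = select(candidate_paths)
print(primary["segment_list"])  # pe1 (preference 200) is active
print(backup["segment_list"])   # pe1backup is the hot-standby path
```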

  6. Configure SBFD and HSB.

    # Configure PE1.

    [~PE1] bfd
    [*PE1-bfd] quit
    [*PE1] sbfd
    [*PE1-sbfd] reflector discriminator 1.1.1.9
    [*PE1-sbfd] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] sr-te-policy seamless-bfd enable
    [*PE1-segment-routing] sr-te-policy backup hot-standby enable
    [*PE1-segment-routing] commit
    [~PE1-segment-routing] quit

    # Configure PE2.

    [~PE2] bfd
    [*PE2-bfd] quit
    [*PE2] sbfd
    [*PE2-sbfd] reflector discriminator 3.3.3.9
    [*PE2-sbfd] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] sr-te-policy seamless-bfd enable
    [*PE2-segment-routing] sr-te-policy backup hot-standby enable
    [*PE2-segment-routing] commit
    [~PE2-segment-routing] quit

  7. Configure a route-policy.

    # Configure PE1.

    [~PE1] route-policy color100 permit node 1
    [*PE1-route-policy] apply extcommunity color 0:100
    [*PE1-route-policy] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] route-policy color200 permit node 1
    [*PE2-route-policy] apply extcommunity color 0:200
    [*PE2-route-policy] quit
    [*PE2] commit

  8. Establish an MP-IBGP peer relationship between the PEs, apply import route-policies to the VPNv6 peers, and set the Color Extended Community for routes.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] ipv6-family vpnv6
    [*PE1-bgp-af-vpnv6] peer 3.3.3.9 enable
    [*PE1-bgp-af-vpnv6] peer 3.3.3.9 route-policy color100 import
    [*PE1-bgp-af-vpnv6] commit
    [~PE1-bgp-af-vpnv6] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv6-family vpnv6
    [*PE2-bgp-af-vpnv6] peer 1.1.1.9 enable
    [*PE2-bgp-af-vpnv6] peer 1.1.1.9 route-policy color200 import
    [*PE2-bgp-af-vpnv6] commit
    [~PE2-bgp-af-vpnv6] quit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp peer or display bgp vpnv6 all peer command on each PE to check whether a BGP peer relationship has been established between the PEs. If the Established state is displayed in the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        2        6     0     00:00:12   Established   0
    [~PE1] display bgp vpnv6 all peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100   12      18         0     00:09:38   Established   0

  9. Create a VPN instance and enable the IPv6 address family on each PE. Then, bind each PE's interface connecting the PE to a CE to the corresponding VPN instance.

    # Configure PE1.

    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv6-family
    [*PE1-vpn-instance-vpna-af-ipv6] route-distinguisher 100:1
    [*PE1-vpn-instance-vpna-af-ipv6] vpn-target 111:1 both
    [*PE1-vpn-instance-vpna-af-ipv6] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE1-GigabitEthernet2/0/0] ipv6 enable
    [*PE1-GigabitEthernet2/0/0] ipv6 address 2001:db8::1:2 96
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv6-family
    [*PE2-vpn-instance-vpna-af-ipv6] route-distinguisher 200:1
    [*PE2-vpn-instance-vpna-af-ipv6] vpn-target 111:1 both
    [*PE2-vpn-instance-vpna-af-ipv6] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE2-GigabitEthernet2/0/0] ipv6 enable
    [*PE2-GigabitEthernet2/0/0] ipv6 address 2001:db8::2:2 96
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit
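    The vpn-target configuration above is what lets the two VPN instances exchange routes: a route exported by PE2's vpna carries RT 111:1, and PE1's vpna imports it because its import RT list also contains 111:1. A minimal sketch of RT-based import matching (names are illustrative, not a device API):

```python
# Route targets (RTs) as configured above; "both" sets export and import.
# Illustrative only.
export_rts = {"vpna@PE2": {"111:1"}}
import_rts = {"vpna@PE1": {"111:1"}}

def imports(route_rts, importer):
    """A route is imported if any of its RTs matches an import RT."""
    return bool(route_rts & import_rts[importer])

print(imports(export_rts["vpna@PE2"], "vpna@PE1"))  # True
```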

  10. Configure a tunnel policy on each PE to preferentially select an SR-MPLS TE Policy.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE1-tunnel-policy-p1] quit
    [*PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv6-family
    [*PE1-vpn-instance-vpna-af-ipv6] tnl-policy p1
    [*PE1-vpn-instance-vpna-af-ipv6] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE2-tunnel-policy-p1] quit
    [*PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv6-family
    [*PE2-vpn-instance-vpna-af-ipv6] tnl-policy p1
    [*PE2-vpn-instance-vpna-af-ipv6] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] commit

  11. Establish an EBGP peer relationship between each CE-PE pair.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ipv6 enable
    [*CE1-LoopBack1] ipv6 address 2001:db8::11:1 128
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ipv6 enable
    [*CE1-GigabitEthernet1/0/0] ipv6 address 2001:db8::1:1 96
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] router-id 10.10.10.10
    [*CE1-bgp] peer 2001:db8::1:2 as-number 100
    [*CE1-bgp] ipv6-family unicast
    [*CE1-bgp-af-ipv6] network 2001:db8::11:1 128
    [*CE1-bgp-af-ipv6] peer 2001:db8::1:2 enable
    [*CE1-bgp-af-ipv6] quit
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see "Configuration Files" in this section.

    After completing the configuration, run the display ip vpn-instance verbose command on each PE to check VPN instance configurations. Then check that each PE can ping its connected CE.

    If a PE has multiple interfaces bound to the same VPN instance, use the -a source-ipv6-address parameter to specify a source IPv6 address when running the ping ipv6 vpn-instance vpn-instance-name -a source-ipv6-address dest-ipv6-address command to ping the CE that is connected to the remote PE. If the source IPv6 address is not specified, the ping operation may fail.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv6-family vpn-instance vpna
    [*PE1-bgp-6-vpna] peer 2001:db8::1:1 as-number 65410
    [*PE1-bgp-6-vpna] commit
    [~PE1-bgp-6-vpna] quit
    [~PE1-bgp] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see "Configuration Files" in this section.

    After completing the configuration, run the display bgp vpnv6 vpn-instance peer command on each PE. The following example uses the peer relationship between PE1 and CE1. The command output shows that a BGP peer relationship has been established between the PE and CE and is in the Established state.

    [~PE1] display bgp vpnv6 vpn-instance vpna peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      VPN-Instance vpna, Router ID 1.1.1.9:
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      2001:DB8::1:1   4       65410        7        8     0 00:02:38 Established        1

  12. Verify the configuration.

    After completing the configuration, run the display ipv6 routing-table vpn-instance command on each PE to check information about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ipv6 routing-table vpn-instance vpna
    Routing Table : vpna
             Destinations : 4        Routes : 4         
    
    Destination  : 2001:DB8::                              PrefixLength : 96
    NextHop      : 2001:DB8::1:2                           Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : GigabitEthernet2/0/0                    Flags        : D
    
    Destination  : 2001:DB8::1:2                           PrefixLength : 128
    NextHop      : ::1                                     Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : GigabitEthernet2/0/0                    Flags        : D
    
    Destination  : 2001:DB8::22:2                          PrefixLength : 128
    NextHop      : ::FFFF:3.3.3.9                          Preference   : 255
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : ::FFFF:10.13.1.2                        TunnelID     : 0x000000002900000006
    Interface    : policy100                               Flags        : RD
    
    Destination  : 2001:DB8::22:2                          PrefixLength : 128
    NextHop      : ::FFFF:3.3.3.9                          Preference   : 255
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : ::FFFF:10.11.1.2                        TunnelID     : 0x000000002900000006
    Interface    : policy100                               Flags        : RD
                    
    Destination  : FE80::                                  PrefixLength : 10
    NextHop      : ::                                      Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : NULL0                                   Flags        : DB

    Run the display ipv6 routing-table vpn-instance vpna verbose command on each PE to check details about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ipv6 routing-table vpn-instance vpna 2001:db8::22:2 verbose
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : vpna
    Summary Count : 1
    
    Destination  : 2001:DB8::22:2                          PrefixLength : 128
    NextHop      : ::FFFF:3.3.3.9                          Preference   : 255
    Neighbour    : ::3.3.3.9                               ProcessID    : 0
    Label        : 48180                                   Protocol     : IBGP
    State        : Active Adv Relied                       Cost         : 0
    Entry ID     : 0                                       EntryFlags   : 0x00000000
    Reference Cnt: 0                                       Tag          : 0
    Priority     : low                                     Age          : 722sec
    IndirectID   : 0x10000B8                               Instance     : 
    RelayNextHop : ::FFFF:10.13.1.2                        TunnelID     : 0x000000002900000006
    Interface    : policy100                               Flags        : RD
    RelayNextHop : ::FFFF:10.11.1.2                        TunnelID     : 0x000000002900000006
    Interface    : policy100                               Flags        : RD

    The command output shows that the VPN route has successfully recursed to the specified SR-MPLS TE Policy.

    CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at 2001:db8::22:2.

    [~CE1] ping ipv6 -a 2001:db8::11:1 2001:db8::22:2
      PING 2001:DB8::22:2 : 56  data bytes, press CTRL_C to break
        Reply from 2001:DB8::22:2
        bytes=56 Sequence=1 hop limit=62  time = 170 ms
        Reply from 2001:DB8::22:2
        bytes=56 Sequence=2 hop limit=62  time = 140 ms
        Reply from 2001:DB8::22:2
        bytes=56 Sequence=3 hop limit=62  time = 150 ms
        Reply from 2001:DB8::22:2
        bytes=56 Sequence=4 hop limit=62  time = 140 ms
        Reply from 2001:DB8::22:2
        bytes=56 Sequence=5 hop limit=62  time = 170 ms
    
      --- 2001:DB8::22:2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 140/154/170 ms

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    ip vpn-instance vpna
     ipv6-family
      route-distinguisher 100:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    bfd
    #
    sbfd
     reflector discriminator 1.1.1.9
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
     ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
     sr-te-policy backup hot-standby enable
     sr-te-policy seamless-bfd enable
     segment-list pe1
      index 10 sid label 330000
      index 20 sid label 330002
     segment-list pe1backup
      index 10 sid label 330001
      index 20 sid label 330003
     sr-te policy policy100 endpoint 3.3.3.9 color 100
      binding-sid 115
      mtu 1000  
      candidate-path preference 200
       segment-list pe1
      candidate-path preference 100
       segment-list pe1backup
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ipv6 enable
     ipv6 address 2001:DB8::1:2/96
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.11.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #              
     ipv6-family vpnv6
      policy vpn-target
      peer 3.3.3.9 enable
      peer 3.3.3.9 route-policy color100 import
     #              
     ipv6-family vpn-instance vpna
      peer 2001:DB8::1:1 as-number 65410
    #
    route-policy color100 permit node 1
     apply extcommunity color 0:100
    #               
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
     ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.11.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.12.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    ip vpn-instance vpna
     ipv6-family
      route-distinguisher 200:1
      tnl-policy p1
      vpn-target 111:1 export-extcommunity
      vpn-target 111:1 import-extcommunity
    #
    bfd
    #
    sbfd
     reflector discriminator 3.3.3.9
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
     ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
     sr-te-policy backup hot-standby enable
     sr-te-policy seamless-bfd enable
     segment-list pe2
      index 10 sid label 330000
      index 20 sid label 330003
     segment-list pe2backup
      index 10 sid label 330001
      index 20 sid label 330002
     sr-te policy policy200 endpoint 1.1.1.9 color 200
      binding-sid 115
      mtu 1000     
      candidate-path preference 200
       segment-list pe2
      candidate-path preference 100
       segment-list pe2backup
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.14.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ipv6 enable
     ipv6 address 2001:DB8::2:2/96
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.12.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
     #              
     ipv6-family vpnv6
      policy vpn-target
      peer 1.1.1.9 enable
      peer 1.1.1.9 route-policy color200 import
     #              
     ipv6-family vpn-instance vpna
      peer 2001:DB8::2:1 as-number 65420
    #
    route-policy color200 permit node 1
     apply extcommunity color 0:200
    #               
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
     ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.14.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ipv6 enable
     ipv6 address 2001:DB8::1:1/96
    #
    interface LoopBack1
     ipv6 enable
     ipv6 address 2001:DB8::11:1/128
    #
    bgp 65410
     router-id 10.10.10.10
     peer 2001:DB8::1:2 as-number 100
     #
     ipv6-family unicast
      undo synchronization
      network 2001:DB8::11:1 128
      peer 2001:DB8::1:2 enable
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ipv6 enable
     ipv6 address 2001:DB8::2:1/96
    #
    interface LoopBack1
     ipv6 enable
     ipv6 address 2001:DB8::22:2/128
    #
    bgp 65420
     router-id 10.20.20.20
     peer 2001:DB8::2:2 as-number 100
     #
     ipv6-family unicast
      undo synchronization
      network 2001:DB8::22:2 128
      peer 2001:DB8::2:2 enable
    #
    return

Example for Configuring Non-labeled Public BGP Routes to Be Recursed to Manually Configured SR-MPLS TE Policies

This section provides an example for configuring non-labeled public BGP routes to be recursed to manually configured SR-MPLS TE Policies to forward public BGP traffic through the SR-MPLS TE Policies.

Networking Requirements

If an Internet user uses a carrier network that performs IP forwarding to access the Internet, core carrier devices on the forwarding path need to learn many Internet routes. This imposes heavy loads on core carrier devices and affects the performance of these devices. To solve this problem, configure the corresponding access device to recurse non-labeled public BGP routes to SR-MPLS TE Policies, so that packets can be forwarded over the SR-MPLS TE Policies.

Figure 1-2685 shows an example for configuring non-labeled public BGP routes to be recursed to manually configured SR-MPLS TE Policies.

Figure 1-2685 Recursion of non-labeled public BGP routes to manually configured SR-MPLS TE Policies

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Precautions

When creating a peer, if the IP address of the peer is a loopback interface address or a sub-interface address, you need to run the peer connect-interface command on both ends of the peer relationship to ensure that the two ends are correctly connected.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the backbone network for the PEs to communicate.

  2. Enable MPLS and SR for each device on the backbone network, and configure static adjacency SIDs.

  3. Configure an SR-MPLS TE Policy with primary and backup paths on each PE.

  4. Configure SBFD and HSB on each PE to enhance SR-MPLS TE Policy reliability.

  5. Apply an import or export route-policy to a specified BGP peer on each PE, and set the Color Extended Community. In this example, an import route-policy with the Color Extended Community is applied.

  6. Establish an IBGP peer relationship between the PEs for them to exchange routing information.

  7. Configure a tunnel selection policy on each PE.

  8. Enable the function to recurse non-labeled public BGP routes to SR-MPLS TE Policies on each PE.

  9. Configure each PE to advertise a local network route so that traffic destined for this route is forwarded over the SR-MPLS TE Policies.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of PEs and Ps

  • Endpoint, color, and adjacency SIDs of each SR-MPLS TE Policy

Procedure

  1. Configure interface IP addresses for each device on the backbone network.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.14.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 10.12.1.2 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP for each device on the backbone network to implement interworking between PEs and Ps. IS-IS is used as an example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] isis enable 1
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] isis enable 1
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions for each device on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Enable SR for each device on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-1
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] traffic-eng level-1
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-1
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] traffic-eng level-1
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] quit
    [*P2] commit
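
    Before the SR-MPLS TE Policies are configured, it helps to see how the static adjacency SIDs above steer traffic. Each adjacency SID is local to the node that advertises it and maps to a single outgoing link; a forwarding node pops the top label and sends the packet over the corresponding adjacency. The following Python sketch (illustrative only, not device code) models this behavior using the SIDs from this example:

    ```python
    # Adjacency SIDs as configured in this example. Each SID is local to the
    # advertising node and identifies one outgoing link to a neighbor.
    adj = {
        ("PE1", 330000): "P1",   # PE1 -> P1 over 10.11.1.0/24
        ("P1", 330002): "PE2",   # P1 -> PE2 over 10.12.1.0/24
        ("PE1", 330001): "P2",   # PE1 -> P2 over 10.13.1.0/24
        ("P2", 330003): "PE2",   # P2 -> PE2 over 10.14.1.0/24
    }

    def forward(node, label_stack):
        """Pop the top adjacency SID at each hop and follow the mapped link."""
        path = [node]
        while label_stack:
            top, *label_stack = label_stack
            node = adj[(node, top)]
            path.append(node)
        return path

    # Segment list pe1 (labels 330000, 330002) steers traffic PE1 -> P1 -> PE2.
    assert forward("PE1", [330000, 330002]) == ["PE1", "P1", "PE2"]
    # Segment list pe1backup (330001, 330003) takes the PE1 -> P2 -> PE2 path.
    assert forward("PE1", [330001, 330003]) == ["PE1", "P2", "PE2"]
    ```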

  5. Configure an SR-MPLS TE Policy on each PE.

    # Configure PE1.

    [~PE1] segment-routing
    [~PE1-segment-routing] segment-list pe1
    [*PE1-segment-routing-segment-list-pe1] index 10 sid label 330000
    [*PE1-segment-routing-segment-list-pe1] index 20 sid label 330002
    [*PE1-segment-routing-segment-list-pe1] quit
    [*PE1-segment-routing] segment-list pe1backup
    [*PE1-segment-routing-segment-list-pe1backup] index 10 sid label 330001
    [*PE1-segment-routing-segment-list-pe1backup] index 20 sid label 330003
    [*PE1-segment-routing-segment-list-pe1backup] quit
    [*PE1-segment-routing] sr-te policy policy100 endpoint 3.3.3.9 color 100
    [*PE1-segment-routing-te-policy-policy100] binding-sid 115
    [*PE1-segment-routing-te-policy-policy100] mtu 1000
    [*PE1-segment-routing-te-policy-policy100] candidate-path preference 100
    [*PE1-segment-routing-te-policy-policy100-path] segment-list pe1backup
    [*PE1-segment-routing-te-policy-policy100-path] quit
    [*PE1-segment-routing-te-policy-policy100] candidate-path preference 200
    [*PE1-segment-routing-te-policy-policy100-path] segment-list pe1
    [*PE1-segment-routing-te-policy-policy100-path] quit
    [*PE1-segment-routing-te-policy-policy100] quit
    [*PE1-segment-routing] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [~PE2-segment-routing] segment-list pe2
    [*PE2-segment-routing-segment-list-pe2] index 10 sid label 330000
    [*PE2-segment-routing-segment-list-pe2] index 20 sid label 330003
    [*PE2-segment-routing-segment-list-pe2] quit
    [*PE2-segment-routing] segment-list pe2backup
    [*PE2-segment-routing-segment-list-pe2backup] index 10 sid label 330001
    [*PE2-segment-routing-segment-list-pe2backup] index 20 sid label 330002
    [*PE2-segment-routing-segment-list-pe2backup] quit
    [*PE2-segment-routing] sr-te policy policy200 endpoint 1.1.1.9 color 200
    [*PE2-segment-routing-te-policy-policy200] binding-sid 115
    [*PE2-segment-routing-te-policy-policy200] mtu 1000
    [*PE2-segment-routing-te-policy-policy200] candidate-path preference 100
    [*PE2-segment-routing-te-policy-policy200-path] segment-list pe2backup
    [*PE2-segment-routing-te-policy-policy200-path] quit
    [*PE2-segment-routing-te-policy-policy200] candidate-path preference 200
    [*PE2-segment-routing-te-policy-policy200-path] segment-list pe2
    [*PE2-segment-routing-te-policy-policy200-path] quit
    [*PE2-segment-routing-te-policy-policy200] quit
    [*PE2-segment-routing] quit
    [*PE2] commit

    After the configuration is complete, run the display sr-te policy command to check SR-MPLS TE Policy information. The following example uses the command output on PE1.

    [~PE1] display sr-te policy
    PolicyName : policy100
    Endpoint             : 3.3.3.9              Color                : 100
    TunnelId             : 1                    TunnelType           : SR-TE Policy
    Binding SID          : 115                  MTU                  : 1000
    Policy State         : Up                   State Change Time    : 2019-10-19 15:27:36
    Admin State          : Up                   Traffic Statistics   : Disable
    BFD                  : Disable              Backup Hot-Standby   : Disable
    DiffServ-Mode        : -
    Active IGP Metric    : -
    Candidate-path Count : 2                    
    
    Candidate-path Preference: 200
    Path State           : Active               Path Type            : Primary
    Protocol-Origin      : Configuration(30)    Originator           : 0, 0.0.0.0
    Discriminator        : 200                  Binding SID          : -
    GroupId              : 2                    Policy Name          : policy100
    Template ID          : -
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1
      Segment-List ID    : 129                  XcIndex              : 68
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330000, 330002
    
    Candidate-path Preference: 100
    Path State           : Inactive (Valid)     Path Type            : -
    Protocol-Origin      : Configuration(30)    Originator           : 0, 0.0.0.0
    Discriminator        : 100                  Binding SID          : -
    GroupId              : 1                    Policy Name          : policy100
    Template ID          : -
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1backup
      Segment-List ID    : 194                  XcIndex              : -
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330001, 330003

  6. Configure SBFD and HSB.

    # Configure PE1.

    [~PE1] bfd
    [*PE1-bfd] quit
    [*PE1] sbfd
    [*PE1-sbfd] reflector discriminator 1.1.1.9
    [*PE1-sbfd] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] sr-te-policy seamless-bfd enable
    [*PE1-segment-routing] sr-te-policy backup hot-standby enable
    [*PE1-segment-routing] commit
    [~PE1-segment-routing] quit

    # Configure PE2.

    [~PE2] bfd
    [*PE2-bfd] quit
    [*PE2] sbfd
    [*PE2-sbfd] reflector discriminator 3.3.3.9
    [*PE2-sbfd] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] sr-te-policy seamless-bfd enable
    [*PE2-segment-routing] sr-te-policy backup hot-standby enable
    [*PE2-segment-routing] commit
    [~PE2-segment-routing] quit

  7. Configure a route-policy.

    # Configure PE1.

    [~PE1] route-policy color100 permit node 1
    [*PE1-route-policy] apply extcommunity color 0:100
    [*PE1-route-policy] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] route-policy color200 permit node 1
    [*PE2-route-policy] apply extcommunity color 0:200
    [*PE2-route-policy] quit
    [*PE2] commit

  8. Establish a BGP peer relationship between the PEs, apply an import route-policy to a specified BGP peer, and set the color extended community for routes.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] peer 3.3.3.9 route-policy color100 import
    [*PE1-bgp] commit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] peer 1.1.1.9 route-policy color200 import
    [*PE2-bgp] commit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp peer command on the PEs to check the BGP peer relationship status. If the State field displays Established, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1          Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent     OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100        2        6     0     00:00:12   Established   0

  9. Configure a tunnel selection policy on each PE so that the specified SR-MPLS TE Policy is preferentially selected.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE1-tunnel-policy-p1] quit
    [*PE1] route recursive-lookup tunnel tunnel-policy p1
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE2-tunnel-policy-p1] quit
    [*PE2] route recursive-lookup tunnel tunnel-policy p1
    [*PE2] commit
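
    The recursion mechanism configured above can be summarized as follows: a BGP route recurses to an SR-MPLS TE Policy when the route's color extended community matches the policy's color and the route's next hop matches the policy's endpoint. The following Python sketch (illustrative only, not device code) models this lookup with the values used in this example:

    ```python
    def recurse_to_policy(route, policies):
        """Return the policy matching the route's (color, next hop), or None."""
        return policies.get((route["color"], route["next_hop"]))

    # Policies keyed by (color, endpoint), as configured on PE1.
    policies = {
        (100, "3.3.3.9"): "policy100",
    }

    # BGP route from PE2 after the import route-policy applies color 0:100.
    route = {"prefix": "9.9.9.9/32", "next_hop": "3.3.3.9", "color": 100}
    assert recurse_to_policy(route, policies) == "policy100"
    ```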

  10. Configure each PE to import a local route.

    In this example, loopback addresses 8.8.8.8/32 and 9.9.9.9/32 are used to simulate the local network on PE1 and PE2, respectively.

    # Configure PE1.

    [~PE1] interface loopback 2
    [*PE1-LoopBack2] ip address 8.8.8.8 32
    [*PE1-LoopBack2] quit
    [*PE1] bgp 100
    [*PE1-bgp] network 8.8.8.8 32
    [*PE1-bgp] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] interface loopback 2
    [*PE2-LoopBack2] ip address 9.9.9.9 32
    [*PE2-LoopBack2] quit
    [*PE2] bgp 100
    [*PE2-bgp] network 9.9.9.9 32
    [*PE2-bgp] quit
    [*PE2] commit

  11. Verify the configuration.

    After completing the configuration, run the display ip routing-table verbose command on each PE to check detailed routing information. The following example uses the command output on PE1. The command output shows that the route advertised by the peer PE has been successfully recursed to the specified SR-MPLS TE Policy.

    [~PE1] display ip routing-table 9.9.9.9 32 verbose
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : _public_
    Summary Count : 1
    
    Destination: 9.9.9.9/32          
         Protocol: IBGP               Process ID: 0              
       Preference: 255                      Cost: 0              
          NextHop: 3.3.3.9             Neighbour: 3.3.3.9
            State: Active Adv Relied         Age: 00h19m06s           
              Tag: 0                    Priority: low            
            Label: NULL                  QoSInfo: 0x0           
       IndirectID: 0x10000C7            Instance:                                 
     RelayNextHop: 0.0.0.0             Interface: policy100
        TunnelID: 0x000000003200000001 Flags: RD
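
    With routes recursed in both directions, connectivity between the simulated local networks can also be checked with a ping sourced from the local loopback address. For example, on PE1 (illustrative command only; run it on a live device to see actual output):

    [~PE1] ping -a 8.8.8.8 9.9.9.9

    If the SR-MPLS TE Policies are up and route recursion is correct on both PEs, the ping should succeed.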

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    bfd
    #
    sbfd
     reflector discriminator 1.1.1.9
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
     ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
     sr-te-policy backup hot-standby enable
     sr-te-policy seamless-bfd enable
     segment-list pe1
      index 10 sid label 330000
      index 20 sid label 330002
     segment-list pe1backup
      index 10 sid label 330001
      index 20 sid label 330003
     sr-te policy policy100 endpoint 3.3.3.9 color 100
      binding-sid 115
      mtu 1000   
      candidate-path preference 200
       segment-list pe1
      candidate-path preference 100
       segment-list pe1backup
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.11.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
    #               
    interface LoopBack2
     ip address 8.8.8.8 255.255.255.255
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      network 8.8.8.8 255.255.255.255
      peer 3.3.3.9 enable
      peer 3.3.3.9 route-policy color100 import
    #
    route-policy color100 permit node 1
     apply extcommunity color 0:100
    #               
    route recursive-lookup tunnel tunnel-policy p1
    #               
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
     ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.11.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.12.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    bfd
    #
    sbfd
     reflector discriminator 3.3.3.9
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
     ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
     sr-te-policy backup hot-standby enable
     sr-te-policy seamless-bfd enable
     segment-list pe2
      index 10 sid label 330000
      index 20 sid label 330003
     segment-list pe2backup
      index 10 sid label 330001
      index 20 sid label 330002
     sr-te policy policy200 endpoint 1.1.1.9 color 200
      binding-sid 115
      mtu 1000  
      candidate-path preference 200
       segment-list pe2
      candidate-path preference 100
       segment-list pe2backup
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.14.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.12.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
    #               
    interface LoopBack2
     ip address 9.9.9.9 255.255.255.255
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      network 9.9.9.9 255.255.255.255
      peer 1.1.1.9 enable
      peer 1.1.1.9 route-policy color200 import
    #
    route-policy color200 permit node 1
     apply extcommunity color 0:200
    #               
    route recursive-lookup tunnel tunnel-policy p1
    #               
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
     ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.14.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
    #
    return

Example for Configuring BGP4+ 6PE Routes to Be Recursed to Manually Configured SR-MPLS TE Policies

This section provides an example for configuring BGP4+ 6PE routes to be recursed to manually configured SR-MPLS TE Policies to connect discontinuous IPv6 networks through an SR network.

Networking Requirements

If an SR network is deployed between two discontinuous IPv6 networks, you can recurse BGP4+ 6PE routes to SR-MPLS TE Policies for the IPv6 networks to communicate with each other.

On the network shown in Figure 1-2686, there is no direct link between CE1 and CE2 on the IPv6 network. CE1 and CE2 need to communicate through the SR network. To meet this requirement, a 6PE peer relationship can be established between PE1 and PE2. The 6PE peers exchange IPv6 routes learned from their attached CEs using MP-BGP, and forward IPv6 data over SR-MPLS TE Policies.

Figure 1-2686 BGP4+ 6PE route recursion to manually configured SR-MPLS TE Policies

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Precautions

In the BGP4+ 6PE scenario, no VPN instance needs to be configured.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the backbone network to ensure that PEs can interwork with each other.

  2. Enable MPLS and SR for each device on the backbone network, and configure static adjacency SIDs.

  3. Configure an SR-MPLS TE Policy with primary and backup paths on each PE.

  4. Configure SBFD and HSB on each PE to enhance SR-MPLS TE Policy reliability.

  5. Apply an import or export route-policy to a specified peer on each PE, and set the Color Extended Community. In this example, an import route-policy with the Color Extended Community is applied.

  6. Establish an MP-IBGP peer relationship between PEs and enable them to advertise labeled routes.

  7. Configure a tunnel selection policy on each PE.

  8. Establish an EBGP peer relationship between each CE-PE pair for the CE and PE to exchange routing information.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of PEs and Ps

Procedure

  1. Configure interface IP addresses for each device on the backbone network.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.14.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 10.12.1.2 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP for each device on the backbone network to implement interworking between PEs and Ps. In this example, the IGP is IS-IS.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] isis enable 1
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] isis enable 1
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions for each device on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Enable SR for each device on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-1
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] traffic-eng level-1
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-1
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] traffic-eng level-1
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] quit
    [*P2] commit

  5. Configure an SR-MPLS TE Policy on each PE.

    # Configure PE1.

    [~PE1] segment-routing
    [~PE1-segment-routing] segment-list pe1
    [*PE1-segment-routing-segment-list-pe1] index 10 sid label 330000
    [*PE1-segment-routing-segment-list-pe1] index 20 sid label 330002
    [*PE1-segment-routing-segment-list-pe1] quit
    [*PE1-segment-routing] segment-list pe1backup
    [*PE1-segment-routing-segment-list-pe1backup] index 10 sid label 330001
    [*PE1-segment-routing-segment-list-pe1backup] index 20 sid label 330003
    [*PE1-segment-routing-segment-list-pe1backup] quit
    [*PE1-segment-routing] sr-te policy policy100 endpoint 3.3.3.9 color 100
    [*PE1-segment-routing-te-policy-policy100] binding-sid 115
    [*PE1-segment-routing-te-policy-policy100] mtu 1000
    [*PE1-segment-routing-te-policy-policy100] candidate-path preference 100
    [*PE1-segment-routing-te-policy-policy100-path] segment-list pe1backup
    [*PE1-segment-routing-te-policy-policy100-path] quit
    [*PE1-segment-routing-te-policy-policy100] candidate-path preference 200
    [*PE1-segment-routing-te-policy-policy100-path] segment-list pe1
    [*PE1-segment-routing-te-policy-policy100-path] quit
    [*PE1-segment-routing-te-policy-policy100] quit
    [*PE1-segment-routing] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [~PE2-segment-routing] segment-list pe2
    [*PE2-segment-routing-segment-list-pe2] index 10 sid label 330000
    [*PE2-segment-routing-segment-list-pe2] index 20 sid label 330003
    [*PE2-segment-routing-segment-list-pe2] quit
    [*PE2-segment-routing] segment-list pe2backup
    [*PE2-segment-routing-segment-list-pe2backup] index 10 sid label 330001
    [*PE2-segment-routing-segment-list-pe2backup] index 20 sid label 330002
    [*PE2-segment-routing-segment-list-pe2backup] quit
    [*PE2-segment-routing] sr-te policy policy200 endpoint 1.1.1.9 color 200
    [*PE2-segment-routing-te-policy-policy200] binding-sid 115
    [*PE2-segment-routing-te-policy-policy200] mtu 1000
    [*PE2-segment-routing-te-policy-policy200] candidate-path preference 100
    [*PE2-segment-routing-te-policy-policy200-path] segment-list pe2backup
    [*PE2-segment-routing-te-policy-policy200-path] quit
    [*PE2-segment-routing-te-policy-policy200] candidate-path preference 200
    [*PE2-segment-routing-te-policy-policy200-path] segment-list pe2
    [*PE2-segment-routing-te-policy-policy200-path] quit
    [*PE2-segment-routing-te-policy-policy200] quit
    [*PE2-segment-routing] quit
    [*PE2] commit

    After the configuration is complete, run the display sr-te policy command to check SR-MPLS TE Policy information. The following example uses the command output on PE1.

    [~PE1] display sr-te policy
    PolicyName : policy100
    Endpoint             : 3.3.3.9              Color                : 100
    TunnelId             : 1                    TunnelType           : SR-TE Policy
    Binding SID          : 115                  MTU                  : 1000
    Policy State         : Up                   State Change Time    : 2019-11-22 16:45:09
    Admin State          : Up                   Traffic Statistics   : Disable
    BFD                  : Disable              Backup Hot-Standby   : Disable
    DiffServ-Mode        : -
    Active IGP Metric    : -
    Candidate-path Count : 2                    
    
    Candidate-path Preference: 200
    Path State           : Active               Path Type            : Primary
    Protocol-Origin      : Configuration(30)    Originator           : 0, 0.0.0.0
    Discriminator        : 200                  Binding SID          : -
    GroupId              : 2                    Policy Name          : policy100
    Template ID          : -
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1
      Segment-List ID    : 129                  XcIndex              : 68
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330000, 330002
    
    Candidate-path Preference: 100
    Path State           : Inactive (Valid)     Path Type            : -
    Protocol-Origin      : Configuration(30)    Originator           : 0, 0.0.0.0
    Discriminator        : 100                  Binding SID          : -
    GroupId              : 1                    Policy Name          : policy100
    Template ID          : -
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1backup
      Segment-List ID    : 194                  XcIndex              : -
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330001, 330003

  6. Configure SBFD and HSB.

    # Configure PE1.

    [~PE1] bfd
    [*PE1-bfd] quit
    [*PE1] sbfd
    [*PE1-sbfd] reflector discriminator 1.1.1.9
    [*PE1-sbfd] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] sr-te-policy seamless-bfd enable
    [*PE1-segment-routing] sr-te-policy backup hot-standby enable
    [*PE1-segment-routing] commit
    [~PE1-segment-routing] quit

    # Configure PE2.

    [~PE2] bfd
    [*PE2-bfd] quit
    [*PE2] sbfd
    [*PE2-sbfd] reflector discriminator 3.3.3.9
    [*PE2-sbfd] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] sr-te-policy seamless-bfd enable
    [*PE2-segment-routing] sr-te-policy backup hot-standby enable
    [*PE2-segment-routing] commit
    [~PE2-segment-routing] quit
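
    After SBFD and HSB are enabled, you can run the display sr-te policy command again on each PE:

    [~PE1] display sr-te policy

    Compared with the output shown in the previous step, the BFD and Backup Hot-Standby fields should now display Enable, indicating that the backup candidate path is ready for a hot-standby switchover. (This is an expected-behavior note, not captured output; field names may vary by version.)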

  7. Configure route-policies.

    # Configure PE1.

    [~PE1] route-policy color100 permit node 1
    [*PE1-route-policy] apply extcommunity color 0:100
    [*PE1-route-policy] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] route-policy color200 permit node 1
    [*PE2-route-policy] apply extcommunity color 0:200
    [*PE2-route-policy] quit
    [*PE2] commit
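
    The route-policy takes effect only after it is applied to the BGP4+ peer in the next step. Once routes are received, you can check whether a route carries the Color Extended Community in the BGP routing table. A sketch from PE1, assuming CE2's loopback route has already been learned (output fields may vary by version):

    [~PE1] display bgp ipv6 routing-table 2001:db8::22:2

    The Ext-Community field in the output should include the color value (0:100) set by the route-policy.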

  8. Establish an MP-IBGP peer relationship between PEs and enable them to advertise labeled routes. Apply import route-policies to BGP4+ peers and set the color extended community for routes.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] ipv6-family unicast
    [*PE1-bgp-af-ipv6] peer 3.3.3.9 enable
    [*PE1-bgp-af-ipv6] peer 3.3.3.9 label-route-capability
    [*PE1-bgp-af-ipv6] peer 3.3.3.9 route-policy color100 import
    [*PE1-bgp-af-ipv6] commit
    [~PE1-bgp-af-ipv6] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] ipv6-family unicast
    [*PE2-bgp-af-ipv6] peer 1.1.1.9 enable
    [*PE2-bgp-af-ipv6] peer 1.1.1.9 label-route-capability
    [*PE2-bgp-af-ipv6] peer 1.1.1.9 route-policy color200 import
    [*PE2-bgp-af-ipv6] commit
    [~PE2-bgp-af-ipv6] quit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp ipv6 peer command on the PEs to check whether the BGP peer relationship has been established. If the Established state is displayed in the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp ipv6 peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      3.3.3.9         4         100        6        7     0 00:01:16 Established        0

  9. Configure a tunnel selection policy on each PE for the specified SR-MPLS TE Policy to be preferentially selected.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE1-tunnel-policy-p1] quit
    [*PE1] bgp 100
    [*PE1-bgp] ipv6-family unicast
    [*PE1-bgp-af-ipv6] peer 3.3.3.9 tnl-policy p1
    [*PE1-bgp-af-ipv6] quit
    [*PE1-bgp] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE2-tunnel-policy-p1] quit
    [*PE2] bgp 100
    [*PE2-bgp] ipv6-family unicast
    [*PE2-bgp-af-ipv6] peer 1.1.1.9 tnl-policy p1
    [*PE2-bgp-af-ipv6] quit
    [*PE2-bgp] quit
    [*PE2] commit

  10. Establish an EBGP peer relationship between each CE-PE pair.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ipv6 enable
    [*CE1-LoopBack1] ipv6 address 2001:db8::11:1 128
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ipv6 enable
    [*CE1-GigabitEthernet1/0/0] ipv6 address 2001:db8::1:1 96
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] router-id 10.10.10.10
    [*CE1-bgp] peer 2001:db8::1:2 as-number 100
    [*CE1-bgp] ipv6-family unicast
    [*CE1-bgp-af-ipv6] peer 2001:db8::1:2 enable
    [*CE1-bgp-af-ipv6] network 2001:db8::11:1 128
    [*CE1-bgp-af-ipv6] quit
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see "Configuration Files" in this section.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] peer 2001:db8::1:1 as-number 65410
    [*PE1-bgp] ipv6-family unicast
    [*PE1-bgp-af-ipv6] peer 2001:db8::1:1 enable
    [*PE1-bgp-af-ipv6] commit
    [~PE1-bgp-af-ipv6] quit
    [~PE1-bgp] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see "Configuration Files" in this section.

    After completing the configuration, run the display bgp ipv6 peer command on each PE. The following example uses the peer relationship between PE1 and CE1. The command output shows that the BGP4+ peer relationship between the PE and CE is in the Established state.

    [~PE1] display bgp ipv6 peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 2                 Peers in established state : 2
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      3.3.3.9         4         100        6        7     0 00:01:16 Established        0
      2001:DB8::1:1   4       65410        5        4     0 00:01:16 Established        1

  11. Verify the configuration.

    After completing the configuration, run the display ipv6 routing-table command on each PE to check information about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ipv6 routing-table
    Routing Table : _public_
             Destinations : 8        Routes : 8         
    
    Destination  : ::1                                     PrefixLength : 128
    NextHop      : ::1                                     Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : InLoopBack0                             Flags        : D
    
    Destination  : ::FFFF:127.0.0.0                        PrefixLength : 104
    NextHop      : ::FFFF:127.0.0.1                        Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : InLoopBack0                             Flags        : D
    
    Destination  : ::FFFF:127.0.0.1                        PrefixLength : 128
    NextHop      : ::1                                     Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : InLoopBack0                             Flags        : D
    
    Destination  : 2001:DB8::                              PrefixLength : 96
    NextHop      : 2001:DB8::1:2                           Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : GigabitEthernet2/0/0                    Flags        : D
                    
    Destination  : 2001:DB8::1:2                           PrefixLength : 128
    NextHop      : ::1                                     Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : GigabitEthernet2/0/0                    Flags        : D
                    
    Destination  : 2001:DB8::11:1                          PrefixLength : 128
    NextHop      : 2001:DB8::1:1                           Preference   : 255
    Cost         : 0                                       Protocol     : EBGP
    RelayNextHop : 2001:DB8::1:1                           TunnelID     : 0x0
    Interface    : GigabitEthernet2/0/0                    Flags        : RD
                    
    Destination  : 2001:DB8::22:2                          PrefixLength : 128
    NextHop      : ::FFFF:3.3.3.9                          Preference   : 255
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : 2001:DB8::1:1                           TunnelID     : 0x000000003200000001
    Interface    : policy100                               Flags        : RD
                    
    Destination  : FE80::                                  PrefixLength : 10
    NextHop      : ::                                      Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : NULL0                                   Flags        : DB

    Run the display ipv6 routing-table verbose command on each PE to check details about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ipv6 routing-table 2001:db8::22:2 verbose
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : _public_
    Summary Count : 1
    
    Destination  : 2001:DB8::22:2                          PrefixLength : 128
    NextHop      : ::FFFF:3.3.3.9                          Preference   : 255
    Neighbour    : ::3.3.3.9                               ProcessID    : 0
    Label        : 48183                                   Protocol     : IBGP
    State        : Active Adv Relied                       Cost         : 0
    Entry ID     : 0                                       EntryFlags   : 0x00000000
    Reference Cnt: 0                                       Tag          : 0
    Priority     : low                                     Age          : 288sec
    IndirectID   : 0x10000C8                               Instance     : 
    RelayNextHop : 0.0.0.0                                 TunnelID     : 0x000000003200000001
    Interface    : policy100                              Flags        : RD

    The command output shows that the BGP4+ route has been successfully recursed to the specified SR-MPLS TE Policy.

    CEs can ping each other. For example, CE1 can ping CE2 at 2001:db8::22:2.

    [~CE1] ping ipv6 -a 2001:db8::11:1 2001:db8::22:2
      PING 2001:DB8::22:2 : 56  data bytes, press CTRL_C to break
        Reply from 2001:DB8::22:2
        bytes=56 Sequence=1 hop limit=62  time = 170 ms
        Reply from 2001:DB8::22:2
        bytes=56 Sequence=2 hop limit=62  time = 140 ms
        Reply from 2001:DB8::22:2
        bytes=56 Sequence=3 hop limit=62  time = 150 ms
        Reply from 2001:DB8::22:2
        bytes=56 Sequence=4 hop limit=62  time = 140 ms
        Reply from 2001:DB8::22:2
        bytes=56 Sequence=5 hop limit=62  time = 170 ms
    
      --- 2001:DB8::22:2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 140/154/170 ms

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    bfd
    #
    sbfd
     reflector discriminator 1.1.1.9
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
     ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
     sr-te-policy backup hot-standby enable
     sr-te-policy seamless-bfd enable
     segment-list pe1
      index 10 sid label 330000
      index 20 sid label 330002
     segment-list pe1backup
      index 10 sid label 330001
      index 20 sid label 330003
     sr-te policy policy100 endpoint 3.3.3.9 color 100
      binding-sid 115
      mtu 1000    
      candidate-path preference 200
       segment-list pe1
      candidate-path preference 100
       segment-list pe1backup
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ipv6 enable
     ipv6 address 2001:DB8::1:2/96
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.11.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     peer 2001:DB8::1:1 as-number 65410
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #              
     ipv6-family unicast
      undo synchronization
      peer 3.3.3.9 route-policy color100 import
      peer 3.3.3.9 label-route-capability
      peer 3.3.3.9 tnl-policy p1
      peer 2001:DB8::1:1 enable
    #
    route-policy color100 permit node 1
     apply extcommunity color 0:100
    #               
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
     ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.11.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.12.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    bfd
    #
    sbfd
     reflector discriminator 3.3.3.9
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
     ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
     sr-te-policy backup hot-standby enable
     sr-te-policy seamless-bfd enable
     segment-list pe2
      index 10 sid label 330000
      index 20 sid label 330003
     segment-list pe2backup
      index 10 sid label 330001
      index 20 sid label 330002
     sr-te policy policy200 endpoint 1.1.1.9 color 200
      binding-sid 115
      mtu 1000 
      candidate-path preference 200
       segment-list pe2
      candidate-path preference 100
       segment-list pe2backup
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.14.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ipv6 enable
     ipv6 address 2001:DB8::2:2/96
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.12.1.2 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     peer 2001:DB8::2:1 as-number 65420
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
     #              
     ipv6-family unicast
      undo synchronization
      peer 1.1.1.9 route-policy color200 import
      peer 1.1.1.9 label-route-capability
      peer 1.1.1.9 tnl-policy p1
      peer 2001:DB8::2:1 enable
    #
    route-policy color200 permit node 1
     apply extcommunity color 0:200
    #               
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
     ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.14.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ipv6 enable
     ipv6 address 2001:DB8::1:1/96
    #
    interface LoopBack1
     ipv6 enable
     ipv6 address 2001:DB8::11:1/128
    #
    bgp 65410
     router-id 10.10.10.10
     peer 2001:DB8::1:2 as-number 100
     #
     ipv6-family unicast
      undo synchronization
      network 2001:DB8::11:1 128
      peer 2001:DB8::1:2 enable
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ipv6 enable
     ipv6 address 2001:DB8::2:1/96
    #
    interface LoopBack1
     ipv6 enable
     ipv6 address 2001:DB8::22:2/128
    #
    bgp 65420
     router-id 10.20.20.20
     peer 2001:DB8::2:2 as-number 100
     #
     ipv6-family unicast
      undo synchronization
      network 2001:DB8::22:2 128
      peer 2001:DB8::2:2 enable
    #
    return

Example for Configuring EVPN Route Recursion Based on an SR-MPLS TE Policy

This section provides an example for configuring EVPN route recursion based on an SR-MPLS TE Policy.

Networking Requirements

On the network shown in Figure 1-2687, PE1, P1, P2, and PE2 are in the same AS. IS-IS is required to implement network connectivity. An SR-MPLS TE Policy needs to be configured on PE1 and PE2 to carry EVPN services.

Figure 1-2687 Networking of EVPN route recursion based on an SR-MPLS TE Policy

Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.


Configuration Precautions

During the configuration, note the following:

After a VPN instance is bound to a PE interface connected to a CE, Layer 3 configurations on this interface, such as IP address and routing protocol configurations, are automatically deleted. Add these configurations again if necessary.
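
For example, in step 9 of this example, the Vbdif10 interface is first bound to the VPN instance vpn1 and only then assigned an IP address. If the IP address were configured before the binding, it would be deleted and would need to be reconfigured:

    [*PE1] interface Vbdif10
    [*PE1-Vbdif10] ip binding vpn-instance vpn1
    [*PE1-Vbdif10] ip address 10.1.1.1 255.255.255.0
    [*PE1-Vbdif10] quit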

Configuration Roadmap

The configuration roadmap is as follows:

  1. Assign IP addresses to each device interface.

  2. Configure an IGP on each device so that PEs and Ps on the backbone network can communicate with each other. IS-IS is used as an example.

  3. Enable MPLS and SR for each device on the backbone network, and configure static adjacency SIDs.

  4. Configure an SR-MPLS TE Policy on PEs, and configure path information for the SR-MPLS TE Policy.

  5. Configure SBFD and HSB on the PEs to enhance the reliability of the SR-MPLS TE Policies.

  6. Establish BGP EVPN peer relationships between PEs, apply an import route-policy, and add the extended community attribute Color to routes.

  7. Configure MP-IBGP on the PEs to exchange VPN routing information.

  8. Configure EVPN L3VPN instances on the PEs and bind each interface that connects a PE to a CE to an EVPN L3VPN instance.

  9. Configure a tunnel policy on each PE.

  10. Configure EBGP between CEs and PEs to exchange VPN routing information.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs of PEs and Ps

  • VPN targets and RDs of the EVPN instance named evrf1 and the L3VPN instance named vpn1

Procedure

  1. Configure interface IP addresses on each device.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.11.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip address 10.13.1.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.12.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 10.14.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP on each device so that PEs and Ps on the backbone network can communicate with each other. IS-IS is used as an example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] isis enable 1
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] isis enable 1
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions on each device.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Configure SR on each device.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-1
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] traffic-eng level-1
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-1
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] traffic-eng level-1
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] quit
    [*P2] commit

  5. Configure an SR-MPLS TE Policy on each PE.

    # Configure PE1.

    [~PE1] segment-routing
    [~PE1-segment-routing] segment-list path1
    [*PE1-segment-routing-segment-list-path1] index 10 sid label 330000
    [*PE1-segment-routing-segment-list-path1] index 20 sid label 330002
    [*PE1-segment-routing-segment-list-path1] quit
    [*PE1-segment-routing] segment-list path2
    [*PE1-segment-routing-segment-list-path2] index 10 sid label 330001
    [*PE1-segment-routing-segment-list-path2] index 20 sid label 330003
    [*PE1-segment-routing-segment-list-path2] quit
    [*PE1-segment-routing] sr-te policy policy100 endpoint 3.3.3.9 color 100
    [*PE1-segment-routing-te-policy-policy100] binding-sid 115
    [*PE1-segment-routing-te-policy-policy100] mtu 1000
    [*PE1-segment-routing-te-policy-policy100] candidate-path preference 100
    [*PE1-segment-routing-te-policy-policy100-path] segment-list path1
    [*PE1-segment-routing-te-policy-policy100-path] quit
    [*PE1-segment-routing-te-policy-policy100] quit
    [*PE1-segment-routing] sr-te policy policy200 endpoint 3.3.3.9 color 200
    [*PE1-segment-routing-te-policy-policy200] binding-sid 215
    [*PE1-segment-routing-te-policy-policy200] mtu 1000
    [*PE1-segment-routing-te-policy-policy200] candidate-path preference 100
    [*PE1-segment-routing-te-policy-policy200-path] segment-list path2
    [*PE1-segment-routing-te-policy-policy200-path] quit
    [*PE1-segment-routing-te-policy-policy200] quit
    [*PE1-segment-routing] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [~PE2-segment-routing] segment-list path1
    [*PE2-segment-routing-segment-list-path1] index 10 sid label 330000
    [*PE2-segment-routing-segment-list-path1] index 20 sid label 330003
    [*PE2-segment-routing-segment-list-path1] quit
    [*PE2-segment-routing] segment-list path2
    [*PE2-segment-routing-segment-list-path2] index 10 sid label 330001
    [*PE2-segment-routing-segment-list-path2] index 20 sid label 330002
    [*PE2-segment-routing-segment-list-path2] quit
    [*PE2-segment-routing] sr-te policy policy100 endpoint 1.1.1.9 color 100
    [*PE2-segment-routing-te-policy-policy100] binding-sid 115
    [*PE2-segment-routing-te-policy-policy100] mtu 1000
    [*PE2-segment-routing-te-policy-policy100] candidate-path preference 100
    [*PE2-segment-routing-te-policy-policy100-path] segment-list path1
    [*PE2-segment-routing-te-policy-policy100-path] quit
    [*PE2-segment-routing-te-policy-policy100] quit
    [*PE2-segment-routing] sr-te policy policy200 endpoint 1.1.1.9 color 200
    [*PE2-segment-routing-te-policy-policy200] binding-sid 215
    [*PE2-segment-routing-te-policy-policy200] mtu 1000
    [*PE2-segment-routing-te-policy-policy200] candidate-path preference 100
    [*PE2-segment-routing-te-policy-policy200-path] segment-list path2
    [*PE2-segment-routing-te-policy-policy200-path] quit
    [*PE2-segment-routing-te-policy-policy200] quit
    [*PE2-segment-routing] quit
    [*PE2] commit

  6. Configure SBFD and HSB.

    # Configure PE1.

    [~PE1] bfd
    [*PE1-bfd] quit
    [*PE1] sbfd
    [*PE1-sbfd] reflector discriminator 1.1.1.9
    [*PE1-sbfd] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] sr-te-policy seamless-bfd enable
    [*PE1-segment-routing] sr-te-policy backup hot-standby enable
    [*PE1-segment-routing] commit
    [~PE1-segment-routing] quit

    # Configure PE2.

    [~PE2] bfd
    [*PE2-bfd] quit
    [*PE2] sbfd
    [*PE2-sbfd] reflector discriminator 3.3.3.9
    [*PE2-sbfd] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] sr-te-policy seamless-bfd enable
    [*PE2-segment-routing] sr-te-policy backup hot-standby enable
    [*PE2-segment-routing] commit
    [~PE2-segment-routing] quit

  7. Configure route-policies on the PEs to add the extended community attribute Color to EVPN routes.

    # Configure PE1.

    [~PE1] route-policy color100 permit node 1
    [*PE1-route-policy] if-match route-type evpn mac
    [*PE1-route-policy] apply extcommunity color 0:100
    [*PE1-route-policy] quit
    [*PE1] route-policy color100 permit node 2
    [*PE1-route-policy] if-match route-type evpn prefix
    [*PE1-route-policy] apply extcommunity color 0:200
    [*PE1-route-policy] quit
    [*PE1] route-policy color100 permit node 3
    [*PE1-route-policy] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] route-policy color100 permit node 1
    [*PE2-route-policy] if-match route-type evpn mac
    [*PE2-route-policy] apply extcommunity color 0:100
    [*PE2-route-policy] quit
    [*PE2] route-policy color100 permit node 2
    [*PE2-route-policy] if-match route-type evpn prefix
    [*PE2-route-policy] apply extcommunity color 0:200
    [*PE2-route-policy] quit
    [*PE2] route-policy color100 permit node 3
    [*PE2-route-policy] quit
    [*PE2] commit

  8. Establish BGP EVPN peer relationships between PEs, apply an import route-policy, and add the extended community attribute Color to routes.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] l2vpn-family evpn
    [*PE1-bgp-af-evpn] peer 3.3.3.9 enable
    [*PE1-bgp-af-evpn] peer 3.3.3.9 route-policy color100 import
    [*PE1-bgp-af-evpn] peer 3.3.3.9 advertise irb
    [*PE1-bgp-af-evpn] quit
    [*PE1-bgp] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] l2vpn-family evpn
    [*PE2-bgp-af-evpn] peer 1.1.1.9 enable
    [*PE2-bgp-af-evpn] peer 1.1.1.9 route-policy color100 import
    [*PE2-bgp-af-evpn] peer 1.1.1.9 advertise irb
    [*PE2-bgp-af-evpn] quit
    [*PE2-bgp] quit
    [*PE2] commit

    After the configuration is complete, run the display bgp evpn peer command on each PE to check whether the BGP EVPN peer relationship has been established. If the State field displays Established, the peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp evpn peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      3.3.3.9         4         100        4        4     0 00:00:28 Established        0

  9. Configure EVPN L3VPN instances on PEs and connect sites to the PEs.

    # Configure PE1.

    [~PE1] ip vpn-instance vpn1
    [*PE1-vpn-instance-vpn1] ipv4-family
    [*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 evpn
    [*PE1-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
    [*PE1-vpn-instance-vpn1-af-ipv4] quit
    [*PE1-vpn-instance-vpn1] quit
    [*PE1] evpn vpn-instance evrf1 bd-mode
    [*PE1-evpn-instance-evrf1] route-distinguisher 200:1
    [*PE1-evpn-instance-evrf1] vpn-target 2:2
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] evpn source-address 1.1.1.9
    [*PE1] bridge-domain 10
    [*PE1-bd10] evpn binding vpn-instance evrf1
    [*PE1-bd10] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] esi 0011.1001.1001.1001.1001
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] interface gigabitethernet3/0/0.1 mode l2
    [*PE1-GigabitEthernet3/0/0.1] encapsulation dot1q vid 10
    [*PE1-GigabitEthernet3/0/0.1] rewrite pop single
    [*PE1-GigabitEthernet3/0/0.1] bridge-domain 10
    [*PE1-GigabitEthernet3/0/0.1] quit
    [*PE1] interface Vbdif10
    [*PE1-Vbdif10] ip binding vpn-instance vpn1
    [*PE1-Vbdif10] ip address 10.1.1.1 255.255.255.0
    [*PE1-Vbdif10] arp collect host enable
    [*PE1-Vbdif10] quit
    [*PE1] bgp 100
    [*PE1-bgp] ipv4-family vpn-instance vpn1
    [*PE1-bgp-vpn1] import-route direct
    [*PE1-bgp-vpn1] advertise l2vpn evpn
    [*PE1-bgp-vpn1] quit
    [*PE1-bgp] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpn1
    [*PE2-vpn-instance-vpn1] ipv4-family
    [*PE2-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
    [*PE2-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 evpn
    [*PE2-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
    [*PE2-vpn-instance-vpn1-af-ipv4] quit
    [*PE2-vpn-instance-vpn1] quit
    [*PE2] evpn vpn-instance evrf1 bd-mode
    [*PE2-evpn-instance-evrf1] route-distinguisher 200:1
    [*PE2-evpn-instance-evrf1] vpn-target 2:2
    [*PE2-evpn-instance-evrf1] quit
    [*PE2] evpn source-address 3.3.3.9
    [*PE2] bridge-domain 10
    [*PE2-bd10] evpn binding vpn-instance evrf1
    [*PE2-bd10] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] esi 0011.1001.1001.1001.1002
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet3/0/0.1 mode l2
    [*PE2-GigabitEthernet3/0/0.1] encapsulation dot1q vid 10
    [*PE2-GigabitEthernet3/0/0.1] rewrite pop single
    [*PE2-GigabitEthernet3/0/0.1] bridge-domain 10
    [*PE2-GigabitEthernet3/0/0.1] quit
    [*PE2] interface Vbdif10
    [*PE2-Vbdif10] ip binding vpn-instance vpn1
    [*PE2-Vbdif10] ip address 10.2.1.1 255.255.255.0
    [*PE2-Vbdif10] arp collect host enable
    [*PE2-Vbdif10] quit
    [*PE2] bgp 100
    [*PE2-bgp] ipv4-family vpn-instance vpn1
    [*PE2-bgp-vpn1] import-route direct
    [*PE2-bgp-vpn1] advertise l2vpn evpn
    [*PE2-bgp-vpn1] quit
    [*PE2-bgp] quit
    [*PE2] commit

  10. Configure a tunnel policy on each PE to preferentially select an SR-MPLS TE Policy.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE1-tunnel-policy-p1] quit
    [*PE1] ip vpn-instance vpn1
    [*PE1-vpn-instance-vpn1] ipv4-family
    [*PE1-vpn-instance-vpn1-af-ipv4] tnl-policy p1 evpn
    [*PE1-vpn-instance-vpn1-af-ipv4] quit
    [*PE1-vpn-instance-vpn1] quit
    [*PE1] evpn vpn-instance evrf1 bd-mode
    [*PE1-evpn-instance-evrf1] tnl-policy p1
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE2-tunnel-policy-p1] quit
    [*PE2] ip vpn-instance vpn1
    [*PE2-vpn-instance-vpn1] ipv4-family
    [*PE2-vpn-instance-vpn1-af-ipv4] tnl-policy p1 evpn
    [*PE2-vpn-instance-vpn1-af-ipv4] quit
    [*PE2-vpn-instance-vpn1] quit
    [*PE2] evpn vpn-instance evrf1 bd-mode
    [*PE2-evpn-instance-evrf1] tnl-policy p1
    [*PE2-evpn-instance-evrf1] quit
    [*PE2] commit

  11. Verify the configuration.

    After completing the configurations, run the display bgp evpn all routing-table prefix-route command on PE1 to view information about the IP prefix routes sent from PE2.

    [~PE1] display bgp evpn all routing-table prefix-route
     Local AS number : 100
    
     BGP Local router ID is 1.1.1.9
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,
                   h - history,  i - internal, s - suppressed, S - Stale
                   Origin : i - IGP, e - EGP, ? - incomplete
    
     
     EVPN address family:
     Number of Ip Prefix Routes: 2
     Route Distinguisher: 100:1
           Network(EthTagId/IpPrefix/IpPrefixLen)                 NextHop
     *>    0:10.1.1.0:24                                          0.0.0.0
     *>i   0:10.2.1.0:24                                          3.3.3.9

    Run the display bgp evpn all routing-table prefix-route 0:10.2.1.0:24 command on PE1 to check details about the IP prefix route sent from PE2.

    [~PE1] display bgp evpn all routing-table prefix-route 0:10.2.1.0:24
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total routes of Route Distinguisher(100:1): 1
     BGP routing table entry information of 0:10.2.1.0:24:
     Label information (Received/Applied): 330000/NULL
     From: 3.3.3.9 (3.3.3.9) 
     Route Duration: 0d00h09m04s
     Relay IP Nexthop: 10.11.1.2
     Relay IP Out-Interface:GigabitEthernet1/0/0
     Relay Tunnel Out-Interface: 
     Original nexthop: 3.3.3.9
     Qos information : 0x0
     Ext-Community: RT <1 : 1>, Color <0 : 200>
     AS-path Nil, origin incomplete, MED 0, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20
     Route Type: 5 (Ip Prefix Route)
     Ethernet Tag ID: 0, IP Prefix/Len: 10.2.1.0/24, ESI: 0000.0000.0000.0000.0000, GW IP Address: 0.0.0.0
     Not advertised to any peer yet   

    Run the display ip routing-table vpn-instance vpn1 command on PE1 to view information about the VPN routes sent from PE2.

    [~PE1] display ip routing-table vpn-instance vpn1
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : vpn1
             Destinations : 7        Routes : 7         
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface
    
           10.1.1.0/24  Direct  0    0             D   10.1.1.1        Vbdif10
           10.1.1.1/32  Direct  0    0             D   127.0.0.1       Vbdif10
         10.1.1.255/32  Direct  0    0             D   127.0.0.1       Vbdif10
           10.2.1.0/24  IBGP    255  0             RD  3.3.3.9         policy100 
           10.2.1.1/32  IBGP    255  0             RD  3.3.3.9         policy100 
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0

    Run the display bgp evpn all routing-table mac-route command on PE1 to view information about the MAC routes sent from PE2.

    [~PE1] display bgp evpn all routing-table mac-route
     Local AS number : 100
    
     BGP Local router ID is 1.1.1.9
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,
                   h - history,  i - internal, s - suppressed, S - Stale
                   Origin : i - IGP, e - EGP, ? - incomplete
    
     
     EVPN address family:
     Number of Mac Routes: 4
     Route Distinguisher: 200:1
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop
     *>i   0:48:00e0-fc49-fa04:0:0.0.0.0                          3.3.3.9
     *>i   0:48:00e0-fc49-fa04:32:10.2.1.1                        3.3.3.9
     *>    0:48:00e0-fc0d-3801:0:0.0.0.0                          0.0.0.0
     *>    0:48:00e0-fc0d-3801:32:10.1.1.1                        0.0.0.0
        
    
     EVPN-Instance evrf1:
     Number of Mac Routes: 4
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop
     *>i   0:48:00e0-fc49-fa04:0:0.0.0.0                          3.3.3.9
     *>i   0:48:00e0-fc49-fa04:32:10.2.1.1                        3.3.3.9
     *>    0:48:00e0-fc0d-3801:0:0.0.0.0                          0.0.0.0
     *>    0:48:00e0-fc0d-3801:32:10.1.1.1                        0.0.0.0

    Run the display bgp evpn all routing-table mac-route 0:48:00e0-fc49-fa04:0:0.0.0.0 command on PE1 to check details about the MAC route sent from PE2.

    [~PE1] display bgp evpn all routing-table mac-route 0:48:00e0-fc49-fa04:0:0.0.0.0
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total routes of Route Distinguisher(200:1): 1
     BGP routing table entry information of 0:48:00e0-fc49-fa04:0:0.0.0.0:
     Label information (Received/Applied): 330000/NULL
     From: 3.3.3.9 (3.3.3.9) 
     Route Duration: 0d00h01m49s
     Relay IP Nexthop: 10.11.1.2
     Relay IP Out-Interface:GigabitEthernet1/0/0
     Relay Tunnel Out-Interface:
     Original nexthop: 3.3.3.9
     Qos information : 0x0
     Ext-Community: RT <1 : 1>, RT <2 : 2>, SoO <3.3.3.9 : 0> , Color <0 : 100>, Mac Mobility <flag:1 seq:0 res:0>
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20
     Route Type: 2 (MAC Advertisement Route)
     Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc49-fa04/48, IP Address/Len: 0.0.0.0/0, ESI:0000.0000.0000.0000.0000
     Not advertised to any peer yet
     
        
    
     EVPN-Instance evrf1:
     Number of Mac Routes: 1
     BGP routing table entry information of 0:48:00e0-fc49-fa04:0:0.0.0.0:
     Route Distinguisher: 200:1
     Remote-Cross route
     Label information (Received/Applied): 330000/NULL
     From: 3.3.3.9 (3.3.3.9) 
     Route Duration: 2d02h23m40s
     Relay Tunnel Out-Interface: policy100(srtepolicy)
     Original nexthop: 3.3.3.9
     Qos information : 0x0
     Ext-Community: RT <1 : 1>, RT <2 : 2>, SoO <3.3.3.9 : 0> , Color <0 : 100>, Mac Mobility <flag:1 seq:0 res:0>
     AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255
     Route Type: 2 (MAC Advertisement Route)
     Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc49-fa04/48, IP Address/Len: 0.0.0.0/0, ESI:0000.0000.0000.0000.0000
     Not advertised to any peer yet

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 200:1
     tnl-policy p1
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 100:1
      vpn-target 1:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity evpn
      tnl-policy p1 evpn
      evpn mpls routing-enable
    #
    bfd
    #
    sbfd
     reflector discriminator 1.1.1.9
    #
    mpls lsr-id 1.1.1.9
    #
    mpls
    #
    bridge-domain 10
     evpn binding vpn-instance evrf1
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
     ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
     sr-te-policy backup hot-standby enable
     sr-te-policy seamless-bfd enable
     segment-list path1
      index 10 sid label 330000
      index 20 sid label 330002
     segment-list path2
      index 10 sid label 330001
      index 20 sid label 330003
     sr-te policy policy100 endpoint 3.3.3.9 color 100
      binding-sid 115
      mtu 1000
      candidate-path preference 100
       segment-list path1
     sr-te policy policy200 endpoint 3.3.3.9 color 200
      binding-sid 215
      mtu 1000
      candidate-path preference 100
       segment-list path2
    #
    isis 1
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     traffic-eng level-1
     segment-routing mpls
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 10.1.1.1 255.255.255.0
     arp collect host enable
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.11.1.1 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.13.1.1 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet3/0/0
     undo shutdown
     esi 0011.1001.1001.1001.1001
    #
    interface GigabitEthernet3/0/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1
    #
    bgp 100
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      peer 3.3.3.9 enable
      peer 3.3.3.9 route-policy color100 import
      peer 3.3.3.9 advertise irb
    #
    route-policy color100 permit node 1
     if-match route-type evpn mac
     apply extcommunity color 0:100
    #
    route-policy color100 permit node 2
     if-match route-type evpn prefix
     apply extcommunity color 0:200
    #
    route-policy color100 permit node 3
    #
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    evpn source-address 1.1.1.9
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #
    mpls
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
     ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
    #
    isis 1
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-1
     segment-routing mpls
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.11.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.12.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #
    mpls
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
     ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
    #
    isis 1
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     traffic-eng level-1
     segment-routing mpls
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.13.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.14.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 200:1
     tnl-policy p1
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 100:1
      vpn-target 1:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity evpn
      tnl-policy p1 evpn
      evpn mpls routing-enable
    #
    bfd
    #
    sbfd
     reflector discriminator 3.3.3.9
    #
    mpls lsr-id 3.3.3.9
    #
    mpls
    #
    bridge-domain 10
     evpn binding vpn-instance evrf1
    #
    segment-routing
     ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
     ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
     sr-te-policy backup hot-standby enable
     sr-te-policy seamless-bfd enable
     segment-list path1
      index 10 sid label 330000
      index 20 sid label 330003
     segment-list path2
      index 10 sid label 330001
      index 20 sid label 330002
     sr-te policy policy100 endpoint 1.1.1.9 color 100
      binding-sid 115
      mtu 1000
      candidate-path preference 100
       segment-list path1
     sr-te policy policy200 endpoint 1.1.1.9 color 200
      binding-sid 215
      mtu 1000
      candidate-path preference 100
       segment-list path2
    #
    isis 1
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     traffic-eng level-1
     segment-routing mpls
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 10.2.1.1 255.255.255.0
     arp collect host enable
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.12.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.14.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet3/0/0
     undo shutdown
     esi 0011.1001.1001.1001.1002
    #
    interface GigabitEthernet3/0/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1
    #
    bgp 100
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      peer 1.1.1.9 enable
      peer 1.1.1.9 route-policy color100 import
      peer 1.1.1.9 advertise irb
    #
    route-policy color100 permit node 1
     if-match route-type evpn mac
     apply extcommunity color 0:100
    #
    route-policy color100 permit node 2
     if-match route-type evpn prefix
     apply extcommunity color 0:200
    #
    route-policy color100 permit node 3
    #
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    evpn source-address 3.3.3.9
    #
    return

Example for Redirecting Public IPv4 BGP FlowSpec Routes to SR-MPLS TE Policies

This section provides an example for redirecting public IPv4 BGP FlowSpec routes to SR-MPLS TE Policies to meet the steering requirements of different services.

Networking Requirements

In traditional BGP FlowSpec-based traffic optimization, traffic transmitted over paths with the same source and destination nodes can be redirected to only one path, preventing accurate traffic steering. The function to redirect public IPv4 BGP FlowSpec routes to SR-MPLS TE Policies enables a device to redirect traffic transmitted over paths with the same source and destination nodes to different SR-MPLS TE Policies.

On the network shown in Figure 1-2688, there are two SR-MPLS TE Policies in the direction from PE1 to PE2. PE2 is connected to two local networks (10.8.8.8/32 and 10.9.9.9/32). It is required that traffic destined for different networks be redirected to different SR-MPLS TE Policies.

Figure 1-2688 Networking for redirecting public IPv4 BGP FlowSpec routes to SR-MPLS TE Policies

Interfaces 1 and 2 in this example represent GE 1/0/0 and GE 2/0/0, respectively.


Precautions

None

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IS-IS on the backbone network for the PEs and Ps to communicate.

  2. Enable MPLS on the backbone network and configure SR and static adjacency labels.

  3. Configure two SR-MPLS TE Policies from PE1 to PE2.

  4. Configure static FlowSpec routes on PE1 and redirect the routes to the SR-MPLS TE Policies.

  5. Configure a tunnel policy.

  6. Establish a BGP FlowSpec peer relationship between the PEs.

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs on the PEs and Ps

  • Adjacency labels on the PEs and Ps

Procedure

  1. Configure interface IP addresses.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip address 10.3.1.1 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.1.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.2.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip address 10.4.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 10.3.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 10.4.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  2. Configure an IGP on the backbone network for the PEs and Ps to communicate. The following example uses IS-IS.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] isis enable 1
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] isis enable 1
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Configure SR on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.1.1.1 remote-ip-addr 10.1.1.2 sid 330000
    [*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.3.1.1 remote-ip-addr 10.3.1.2 sid 330001
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-1
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.1.1.2 remote-ip-addr 10.1.1.1 sid 330003
    [*P1-segment-routing] ipv4 adjacency local-ip-addr 10.2.1.1 remote-ip-addr 10.2.1.2 sid 330002
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] traffic-eng level-1
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.2.1.2 remote-ip-addr 10.2.1.1 sid 330000
    [*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.4.1.2 remote-ip-addr 10.4.1.1 sid 330001
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-1
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.3.1.2 remote-ip-addr 10.3.1.1 sid 330002
    [*P2-segment-routing] ipv4 adjacency local-ip-addr 10.4.1.1 remote-ip-addr 10.4.1.2 sid 330003
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] traffic-eng level-1
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] quit
    [*P2] commit

  5. Configure SR-MPLS TE Policies.

    # Configure PE1.

    [~PE1] segment-routing
    [~PE1-segment-routing] segment-list pe1
    [*PE1-segment-routing-segment-list-pe1] index 10 sid label 330000
    [*PE1-segment-routing-segment-list-pe1] index 20 sid label 330002
    [*PE1-segment-routing-segment-list-pe1] quit
    [*PE1-segment-routing] segment-list pe1backup
    [*PE1-segment-routing-segment-list-pe1backup] index 10 sid label 330001
    [*PE1-segment-routing-segment-list-pe1backup] index 20 sid label 330003
    [*PE1-segment-routing-segment-list-pe1backup] quit
    [*PE1-segment-routing] sr-te policy policy100 endpoint 3.3.3.9 color 100
    [*PE1-segment-routing-te-policy-policy100] binding-sid 115
    [*PE1-segment-routing-te-policy-policy100] mtu 1000
    [*PE1-segment-routing-te-policy-policy100] candidate-path preference 200
    [*PE1-segment-routing-te-policy-policy100-path] segment-list pe1
    [*PE1-segment-routing-te-policy-policy100-path] quit
    [*PE1-segment-routing-te-policy-policy100] quit
    [*PE1-segment-routing] sr-te policy policy200 endpoint 3.3.3.9 color 200
    [*PE1-segment-routing-te-policy-policy200] binding-sid 125
    [*PE1-segment-routing-te-policy-policy200] mtu 1000
    [*PE1-segment-routing-te-policy-policy200] candidate-path preference 200
    [*PE1-segment-routing-te-policy-policy200-path] segment-list pe1backup
    [*PE1-segment-routing-te-policy-policy200-path] quit
    [*PE1-segment-routing-te-policy-policy200] quit
    [*PE1-segment-routing] quit
    [*PE1] commit

    After the configuration is complete, you can run the display sr-te policy command to check SR-MPLS TE Policy information.

    [~PE1] display sr-te policy
    PolicyName : policy100
    Endpoint             : 3.3.3.9              Color                : 100
    TunnelId             : 8193                 TunnelType           : SR-TE Policy
    Binding SID          : 115                  MTU                  : 1000
    Policy State         : Up                   State Change Time    : 2020-05-16 10:18:16
    Admin State          : UP                   Traffic Statistics   : Disable
    BFD                  : Disable              Backup Hot-Standby   : Disable
    DiffServ-Mode        : -
    Active IGP Metric    : -
    Candidate-path Count : 1                   
    
    Candidate-path Preference: 200
    Path State           : Active               Path Type            : Primary
    Protocol-Origin      : Configuration(30)    Originator           : 0, 0.0.0.0
    Discriminator        : 200                  Binding SID          : 115
    GroupId              : 8193                 Policy Name          : policy100
    Template ID          : -   
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1
      Segment-List ID    : 32771                XcIndex              : 2032771
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330000, 330002
    
    PolicyName : policy200
    Endpoint             : 3.3.3.9              Color                : 200
    TunnelId             : 8194                 TunnelType           : SR-TE Policy
    Binding SID          : 125                  MTU                  : 1000
    Policy State         : Up                   State Change Time    : 2020-05-16 10:20:32
    Admin State          : Up                   Traffic Statistics   : Disable
    BFD                  : Disable              Backup Hot-Standby   : Disable
    DiffServ-Mode        : -
    Active IGP Metric    : -
    Candidate-path Count : 1                   
    
    Candidate-path Preference: 200
    Path State           : Active               Path Type            : Primary
    Protocol-Origin      : Configuration(30)    Originator           : 0, 0.0.0.0
    Discriminator        : 200                  Binding SID          : 125
    GroupId              : 8194                 Policy Name          : policy200
    Template ID          : -    
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1backup
      Segment-List ID    : 32833                XcIndex              : 2032833
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330001, 330003

  6. Establish a BGP peer relationship between the PEs and import local network routes to PE2.

    In this example, loopback addresses 10.8.8.8/32 and 10.9.9.9/32 are used to simulate two local networks on PE2.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] commit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] interface loopback 2
    [*PE2-LoopBack2] ip address 10.8.8.8 32
    [*PE2-LoopBack2] quit
    [*PE2] interface loopback 3
    [*PE2-LoopBack3] ip address 10.9.9.9 32
    [*PE2-LoopBack3] quit
    [*PE2] bgp 100
    [*PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] network 10.8.8.8 32
    [*PE2-bgp] network 10.9.9.9 32
    [*PE2-bgp] commit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp peer command on the PEs to check whether a BGP peer relationship has been established between them. If the State field displays Established, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp peer
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      3.3.3.9         4         100      201      202     0 02:51:56 Established        2

  7. Establish a BGP FlowSpec peer relationship between the PEs.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv4-family flow
    [*PE1-bgp-af-ipv4-flow] peer 3.3.3.9 enable
    [*PE1-bgp-af-ipv4-flow] commit
    [~PE1-bgp-af-ipv4-flow] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [~PE2-bgp] ipv4-family flow
    [*PE2-bgp-af-ipv4-flow] peer 1.1.1.9 enable
    [*PE2-bgp-af-ipv4-flow] commit
    [~PE2-bgp-af-ipv4-flow] quit
    [~PE2-bgp] quit

    After the configuration is complete, run the display bgp flow peer command on the PEs to check whether a BGP FlowSpec peer relationship has been established between them. If the State field displays Established, the BGP FlowSpec peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp flow peer 
    
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      3.3.3.9         4         100      208      209     0 02:58:16 Established        0

  8. Configure static BGP FlowSpec route redirection on PE1.

    BGP FlowSpec route redirection is based on <Redirection IP address, Color>. If the redirection IP address and color attributes of a BGP FlowSpec route match the endpoint and color attributes of an SR-MPLS TE Policy, the route can be successfully redirected to the SR-MPLS TE Policy.

    In this example, the traffic destined for 10.8.8.8/32 needs to be redirected to the SR-MPLS TE Policy named policy100, and the traffic destined for 10.9.9.9/32 needs to be redirected to the SR-MPLS TE Policy named policy200.

    # Configure PE1.

    [~PE1] flow-route PE1toPE2
    [*PE1-flow-route] if-match destination 10.8.8.8 255.255.255.255
    [*PE1-flow-route] apply redirect ip 3.3.3.9:0 color 0:100
    [*PE1-flow-route] quit               
    [*PE1] flow-route PE1toPE2b
    [*PE1-flow-route] if-match destination 10.9.9.9 255.255.255.255
    [*PE1-flow-route] apply redirect ip 3.3.3.9:0 color 0:200
    [*PE1-flow-route] commit
    [~PE1-flow-route] quit
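
    The <Redirection IP address, Color> matching rule can be seen by placing the relevant lines of this example's configurations side by side (all lines are taken from earlier steps in this example):

    ```
    # Flow route PE1toPE2 redirects to <3.3.3.9, color 100> ...
    apply redirect ip 3.3.3.9:0 color 0:100
    # ... which matches the endpoint and color of policy100 (step 5):
    sr-te policy policy100 endpoint 3.3.3.9 color 100

    # Flow route PE1toPE2b redirects to <3.3.3.9, color 200> ...
    apply redirect ip 3.3.3.9:0 color 0:200
    # ... which matches the endpoint and color of policy200 (step 5):
    sr-te policy policy200 endpoint 3.3.3.9 color 200
    ```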

  9. Configure a tunnel policy on each PE to preferentially select an SR-MPLS TE Policy.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE1-tunnel-policy-p1] quit
    [*PE1] tunnel-selector p1 permit node 1
    [*PE1-tunnel-selector] apply tunnel-policy p1
    [*PE1-tunnel-selector] quit
    [*PE1] bgp 100
    [*PE1-bgp] ipv4-family flow
    [*PE1-bgp-af-ipv4-flow] redirect ip recursive-lookup tunnel tunnel-selector p1
    [*PE1-bgp-af-ipv4-flow] commit
    [~PE1-bgp-af-ipv4-flow] quit
    [~PE1-bgp] quit

  10. Verify the configuration.

    # Display BGP FlowSpec route information on PE1.

    [~PE1] display bgp flow routing-table
     BGP Local router ID is 1.1.1.9
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,
                   h - history,  i - internal, s - suppressed, S - Stale
                   Origin : i - IGP, e - EGP, ? - incomplete
     RPKI validation codes: V - valid, I - invalid, N - not-found
    
    
     Total Number of Routes: 2
     * >  ReIndex : 24577
          Dissemination Rules:
           Destination IP : 10.8.8.8/32
           MED      : 0                   PrefVal  : 0                   
           LocalPref:                           
           Path/Ogn :  i
     * >  ReIndex : 24578
          Dissemination Rules:
           Destination IP : 10.9.9.9/32
           MED      : 0                   PrefVal  : 0                   
           LocalPref:                           
           Path/Ogn :  i

    # Display the traffic redirection information carried in a single BGP FlowSpec route based on the ReIndex value shown in the preceding command output.

    [~PE1] display bgp flow routing-table 24577
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     ReIndex : 24577
     Dissemination Rules :
       Destination IP : 10.8.8.8/32
    
     BGP flow-ipv4 routing table entry information of 24577:
     Local : PE1toPE2 
     Match action :
       apply redirect ip 3.3.3.9:0 color 0:100
     Route Duration: 0d03h10m19s
     AS-path Nil, origin igp, MED 0, pref-val 0, valid, local, best, pre 255
     Advertised to such 1 peers:
        3.3.3.9
    [~PE1] display bgp flow routing-table 24578
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     ReIndex : 24578
     Dissemination Rules :
       Destination IP : 10.9.9.9/32
    
     BGP flow-ipv4 routing table entry information of 24578:
     Local : PE1toPE2b 
     Match action :
       apply redirect ip 3.3.3.9:0 color 0:200
     Route Duration: 0d03h11m39s
     AS-path Nil, origin igp, MED 0, pref-val 0, valid, local, best, pre 255
     Advertised to such 1 peers:
        3.3.3.9

    # Inject test traffic into PE1 and enable SR-MPLS TE Policy traffic statistics collection. Then, run the display sr-te policy traffic-statistics [ endpoint ipv4-address color color-value | policy-name name-value | binding-sid binding-sid ] command to check SR-MPLS TE Policy traffic statistics.
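
    For example, the statistics of a single policy can be checked by name (sample invocation only; the counters depend on the injected traffic, so no output is shown here):

    ```
    [~PE1] display sr-te policy traffic-statistics policy-name policy100
    ```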

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.1.1.1 remote-ip-addr 10.1.1.2 sid 330000
     ipv4 adjacency local-ip-addr 10.3.1.1 remote-ip-addr 10.3.1.2 sid 330001
     segment-list pe1
      index 10 sid label 330000
      index 20 sid label 330002
     segment-list pe1backup
      index 10 sid label 330001
      index 20 sid label 330003
     sr-te policy policy100 endpoint 3.3.3.9 color 100
      binding-sid 115
      mtu 1000      
      candidate-path preference 200
       segment-list pe1
     sr-te policy policy200 endpoint 3.3.3.9 color 200
      binding-sid 125
      mtu 1000      
      candidate-path preference 200
       segment-list pe1backup
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0001.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.1.1.1 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.3.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
     #
     ipv4-family flow
      peer 3.3.3.9 enable
      redirect ip recursive-lookup tunnel tunnel-selector p1
    #
    flow-route PE1toPE2
     if-match destination 10.8.8.8 255.255.255.255
     apply redirect ip 3.3.3.9:0 color 0:100
    #               
    flow-route PE1toPE2b
     if-match destination 10.9.9.9 255.255.255.255
     apply redirect ip 3.3.3.9:0 color 0:200
    #
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    tunnel-selector p1 permit node 1
     apply tunnel-policy p1
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.2.1.1 remote-ip-addr 10.2.1.2 sid 330002
     ipv4 adjacency local-ip-addr 10.1.1.2 remote-ip-addr 10.1.1.1 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.1.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.2.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.2.1.2 remote-ip-addr 10.2.1.1 sid 330000
     ipv4 adjacency local-ip-addr 10.4.1.2 remote-ip-addr 10.4.1.1 sid 330001
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0003.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.2.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.4.1.2 255.255.255.0
     isis enable 1 
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
    #               
    interface LoopBack2
     ip address 10.8.8.8 255.255.255.255
    #               
    interface LoopBack3
     ip address 10.9.9.9 255.255.255.255
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      network 10.8.8.8 255.255.255.255
      network 10.9.9.9 255.255.255.255
      peer 1.1.1.9 enable
     #
     ipv4-family flow
      peer 1.1.1.9 enable
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
     ipv4 adjacency local-ip-addr 10.3.1.2 remote-ip-addr 10.3.1.1 sid 330002
     ipv4 adjacency local-ip-addr 10.4.1.1 remote-ip-addr 10.4.1.2 sid 330003
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     traffic-eng level-1
     segment-routing mpls
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.3.1.2 255.255.255.0
     isis enable 1  
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.4.1.1 255.255.255.0
     isis enable 1  
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
    #
    return

Example for Creating an SR-MPLS TE Policy in ODN Mode

This section provides an example for creating an SR-MPLS TE Policy in ODN mode to better meet service requirements.

Networking Requirements

On the network shown in Figure 1-2689:
  • CE1 and CE2 belong to a VPN instance named vpna.

  • The VPN target used by vpna is 111:1.

Configure EVPN L3VPN services to recurse to SR-MPLS TE Policies to ensure secure communication between users in the same VPN. Because multiple links exist between the PEs on the public network, the other links must be able to protect the primary link if it fails.

In traditional scenarios where traffic is steered into SR-MPLS TE Policies based on colors, the SR-MPLS TE Policies must be configured in advance. This may fail to meet diversified service requirements. The on-demand next hop (ODN) function does not require a large number of SR-MPLS TE Policies to be configured in advance. Instead, it enables SR-MPLS TE Policy creation to be dynamically triggered on demand based on service routes, simplifying network operations. During SR-MPLS TE Policy creation, you can select a pre-configured attribute template and constraint template to ensure that the to-be-created SR-MPLS TE Policy meets service requirements.
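The on-demand trigger logic can be pictured as follows. This is an illustrative Python sketch, not device code: the class and method names are invented, and real policy creation also involves path computation against the template's constraints.

```python
# Illustrative sketch of ODN behavior; names are invented, not a device API.
class OdnManager:
    def __init__(self):
        self.templates = {}  # color -> constraint/attribute template name
        self.policies = {}   # (endpoint, color) -> created policy name

    def add_template(self, color, template_name):
        # Corresponds to "on-demand color <color>" with a constraint template.
        self.templates[color] = template_name

    def on_service_route(self, next_hop, color):
        # A BGP service route arrives carrying a color extended community.
        key = (next_hop, color)
        if key in self.policies:
            return self.policies[key]  # reuse the existing policy
        if color not in self.templates:
            return None                # no ODN template: nothing is created
        # Creation is triggered on demand; path computation would honor the
        # template's constraints (e.g. TE metric limits).
        self.policies[key] = "policy(%s, color %d)" % (next_hop, color)
        return self.policies[key]

odn = OdnManager()
odn.add_template(200, "tp200")               # like "on-demand color 200"
print(odn.on_service_route("3.3.3.9", 200))  # triggers on-demand creation
print(odn.on_service_route("3.3.3.9", 300))  # no template for color 300
```

A route whose color has no matching ODN template simply does not trigger policy creation, which is why only the colors carried by service routes need templates.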

Figure 1-2689 Creating an SR-MPLS TE Policy in ODN mode

Interfaces 1 through 4 in this example represent GE1/0/0, GE2/0/0, GE3/0/0, and GE4/0/0, respectively.


Configuration Notes

During the configuration, note the following:

After a PE interface connected to a CE is bound to a VPN instance, Layer 3 configurations on this interface, such as IP addresses and routing protocol configurations, are automatically deleted and must be reconfigured if needed.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IP addresses for interfaces on the backbone network.

  2. Configure IS-IS on the backbone network to ensure that PEs can interwork with each other.

  3. Enable MPLS on the backbone network.

  4. Configure SR on the backbone network. In addition, configure IS-IS SR and use IS-IS to dynamically generate adjacency SIDs and advertise the SIDs to neighbors.

  5. Configure TE attributes. The TE metric is used in this example. You can also configure other attributes, such as affinity attributes, path constraints, bandwidth constraints, bandwidth usage, IGP metric, delay, and hop limit, as required.
  6. Establish a BGP-LS address family peer relationship between PE1 and the controller and another one between PE2 and the controller. In this way, the PEs can report network topology and label information to the controller through BGP-LS.

  7. Configure route-policies on PE1 and PE2 to set the color extended community attribute for routes. Export route-policies are used in this example.

  8. Configure an ODN template.
  9. Establish PCEP connections between the PEs and the controller. Through these connections, the PEs, functioning as PCCs, send path computation requests to the controller, which functions as the PCE. After completing path computation, the controller delivers the results to the PEs through PCEP.
  10. Configure a source address on each PE.
  11. Establish a BGP EVPN peer relationship between the PEs for them to exchange routing information.

  12. Configure an EVPN L3VPN instance on each PE and bind an access-side interface to the instance.

  13. Configure a tunnel selection policy on each PE.

  14. Establish EBGP peer relationships between the CEs and PEs for them to exchange routing information.
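The color-based recursion set up by the roadmap's route-policy and tunnel-policy steps works as follows: the export route-policy stamps a color on each service route, and the tunnel selection policy then resolves the route to the SR-MPLS TE Policy keyed by the route's BGP next hop and color. A minimal Python sketch, purely illustrative (the dictionaries are invented stand-ins for device tables):

```python
# Minimal sketch of color-based tunnel recursion; not device code.
sr_te_policies = {("3.3.3.9", 200): "policy200"}  # (endpoint, color) -> policy

def recurse(route):
    """Resolve a colored service route to an SR-MPLS TE Policy, if any."""
    key = (route["next_hop"], route["color"])
    return sr_te_policies.get(key)  # None means no matching policy exists

route = {"prefix": "10.22.2.2/32", "next_hop": "3.3.3.9", "color": 200}
print(recurse(route))  # -> policy200
```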

Data Preparation

To complete the configuration, you need the following data:

  • MPLS LSR IDs on the PEs and Ps

  • VPN target and RD of vpna

  • SRGB ranges on the PEs and Ps

Procedure

  1. Configure IP addresses for interfaces.

    # Configure PE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE1
    [*HUAWEI] commit
    [~PE1] interface loopback 1
    [*PE1-LoopBack1] ip address 1.1.1.9 32
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] interface gigabitethernet4/0/0
    [*PE1-GigabitEthernet4/0/0] ip address 10.3.1.1 24
    [*PE1-GigabitEthernet4/0/0] quit
    [*PE1] commit

    # Configure P1.

    <HUAWEI> system-view
    [~HUAWEI] sysname P1
    [*HUAWEI] commit
    [~P1] interface loopback 1
    [*P1-LoopBack1] ip address 2.2.2.9 32
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] interface gigabitethernet3/0/0
    [*P1-GigabitEthernet3/0/0] ip address 10.23.1.1 24
    [*P1-GigabitEthernet3/0/0] quit
    [*P1] commit

    # Configure PE2.

    <HUAWEI> system-view
    [~HUAWEI] sysname PE2
    [*HUAWEI] commit
    [~PE2] interface loopback 1
    [*PE2-LoopBack1] ip address 3.3.3.9 32
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] ip address 10.14.1.2 24
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] ip address 10.12.1.2 24
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet4/0/0
    [*PE2-GigabitEthernet4/0/0] ip address 10.4.1.1 24
    [*PE2-GigabitEthernet4/0/0] quit
    [*PE2] commit

    # Configure P2.

    <HUAWEI> system-view
    [~HUAWEI] sysname P2
    [*HUAWEI] commit
    [~P2] interface loopback 1
    [*P2-LoopBack1] ip address 4.4.4.9 32
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] interface gigabitethernet3/0/0
    [*P2-GigabitEthernet3/0/0] ip address 10.23.1.2 24
    [*P2-GigabitEthernet3/0/0] quit
    [*P2] commit

    # Configure the Controller.

    <HUAWEI> system-view
    [~HUAWEI] sysname Controller
    [*HUAWEI] commit
    [~Controller] interface loopback 1
    [*Controller-LoopBack1] ip address 10.10.10.10 32
    [*Controller-LoopBack1] quit
    [*Controller] interface gigabitethernet1/0/0
    [*Controller-GigabitEthernet1/0/0] ip address 10.3.1.2 24
    [*Controller-GigabitEthernet1/0/0] quit
    [*Controller] interface gigabitethernet2/0/0
    [*Controller-GigabitEthernet2/0/0] ip address 10.4.1.2 24
    [*Controller-GigabitEthernet2/0/0] quit
    [*Controller] commit

  2. Configure an IGP on the backbone network so that the PEs and Ps can communicate with each other. IS-IS is used in this example.

    # Configure PE1.

    [~PE1] isis 1
    [*PE1-isis-1] is-level level-1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] isis enable 1
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] isis enable 1
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] isis 1
    [*P1-isis-1] is-level level-1
    [*P1-isis-1] network-entity 10.0000.0000.0002.00
    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis enable 1
    [*P1-LoopBack1] quit
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] isis enable 1
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] isis enable 1
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] interface gigabitethernet3/0/0
    [*P1-GigabitEthernet3/0/0] isis enable 1
    [*P1-GigabitEthernet3/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] isis 1
    [*PE2-isis-1] is-level level-1
    [*PE2-isis-1] network-entity 10.0000.0000.0003.00
    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis enable 1
    [*PE2-LoopBack1] quit
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] isis enable 1
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] isis enable 1
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] isis 1
    [*P2-isis-1] is-level level-1
    [*P2-isis-1] network-entity 10.0000.0000.0004.00
    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis enable 1
    [*P2-LoopBack1] quit
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] isis enable 1
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] isis enable 1
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] interface gigabitethernet3/0/0
    [*P2-GigabitEthernet3/0/0] isis enable 1
    [*P2-GigabitEthernet3/0/0] quit
    [*P2] commit

  3. Configure basic MPLS functions on the backbone network.

    # Configure PE1.

    [~PE1] mpls lsr-id 1.1.1.9
    [*PE1] mpls
    [*PE1-mpls] commit
    [~PE1-mpls] quit

    # Configure P1.

    [~P1] mpls lsr-id 2.2.2.9
    [*P1] mpls
    [*P1-mpls] commit
    [~P1-mpls] quit

    # Configure PE2.

    [~PE2] mpls lsr-id 3.3.3.9
    [*PE2] mpls
    [*PE2-mpls] commit
    [~PE2-mpls] quit

    # Configure P2.

    [~P2] mpls lsr-id 4.4.4.9
    [*P2] mpls
    [*P2-mpls] commit
    [~P2-mpls] quit

  4. Configure SR on the backbone network.

    # Configure PE1.

    [~PE1] segment-routing
    [*PE1-segment-routing] quit
    [*PE1] isis 1
    [*PE1-isis-1] cost-style wide
    [*PE1-isis-1] traffic-eng level-1
    [*PE1-isis-1] segment-routing mpls
    [*PE1-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis prefix-sid index 10
    [*PE1-LoopBack1] quit
    [*PE1] commit

    # Configure P1.

    [~P1] segment-routing
    [*P1-segment-routing] quit
    [*P1] isis 1
    [*P1-isis-1] cost-style wide
    [*P1-isis-1] traffic-eng level-1
    [*P1-isis-1] segment-routing mpls
    [*P1-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P1-isis-1] quit
    [*P1] interface loopback 1
    [*P1-LoopBack1] isis prefix-sid index 20
    [*P1-LoopBack1] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] segment-routing
    [*PE2-segment-routing] quit
    [*PE2] isis 1
    [*PE2-isis-1] cost-style wide
    [*PE2-isis-1] traffic-eng level-1
    [*PE2-isis-1] segment-routing mpls
    [*PE2-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*PE2-isis-1] quit
    [*PE2] interface loopback 1
    [*PE2-LoopBack1] isis prefix-sid index 30
    [*PE2-LoopBack1] quit
    [*PE2] commit

    # Configure P2.

    [~P2] segment-routing
    [*P2-segment-routing] quit
    [*P2] isis 1
    [*P2-isis-1] cost-style wide
    [*P2-isis-1] traffic-eng level-1
    [*P2-isis-1] segment-routing mpls
    [*P2-isis-1] segment-routing global-block 16000 23999

    The SRGB range varies according to the device. The range specified in this example is for reference only.

    [*P2-isis-1] quit
    [*P2] interface loopback 1
    [*P2-LoopBack1] isis prefix-sid index 40
    [*P2-LoopBack1] quit
    [*P2] commit

  5. Configure the TE metric attribute.

    # Configure PE1.

    [~PE1] te attribute enable
    [*PE1] interface gigabitethernet1/0/0
    [*PE1-GigabitEthernet1/0/0] te metric 20
    [*PE1-GigabitEthernet1/0/0] quit
    [*PE1] interface gigabitethernet3/0/0
    [*PE1-GigabitEthernet3/0/0] te metric 10
    [*PE1-GigabitEthernet3/0/0] quit
    [*PE1] commit

    # Configure P1.

    [~P1] te attribute enable
    [*P1] interface gigabitethernet1/0/0
    [*P1-GigabitEthernet1/0/0] te metric 10
    [*P1-GigabitEthernet1/0/0] quit
    [*P1] interface gigabitethernet2/0/0
    [*P1-GigabitEthernet2/0/0] te metric 10
    [*P1-GigabitEthernet2/0/0] quit
    [*P1] interface gigabitethernet3/0/0
    [*P1-GigabitEthernet3/0/0] te metric 100
    [*P1-GigabitEthernet3/0/0] quit
    [*P1] commit

    # Configure PE2.

    [~PE2] te attribute enable
    [*PE2] interface gigabitethernet3/0/0
    [*PE2-GigabitEthernet3/0/0] te metric 20
    [*PE2-GigabitEthernet3/0/0] quit
    [*PE2] interface gigabitethernet1/0/0
    [*PE2-GigabitEthernet1/0/0] te metric 10
    [*PE2-GigabitEthernet1/0/0] quit
    [*PE2] commit

    # Configure P2.

    [~P2] te attribute enable
    [*P2] interface gigabitethernet1/0/0
    [*P2-GigabitEthernet1/0/0] te metric 20
    [*P2-GigabitEthernet1/0/0] quit
    [*P2] interface gigabitethernet2/0/0
    [*P2-GigabitEthernet2/0/0] te metric 20
    [*P2-GigabitEthernet2/0/0] quit
    [*P2] interface gigabitethernet3/0/0
    [*P2-GigabitEthernet3/0/0] te metric 100
    [*P2-GigabitEthernet3/0/0] quit
    [*P2] commit
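These TE metrics interact with the constraint template configured later (max-cumulation te 20): the PE1-P1-PE2 path accumulates a TE metric of 20 and remains eligible, whereas PE1-P2-PE2 accumulates 40 and is excluded. The following Python check is purely illustrative (the link values are copied from the configuration above; this is not device code):

```python
# Cumulative TE metric check; values match the "te metric" commands above.
te_metric = {
    ("PE1", "P1"): 10, ("P1", "PE2"): 10,  # PE1 GE3/0/0 and P1 GE2/0/0
    ("PE1", "P2"): 20, ("P2", "PE2"): 20,  # PE1 GE1/0/0 and P2 GE2/0/0
}

def cumulative_te(path):
    """Sum the per-link TE metrics along an ordered node path."""
    return sum(te_metric[link] for link in zip(path, path[1:]))

for path in (["PE1", "P1", "PE2"], ["PE1", "P2", "PE2"]):
    total = cumulative_te(path)
    verdict = "eligible" if total <= 20 else "violates max-cumulation te 20"
    print("-".join(path), total, verdict)
```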

  6. Establish BGP-LS address family peer relationships.

    To allow a PE to report topology information to a controller through BGP-LS, you must enable IS-IS-based topology advertisement on the PE. Typically, this configuration needs to be performed on only one device in an IGP domain. In this example, this configuration is performed on both PE1 and PE2 to improve reliability.

    # Enable IS-IS topology advertisement through BGP-LS on PE1.

    [~PE1] isis 1
    [~PE1-isis-1] bgp-ls enable level-1
    [*PE1-isis-1] commit
    [~PE1-isis-1] quit

    # Enable IS-IS topology advertisement through BGP-LS on PE2.

    [~PE2] isis 1
    [~PE2-isis-1] bgp-ls enable level-1
    [*PE2-isis-1] commit
    [~PE2-isis-1] quit

    # Configure the BGP-LS address family peer relationship on PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 10.3.1.2 as-number 100
    [*PE1-bgp] link-state-family unicast
    [*PE1-bgp-af-ls] peer 10.3.1.2 enable
    [*PE1-bgp-af-ls] quit
    [*PE1-bgp] quit
    [*PE1] commit

    # Configure the BGP-LS address family peer relationship on PE2.

    [~PE2] bgp 100
    [*PE2-bgp] peer 10.4.1.2 as-number 100
    [*PE2-bgp] link-state-family unicast
    [*PE2-bgp-af-ls] peer 10.4.1.2 enable
    [*PE2-bgp-af-ls] quit
    [*PE2-bgp] quit
    [*PE2] commit

    # Configure the BGP-LS address family peer relationship on the controller.

    [~Controller] bgp 100
    [*Controller-bgp] peer 10.3.1.1 as-number 100
    [*Controller-bgp] peer 10.4.1.1 as-number 100
    [*Controller-bgp] link-state-family unicast
    [*Controller-bgp-af-ls] peer 10.3.1.1 enable
    [*Controller-bgp-af-ls] peer 10.4.1.1 enable
    [*Controller-bgp-af-ls] quit
    [*Controller-bgp] quit
    [*Controller] commit

  7. Configure route-policies.

    # Configure PE1.

    [~PE1] route-policy color100 permit node 1
    [*PE1-route-policy] apply extcommunity color 0:100
    [*PE1-route-policy] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] route-policy color200 permit node 1
    [*PE2-route-policy] apply extcommunity color 0:200
    [*PE2-route-policy] quit
    [*PE2] commit

  8. Configure a source address on each PE.

    # Configure PE1.

    [~PE1] evpn source-address 1.1.1.9
    [*PE1] commit

    # Configure PE2.

    [~PE2] evpn source-address 3.3.3.9
    [*PE2] commit

  9. Establish a BGP EVPN peer relationship between the PEs, apply export route-policies to the BGP EVPN peers, and set the color extended community for routes.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] peer 3.3.3.9 as-number 100
    [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
    [*PE1-bgp] l2vpn-family evpn
    [*PE1-bgp-af-evpn] peer 3.3.3.9 enable
    [*PE1-bgp-af-evpn] peer 3.3.3.9 route-policy color100 export
    [*PE1-bgp-af-evpn] peer 3.3.3.9 advertise irb
    [*PE1-bgp-af-evpn] commit
    [~PE1-bgp-af-evpn] quit
    [~PE1-bgp] quit

    # Configure PE2.

    [~PE2] bgp 100
    [~PE2-bgp] peer 1.1.1.9 as-number 100
    [*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
    [*PE2-bgp] l2vpn-family evpn
    [*PE2-bgp-af-evpn] peer 1.1.1.9 enable
    [*PE2-bgp-af-evpn] peer 1.1.1.9 route-policy color200 export
    [*PE2-bgp-af-evpn] peer 1.1.1.9 advertise irb
    [*PE2-bgp-af-evpn] commit
    [~PE2-bgp-af-evpn] quit
    [~PE2-bgp] quit

    After completing the configuration, run the display bgp evpn peer command on the PEs to check whether the BGP peer relationship has been established. If the Established state is displayed in the command output, the BGP peer relationship has been established successfully. The following example uses the command output on PE1.

    [~PE1] display bgp evpn peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
     Total number of peers : 1                 Peers in established state : 1
      Peer            V    AS  MsgRcvd  MsgSent    OutQ  Up/Down    State        PrefRcv
      3.3.3.9         4   100   12      18         0     00:09:38   Established   0

  10. Configure an ODN template.

    # Configure PE1.

    [~PE1] segment-routing policy constraint-template tp200
    [*PE1-sr-policy-constraint-template-tp200] metric-type te
    [*PE1-sr-policy-constraint-template-tp200] max-cumulation te 20
    [*PE1-sr-policy-constraint-template-tp200] quit
    [*PE1] segment-routing
    [*PE1-segment-routing] on-demand color 200
    [*PE1-segment-routing-odn-200] dynamic-computation-seq pcep
    [*PE1-segment-routing-odn-200] constraint-template tp200
    [*PE1-segment-routing-odn-200] candidate-path preference 100
    [*PE1-segment-routing-odn-200] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] segment-routing policy constraint-template tp100
    [*PE2-sr-policy-constraint-template-tp100] metric-type te
    [*PE2-sr-policy-constraint-template-tp100] max-cumulation te 20
    [*PE2-sr-policy-constraint-template-tp100] quit
    [*PE2] segment-routing
    [*PE2-segment-routing] on-demand color 100
    [*PE2-segment-routing-odn-100] dynamic-computation-seq pcep
    [*PE2-segment-routing-odn-100] constraint-template tp100
    [*PE2-segment-routing-odn-100] candidate-path preference 100
    [*PE2-segment-routing-odn-100] quit
    [*PE2] commit

  11. Configure PCEP delegation.

    # Configure PE1.

    [~PE1] pce-client
    [*PE1-pce-client] capability segment-routing
    [*PE1-pce-client] connect-server 10.10.10.10
    [*PE1-pce-client] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] pce-client
    [*PE2-pce-client] capability segment-routing
    [*PE2-pce-client] connect-server 10.10.10.10
    [*PE2-pce-client] quit
    [*PE2] commit

    After the configuration is complete, the PEs can receive the SR-MPLS TE Policy paths delivered by the controller and then generate SR-MPLS TE Policies. You can run the display sr-te policy command to check SR-MPLS TE Policy information. The following example uses the command output on PE1.

    [~PE1] display sr-te policy
    PolicyName : policy200
    Endpoint             : 3.3.3.9              Color                : 200
    TunnelId             : 1                    TunnelType           : SR-TE Policy
    Binding SID          : -                    MTU                  : 1000
    Policy State         : Up                   State Change Time    : 2023-02-18 15:37:15
    Admin State          : Up                   Traffic Statistics   : Disable
    BFD                  : Disable              Backup Hot-Standby   : Disable
    DiffServ-Mode        : -
    Active IGP Metric    : -
    Candidate-path Count : 1                    
    
    Candidate-path Preference: 200
    Path State           : Active               Path Type            : Primary
    Protocol-Origin      : PCEP(10)             Originator           : 0, 0.0.0.0
    Discriminator        : 200                  Binding SID          : -
    GroupId              : 2                    Policy Name          : policy200
    Template ID          : -
    Active IGP Metric    : -                              ODN Color            : -
    Metric               :
     IGP Metric          : -                              TE Metric            : -
     Delay Metric        : -                              Hop Counts           : -
    Segment-List Count   : 1
     Segment-List        : pe1
      Segment-List ID    : 129                  XcIndex              : 68
      List State         : Up                   BFD State            : -
      EXP                : -                    TTL                  : -
      DeleteTimerRemain  : -                    Weight               : 1
      Label : 330000, 330002
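The Label field in the output above is the segment list imposed on packets: each label is an adjacency SID, and the node that owns the top label pops it and forwards the packet over the corresponding adjacency. The following toy Python model illustrates this hop-by-hop behavior; the SID-to-next-hop mapping is assumed for illustration and does not reflect the SIDs actually allocated in this example.

```python
# Toy model of adjacency-SID forwarding; the mapping is assumed, not real.
adj_sid = {
    "PE1": {330000: "P1"},   # PE1's adjacency SID toward P1 (assumed)
    "P1":  {330002: "PE2"},  # P1's adjacency SID toward PE2 (assumed)
}

def forward(ingress, label_stack):
    """Return the node path a packet takes given an imposed label stack."""
    node, path = ingress, [ingress]
    for label in label_stack:
        node = adj_sid[node][label]  # pop the top label, cross that adjacency
        path.append(node)
    return path

print(forward("PE1", [330000, 330002]))  # ['PE1', 'P1', 'PE2']
```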

  12. Create a VPN instance and enable the IPv4 address family on each PE. Then, bind each PE's interface connected to a CE to the corresponding VPN instance.

    # Configure PE1.

    [~PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
    [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both evpn
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] interface gigabitethernet2/0/0
    [*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
    [*PE1-GigabitEthernet2/0/0] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
    [*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both evpn
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] interface gigabitethernet2/0/0
    [*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
    [*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
    [*PE2-GigabitEthernet2/0/0] quit
    [*PE2] commit

  13. Configure a tunnel selection policy on each PE to preferentially select an SR-MPLS TE Policy.

    # Configure PE1.

    [~PE1] tunnel-policy p1
    [*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE1-tunnel-policy-p1] quit
    [*PE1] ip vpn-instance vpna
    [*PE1-vpn-instance-vpna] evpn mpls routing-enable
    [*PE1-vpn-instance-vpna] ipv4-family
    [*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1 evpn
    [*PE1-vpn-instance-vpna-af-ipv4] quit
    [*PE1-vpn-instance-vpna] quit
    [*PE1] commit

    # Configure PE2.

    [~PE2] tunnel-policy p1
    [*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
    [*PE2-tunnel-policy-p1] quit
    [*PE2] ip vpn-instance vpna
    [*PE2-vpn-instance-vpna] evpn mpls routing-enable
    [*PE2-vpn-instance-vpna] ipv4-family
    [*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1 evpn
    [*PE2-vpn-instance-vpna-af-ipv4] quit
    [*PE2-vpn-instance-vpna] quit
    [*PE2] commit

  14. Establish an EBGP peer relationship between each CE-PE pair.

    # Configure CE1.

    <HUAWEI> system-view
    [~HUAWEI] sysname CE1
    [*HUAWEI] commit
    [~CE1] interface loopback 1
    [*CE1-LoopBack1] ip address 10.11.1.1 32
    [*CE1-LoopBack1] quit
    [*CE1] interface gigabitethernet1/0/0
    [*CE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
    [*CE1-GigabitEthernet1/0/0] quit
    [*CE1] bgp 65410
    [*CE1-bgp] peer 10.1.1.2 as-number 100
    [*CE1-bgp] network 10.11.1.1 32
    [*CE1-bgp] quit
    [*CE1] commit

    The configuration of CE2 is similar to the configuration of CE1. For configuration details, see the configuration file.

    After completing the configuration, run the display ip vpn-instance verbose command on the PEs to check VPN instance configurations. Check that each PE can ping its connected CE.

    If a PE has multiple interfaces bound to the same VPN instance, use the -a source-ip-address parameter to specify a source IP address when running the ping -vpn-instance vpn-instance-name -a source-ip-address dest-ip-address command to ping the CE that is connected to the remote PE. If the source IP address is not specified, the ping operation may fail.

    # Configure PE1.

    [~PE1] bgp 100
    [~PE1-bgp] ipv4-family vpn-instance vpna
    [*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
    [*PE1-bgp-vpna] import-route direct
    [*PE1-bgp-vpna] advertise l2vpn evpn
    [*PE1-bgp-vpna] commit
    [~PE1-bgp-vpna] quit
    [~PE1-bgp] quit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see the configuration file.

    After completing the configuration, run the display bgp vpnv4 vpn-instance peer command on the PEs to check whether BGP peer relationships have been established between the PEs and CEs. If the Established state is displayed in the command output, the BGP peer relationships have been established successfully.

    The following example uses the peer relationship between PE1 and CE1.

    [~PE1] display bgp vpnv4 vpn-instance vpna peer
     BGP local router ID : 1.1.1.9
     Local AS number : 100
    
     VPN-Instance vpna, Router ID 1.1.1.9:
     Total number of peers : 1                 Peers in established state : 1
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down       State  PrefRcv
      10.1.1.1        4       65410       91       90     0 01:15:39 Established        1

  15. Verify the configuration.

    After completing the configuration, run the display ip routing-table vpn-instance command on each PE to check information about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table: vpna
             Destinations : 7        Routes : 7
    Destination/Mask    Proto  Pre  Cost     Flags NextHop         Interface
         10.1.1.0/24    Direct 0    0        D     10.1.1.2        GigabitEthernet2/0/0
         10.1.1.2/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
       10.1.1.255/32    Direct 0    0        D     127.0.0.1       GigabitEthernet2/0/0
    10.11.1.1/32    EBGP   255  0        RD    10.1.1.1        GigabitEthernet2/0/0
    10.22.2.2/32    IBGP   255  0        RD    3.3.3.9         policy200
          127.0.0.0/8   Direct 0    0        D     127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct 0    0        D     127.0.0.1       InLoopBack0

    Run the display ip routing-table vpn-instance vpna verbose command on each PE to check details about the loopback interface route toward a CE.

    The following example uses the command output on PE1.

    [~PE1] display ip routing-table vpn-instance vpna 10.22.2.2 verbose
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : vpna
    Summary Count : 1
    
    Destination: 10.22.2.2/32         
         Protocol: IBGP               Process ID: 0              
       Preference: 255                      Cost: 0              
          NextHop: 3.3.3.9             Neighbour: 3.3.3.9
            State: Active Adv Relied         Age: 01h18m38s           
              Tag: 0                    Priority: low            
            Label: 48180                 QoSInfo: 0x0           
       IndirectID: 0x10000B9            Instance:                                 
     RelayNextHop: 0.0.0.0             Interface: policy200
         TunnelID: 0x000000003200000041 Flags: RD

    The command output shows that the VPN route has successfully recursed to the specified SR-MPLS TE Policy.

    CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at 10.22.2.2.

    [~CE1] ping -a 10.11.1.1 10.22.2.2
      PING 10.22.2.2: 56  data bytes, press CTRL_C to break
        Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
        Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=34 ms
        Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=50 ms
        Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=34 ms
      --- 10.22.2.2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 34/48/72 ms  

Configuration Files

  • PE1 configuration file

    #
    sysname PE1
    #
    te attribute enable
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 100:1
      apply-label per-instance
      vpn-target 111:1 export-extcommunity evpn
      vpn-target 111:1 import-extcommunity evpn
      tnl-policy p1 evpn
      evpn mpls routing-enable
    #
    mpls lsr-id 1.1.1.9
    #               
    mpls            
    #               
    segment-routing 
     on-demand color 200
      dynamic-computation-seq pcep
      constraint-template tp200
      candidate-path preference 100
    #               
    isis 1          
     is-level level-1
     cost-style wide
     bgp-ls enable level-1
     network-entity 10.0000.0000.0001.00
     traffic-eng level-1 
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.1 255.255.255.0
     isis enable 1  
     te metric 20
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.1.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.11.1.1 255.255.255.0
     isis enable 1  
     te metric 10
    #               
    interface GigabitEthernet4/0/0
     undo shutdown  
     ip address 10.3.1.1 255.255.255.0
     isis enable 1
    #               
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 10
    #               
    bgp 100         
     peer 3.3.3.9 as-number 100
     peer 3.3.3.9 connect-interface LoopBack1
     peer 10.3.1.2 as-number 100
     #              
     ipv4-family unicast
      undo synchronization
      peer 3.3.3.9 enable
      peer 10.3.1.2 enable
     #
     link-state-family unicast
      peer 10.3.1.2 enable
     #              
     l2vpn-family evpn
      peer 3.3.3.9 enable
      peer 3.3.3.9 route-policy color100 export
      peer 3.3.3.9 advertise irb
     #              
     ipv4-family vpn-instance vpna
      import-route direct
      peer 10.1.1.1 as-number 65410
      advertise l2vpn evpn
    #
    segment-routing policy constraint-template tp200
     metric-type te
     max-cumulation te 20
    # 
    pce-client
     capability segment-routing
     connect-server 10.10.10.10
    #
    route-policy color100 permit node 1
     apply extcommunity color 0:100
    #               
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    evpn source-address 1.1.1.9
    #
    return
  • P1 configuration file

    #
    sysname P1
    #
    te attribute enable
    #
    mpls lsr-id 2.2.2.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0002.00
     traffic-eng level-1 
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.11.1.2 255.255.255.0
     isis enable 1  
     te metric 10
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.12.1.1 255.255.255.0
     isis enable 1
     te metric 10
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.23.1.1 255.255.255.0
     isis enable 1 
     te metric 100
    #               
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 20
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    te attribute enable
    #
    ip vpn-instance vpna
     ipv4-family
      route-distinguisher 200:1
      apply-label per-instance
      vpn-target 111:1 export-extcommunity evpn
      vpn-target 111:1 import-extcommunity evpn
      tnl-policy p1 evpn
      evpn mpls routing-enable
    #
    mpls lsr-id 3.3.3.9
    #               
    mpls   
    #               
    segment-routing
     on-demand color 100
      dynamic-computation-seq pcep
      constraint-template tp100
      candidate-path preference 100
    #               
    isis 1          
     is-level level-1
     cost-style wide
     bgp-ls enable level-1
     network-entity 10.0000.0000.0003.00
     traffic-eng level-1 
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.14.1.2 255.255.255.0
     isis enable 1  
     te metric 20
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip binding vpn-instance vpna
     ip address 10.2.1.2 255.255.255.0
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.12.1.2 255.255.255.0
     isis enable 1  
     te metric 10
    #               
    interface GigabitEthernet4/0/0
     undo shutdown  
     ip address 10.4.1.1 255.255.255.0
     isis enable 1 
    #               
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 30
    #               
    bgp 100         
     peer 1.1.1.9 as-number 100
     peer 1.1.1.9 connect-interface LoopBack1
     peer 10.4.1.2 as-number 100 
     #              
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.9 enable
      peer 10.4.1.2 enable
     #
     link-state-family unicast
      peer 10.4.1.2 enable
     #              
     l2vpn-family evpn
      peer 1.1.1.9 enable
      peer 1.1.1.9 route-policy color200 export
      peer 1.1.1.9 advertise irb
     #              
     ipv4-family vpn-instance vpna
      import-route direct
      peer 10.2.1.1 as-number 65420
      advertise l2vpn evpn
    #
    segment-routing policy constraint-template tp100
     metric-type te
     max-cumulation te 20
    # 
    pce-client
     capability segment-routing
     connect-server 10.10.10.10
    #
    route-policy color200 permit node 1
     apply extcommunity color 0:200
    #               
    tunnel-policy p1
     tunnel select-seq sr-te-policy load-balance-number 1 unmix
    #
    evpn source-address 3.3.3.9
    #
    return
  • P2 configuration file

    #
    sysname P2
    #
    te attribute enable
    #
    mpls lsr-id 4.4.4.9
    #               
    mpls            
    #               
    segment-routing 
    #               
    isis 1          
     is-level level-1
     cost-style wide
     network-entity 10.0000.0000.0004.00
     traffic-eng level-1 
     segment-routing mpls
     segment-routing global-block 16000 23999
    #               
    interface GigabitEthernet1/0/0
     undo shutdown  
     ip address 10.13.1.2 255.255.255.0
     isis enable 1  
     te metric 20
    #               
    interface GigabitEthernet2/0/0
     undo shutdown  
     ip address 10.14.1.1 255.255.255.0
     isis enable 1 
     te metric 20 
    #               
    interface GigabitEthernet3/0/0
     undo shutdown  
     ip address 10.23.1.2 255.255.255.0
     isis enable 1
     te metric 100
    #               
    interface LoopBack1
     ip address 4.4.4.9 255.255.255.255
     isis enable 1  
     isis prefix-sid index 40
    #
    return
  • Controller configuration file

    #
    sysname Controller
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.3.1.2 255.255.255.0
    #
    interface GigabitEthernet2/0/0
     undo shutdown
     ip address 10.4.1.2 255.255.255.0
    #               
    interface LoopBack1
     ip address 10.10.10.10 255.255.255.255
    #
    bgp 100
     peer 10.3.1.1 as-number 100
     peer 10.4.1.1 as-number 100
     #
     ipv4-family unicast
      undo synchronization
      peer 10.3.1.1 enable
      peer 10.4.1.1 enable
     #
     link-state-family unicast
      peer 10.3.1.1 enable
      peer 10.4.1.1 enable
    #
    return
  • CE1 configuration file

    #
    sysname CE1
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.11.1.1 255.255.255.255
    #
    bgp 65410
     peer 10.1.1.2 as-number 100
     #
     ipv4-family unicast
      network 10.11.1.1 255.255.255.255
      peer 10.1.1.2 enable
    #
    return
  • CE2 configuration file

    #
    sysname CE2
    #
    interface GigabitEthernet1/0/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    interface LoopBack1
     ip address 10.22.2.2 255.255.255.255
    #
    bgp 65420
     peer 10.2.1.2 as-number 100
     #
     ipv4-family unicast
      network 10.22.2.2 255.255.255.255
      peer 10.2.1.2 enable
    #
    return
Update Date:2024-04-01
Document ID:EDOC1100335698