NetEngine 8000 F1A V800R022C00SPC600 Configuration Guide
MPLS TE Configuration
- MPLS TE Description
- Overview of MPLS TE
- MPLS TE Fundamentals
- Tunnel Optimization
- IP-Prefix Tunnel
- MPLS TE Reliability
- MPLS TE Security
- DS-TE
- Entropy Label
- Checking the Source Interface of a Static CR-LSP
- Static Bidirectional Co-routed LSPs
- Associated Bidirectional CR-LSPs
- CBTS
- P2MP TE
- Application Scenarios for MPLS TE
- Terminology for MPLS TE
- MPLS TE Configuration
- Overview of MPLS TE
- Configuration Precautions for MPLS TE
- Configuring Static CR-LSP
- Enabling MPLS TE
- (Optional) Configuring Link Bandwidth
- Configuring the MPLS TE Tunnel Interface
- (Optional) Configuring Global Dynamic Bandwidth Pre-Verification
- Configuring the Ingress of the Static CR-LSP
- (Optional) Configuring the Transit Node of the Static CR-LSP
- Configuring the Egress of the Static CR-LSP
- (Optional) Configuring a Device to Check the Source Interface of a Static CR-LSP
- Verifying the Static CR-LSP Configuration
- Configuring a Static Bidirectional Co-routed LSP
- Enabling MPLS TE
- (Optional) Configuring Link Bandwidth
- Configuring a Tunnel Interface on the Ingress
- (Optional) Configuring Global Dynamic Bandwidth Pre-verification
- Configuring the Ingress of a Static Bidirectional Co-routed LSP
- (Optional) Configuring a Transit Node of a Static Bidirectional Co-routed LSP
- Configuring the Egress of a Static Bidirectional Co-routed CR-LSP
- Configuring the Tunnel Interface on the Egress
- Verifying the Configuration of a Static Bidirectional Co-routed LSP
- Configuring an Associated Bidirectional CR-LSP
- Configuring CR-LSP Backup
- Configuring CR-LSP Backup Parameters
- (Optional) Configuring a Best-effort Path
- (Optional) Configuring a Traffic Switching Policy for a Hot-Standby CR-LSP
- (Optional) Configuring a Manual Switching Mechanism for a Primary/Hot-Standby CR-LSP
- (Optional) Configuring CSPF Fast Switching
- (Optional) Enabling the Coexistence of Rapid FRR Switching and MPLS TE HSB
- Verifying the CR-LSP Backup Configuration
- Configuring Static BFD for CR-LSP
- Configuring Dynamic BFD for CR-LSP
- Configuring an RSVP-TE Tunnel
- Enabling MPLS TE and RSVP-TE
- Configuring CSPF
- Configuring IGP TE (OSPF or IS-IS)
- (Optional) Configuring TE Attributes for a Link
- (Optional) Configuring an Explicit Path
- (Optional) Disabling TE LSP Flapping Suppression
- Configuring an MPLS TE Tunnel Interface
- (Optional) Configuring Soft Preemption for RSVP-TE Tunnels
- (Optional) Configuring Graceful Shutdown
- Verifying the RSVP-TE Tunnel Configuration
- Configuring an Automatic RSVP-TE Tunnel
- Enabling MPLS TE and RSVP-TE
- (Optional) Configuring CSPF
- Configuring IGP TE (OSPF or IS-IS)
- Configuring the Automatic RSVP-TE Tunnel Capability on a PCC
- Configuring Dynamic BFD For Initiated RSVP-TE LSP
- Configuring Dynamic BFD for Initiated RSVP-TE Tunnel
- (Optional) Enabling Traffic Statistics Collection for Automatic Tunnels
- Verifying the Automatic RSVP-TE Tunnel Configuration
- Adjusting RSVP Signaling Parameters
- Configuring Dynamic BFD for RSVP
- Configuring Self-Ping for RSVP-TE
- Configuring RSVP Authentication
- Configuring Whitelist Session-CAR for RSVP-TE
- Configuring Micro-Isolation Protocol CAR for RSVP-TE
- Configuring an RSVP GR Helper
- Configuring the Entropy Label for Tunnels
- Configuring an LSR to Deeply Parse IP Packets
- Enabling the Entropy Label Capability on the Egress of an LSP
- Configuring the Entropy Label for Global Tunnels
- (Optional) Configuring an Entropy Label Capability for a Tunnel in the Tunnel Interface View
- Verifying the Configuration of the Entropy Label for Tunnels
- Configuring the IP-Prefix Tunnel Function
- Configuring Dynamic Bandwidth Reservation
- Adjusting Parameters for Establishing an MPLS TE Tunnel
- Configuring an MPLS TE Explicit Path
- Setting Priority Values for an MPLS TE Tunnel
- Setting the Hop Limit for a CR-LSP
- Associating CR-LSP Establishment with the Overload Setting
- Configuring Route and Label Record
- Setting Switching and Deletion Delays
- Verifying the Configuration of Establishment of MPLS TE Tunnel
- Importing Traffic to an MPLS TE Tunnel
- Configuring Static BFD for TE
- Configuring MPLS TE Manual FRR
- Configuring MPLS TE Auto FRR
- Configuring MPLS Detour FRR
- Disabling MPLS Detour FRR
- Configuring a Tunnel Protection Group
- Configuring an MPLS TE Associated Tunnel Group
- Configuring Bandwidth Information Flooding for MPLS TE
- Configuring the Limit Rate of MPLS TE Traffic
- Configuring Tunnel Re-optimization
- Configuring Isolated LSP Computation
- Configuring Automatic Tunnel Bandwidth Adjustment
- Disabling Automatic Bandwidth Configuration for Physical Interfaces
- Disabling the Rerouting Function
- Locking the Tunnel Configuration
- Configuring P2MP TE Tunnels
- Enabling P2MP TE Globally
- (Optional) Disabling P2MP TE on an Interface
- (Optional) Setting Leaf Switching and Deletion Delays
- Configuring Leaf Lists
- Configuring a P2MP TE Tunnel Interface
- (Optional) Configuring a P2MP Tunnel Template
- (Optional) Configuring a P2MP TE Tunnel to Support Soft Preemption
- (Optional) Configuring the Reliability Enhancement Function for a P2MP Tunnel
- Verifying the P2MP TE Tunnel Configuration
- Configuring BFD for P2MP TE
- Configuring P2MP TE FRR
- Configuring P2MP TE Auto FRR
- Configuring DS-TE
- Maintaining MPLS TE
- Checking Connectivity of a TE Tunnel
- Checking a TE Tunnel Using NQA
- Checking Tunnel Error Information
- Deleting RSVP-TE Statistics
- Resetting the RSVP Process
- Deleting an Automatic Bypass Tunnel and Re-establishing a New One
- Loopback Detection for a Specified Static Bidirectional Co-Routed CR-LSP
- Enabling the Packet Loss-Free MPLS ECMP Switchback
- Configuration Examples for MPLS TE
- Example for Establishing a Static MPLS TE Tunnel
- Example for Configuring a Static Bidirectional Co-routed CR-LSP
- Example for Configuring an Associated Bidirectional Static CR-LSP
- Example for Configuring an RSVP-TE Tunnel
- Example for Configuring an RSVP-TE over GRE Tunnel
- Example for Configuring RSVP Authentication
- Example for Configuring the IP-Prefix Tunnel Function to Automatically Establish MPLS TE Tunnels in a Batch
- Example for Configuring the Affinity Attribute of an MPLS TE Tunnel
- Example for Configuring an Inter-area Tunnel
- Example for Configuring the Threshold for Flooding Bandwidth Information
- Example for Configuring MPLS TE Manual FRR
- Example for Configuring MPLS TE Auto FRR
- Example for Configuring MPLS Detour FRR
- Example for Configuring a Hot-standby CR-LSP
- Example for Configuring a Tunnel Protection Group Consisting of Bidirectional Co-routed CR-LSPs
- Example for Configuring Isolated LSP Computation
- Example for Configuring Static BFD for CR-LSP
- Example for Configuring Dynamic BFD for CR-LSP
- Example for Configuring Static BFD for TE
- Example for Configuring BFD for RSVP
- Example for Configuring a P2MP TE Tunnel
- Example for Configuring the IETF DS-TE Mode (RDM)
- Example for Configuring CBTS in an L3VPN over TE Scenario
- Example for Configuring CBTS in an L3VPN over LDP over TE Scenario
- Example for Configuring CBTS in a VLL over TE Scenario
- Example for Configuring CBTS in a VPLS over TE Scenario
MPLS TE Description
Overview of MPLS TE
Multiprotocol Label Switching (MPLS) traffic engineering (TE) effectively schedules, allocates, and utilizes existing network resources to provide sufficient bandwidth and support for quality of service (QoS). MPLS TE helps carriers minimize expenditures without requiring hardware upgrades. Because MPLS TE is implemented based on MPLS, it is easy to deploy and maintain on existing networks. MPLS TE supports various reliability techniques, which help backbone networks achieve carrier-class and device-class reliability.
Definition
Function Module | Description
---|---
Basic function | Basic MPLS TE functions include basic MPLS TE settings and the tunnel establishment capability.
Tunnel optimization | Tunnel optimization allows existing tunnels to be reestablished over other paths if the topology changes, or to be reestablished with updated bandwidth if service bandwidth values change.
Reliability | MPLS TE provides various reliability functions, including path protection, local protection, and node protection.
Security | RSVP authentication is implemented to improve the security of the signaling protocol on the MPLS TE network.
P2MP TE | P2MP TE is a promising solution for multicast service transmission. It helps carriers provide high TE capabilities and increased reliability on an IP/MPLS backbone network and reduce network operational expenditure (OPEX).
Purpose
TE techniques are common for carriers operating IP/MPLS bearer networks. These techniques can be used to prevent traffic congestion and uneven resource allocation. Take the network shown in Figure 1-2177 as an example.
A node on a conventional IP network selects the shortest path as an optimal route, regardless of other factors, for example, bandwidth. This easily causes the shortest path to be congested with traffic, whereas other available paths are idle.
Each link on the network shown in Figure 1-2177 has a bandwidth of 100 Mbit/s and the same metric value. LSRA sends traffic to LSRJ at 40 Mbit/s, and LSRG sends traffic to LSRJ at 80 Mbit/s. Traffic from both routers travels through the shortest path LSRA (LSRG) → LSRB → LSRC → LSRD → LSRI → LSRJ that is calculated by an IGP. As a result, the path LSRA (LSRG) → LSRB → LSRC → LSRD → LSRI → LSRJ may be congested because of overload, whereas the path LSRA (LSRG) → LSRB → LSRE → LSRF → LSRH → LSRI → LSRJ is idle.
Network congestion is a major cause of backbone network performance deterioration. It results either from insufficient resources or from incorrect resource allocation in a local area. In the former case, expanding network devices can prevent the problem. In the latter case, TE is used to direct some traffic to idle links so that traffic distribution is improved. TE dynamically monitors network traffic and the loads on network elements and adjusts the parameters for traffic management, routing, and resource constraints in real time, which prevents network congestion induced by load imbalance.
IP traffic engineering: It controls network traffic by adjusting the metric of a path. This method eliminates congestion only on some links. Adjusting a metric is difficult on a complex network because a link change affects multiple routes.
ATM traffic engineering: It uses an overlay network model and sets up virtual connections to guide some traffic. The overlay model provides a virtual topology over the physical topology of a network, which facilitates proper traffic scheduling and QoS. However, the overlay model has high extra overhead, poor scalability, and high operation costs for carriers.
Benefits
- Provides bandwidth and QoS guarantee for service traffic on the network.
- Optimizes bandwidth resource distribution on the network.
- Establishes public network tunnels to isolate virtual private network (VPN) traffic.
- Is easy to deploy and maintain as it is implemented based on existing MPLS techniques.
- Provides various reliability functions to implement carrier- and device-class reliability.
MPLS TE Fundamentals
Technology Overview
Related Concepts
Concept | Description
---|---
MPLS TE tunnel | MPLS TE often associates multiple LSPs with a virtual tunnel interface, and such a group of LSPs is called an MPLS TE tunnel. An MPLS TE tunnel is uniquely identified by a set of tunnel parameters.
CR-LSP | LSPs in an MPLS TE tunnel are generally called constraint-based routed label switched paths (CR-LSPs). Unlike Label Distribution Protocol (LDP) LSPs, which are established based on routing information only, CR-LSPs are established based on bandwidth and path constraints in addition to routing information.
MPLS TE Tunnel Establishment and Application
An MPLS TE tunnel is established using a series of protocol components, as shown in Table 1-918. They work in sequence during tunnel establishment.
No. | Name | Description
---|---|---
1 | Information advertisement | In addition to network topology information, TE requires network load information. MPLS TE introduces the information advertisement component by extending an existing IGP, so that TE information can be advertised. TE information includes the maximum link bandwidth, maximum reservable bandwidth, reserved bandwidth, and link colors. Each node collects TE information about all nodes in a local area and generates a traffic engineering database (TEDB).
2 | Path calculation | The path calculation component runs the Constrained Shortest Path First (CSPF) algorithm and uses data in the TEDB to calculate a path that satisfies specific constraints. Evolving from the Shortest Path First (SPF) algorithm, CSPF excludes nodes and links that do not satisfy specific constraints and uses SPF to calculate a path.
3 | Path establishment | The path establishment component establishes CR-LSPs over the calculated paths (for example, static CR-LSPs or CR-LSPs dynamically set up using a signaling protocol such as RSVP-TE).
4 | Traffic forwarding | The traffic forwarding component imports traffic to MPLS TE tunnels and forwards the traffic based on MPLS. The preceding three components are enough for setting up an MPLS TE tunnel. However, an MPLS TE tunnel cannot automatically import traffic after being set up. Instead, it requires the traffic forwarding component to import traffic to the tunnel.
An MPLS TE network administrator only needs to configure link attributes based on link resource status and tunnel attributes based on service needs and network planning. MPLS TE can then automatically establish tunnels based on the configurations. After tunnels are set up and traffic import is configured, traffic can then be forwarded along tunnels.
Information Advertisement Component
The information advertisement component is used to advertise network resource information to all nodes, including ingresses, on an MPLS TE network, to determine the paths and nodes that MPLS TE tunnels pass through. In this way, TE can be implemented to control network traffic distribution, improving network resource utilization.
Related Concepts
Concept | Description
---|---
Total link bandwidth | Total bandwidth of a physical link, which needs to be manually configured.
Maximum reservable bandwidth | Maximum bandwidth that a link can reserve for MPLS TE tunnels. The maximum reservable bandwidth must be lower than or equal to the total bandwidth of the link. Manually configure the maximum reservable bandwidth based on the bandwidth usage of the link when MPLS TE is used.
TE metric | A TE metric is used in TE tunnel path calculation, allowing the calculation process to be independent of IGP route-based path calculation. By default, the IGP metric is used as the TE metric.
SRLG | A shared risk link group (SRLG) is a set of links that share a common physical resource (such as a fiber). Links in an SRLG are at the same risk of faults: if one of the links fails, the other links in the SRLG also fail. SRLGs are mainly used in hot-standby CR-LSP and TE FRR scenarios to enhance TE tunnel reliability. For details about SRLGs, see SRLG.
Link administrative group | A link administrative group, also called a link color, is a 128-bit vector. Each bit can be associated with a meaning as desired, such as link bandwidth, a performance parameter (for example, the delay), or a management policy. The policy can be a traffic type (multicast, for example) or a flag indicating that a link is used by an MPLS TE tunnel. The link administrative group attribute is used together with affinities to control the paths of tunnels.
Contents to Be Advertised
Link status information: interface IP addresses, link types, and link metric values, which are collected by an IGP
Bandwidth information, such as total link bandwidth and maximum reservable bandwidth
TE metric: TE link metric, which is the same as the IGP metric by default
Link administrative group
SRLG
Advertisement Methods
When to Advertise Information
OSPF TE or IS-IS TE floods link information so that each node can save area-wide link information to a traffic engineering database (TEDB). Information flooding is triggered by the establishment of an MPLS TE tunnel, or one of the following conditions:
A specific IGP TE flooding interval elapses.
A link is activated or deactivated.
A CR-LSP fails to be established for an MPLS TE tunnel because no adequate bandwidth can be reserved.
Link attributes, such as the administrative group attribute or affinity attribute, change.
The link bandwidth changes.
When the available bandwidth of an MPLS interface changes, the system automatically updates information in the TEDB and floods it. When many tunnels are to be established on a node, the node reserves bandwidth repeatedly and therefore frequently updates and floods TEDB information. For example, if a link has a bandwidth of 100 Mbit/s and 100 TE tunnels, each with a bandwidth of 1 Mbit/s, are established over it, the system floods link information 100 times.
To help suppress the frequency at which TEDB information is updated and flooded, the flooding is triggered based on either of the following conditions:
The proportion of the bandwidth reserved for an MPLS TE tunnel to the available bandwidth in the TEDB is greater than or equal to a specific threshold.
The proportion of the bandwidth released by an MPLS TE tunnel to the available bandwidth in the TEDB is greater than or equal to a specific threshold.
If either of the preceding conditions is met, an IGP floods link bandwidth information, and CSPF updates the TEDB.
Assume that the available bandwidth of a link is 100 Mbit/s and 100 TE tunnels, each with bandwidth of 1 Mbit/s, are established over the link. The flooding threshold is 10%. Figure 1-2179 shows the proportion of the bandwidth reserved for each MPLS TE tunnel to the available bandwidth in the TEDB.
Bandwidth flooding is not performed when tunnels 1 to 9 are created. After tunnel 10 is created, the bandwidth information (10 Mbit/s in total) on tunnels 1 to 10 is flooded. The available bandwidth is 90 Mbit/s. Similarly, no bandwidth information is flooded after tunnels 11 to 18 are created. After tunnel 19 is created, bandwidth information of tunnels 11 to 19 is flooded. The process repeats until tunnel 100 is established.
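The trigger logic above can be illustrated with a short sketch. The following Python fragment is a simplified model, not device code; it uses the example values from this section (100 Mbit/s of available bandwidth, 1 Mbit/s per tunnel, a 10% threshold) and reports when flooding would occur.

```python
def simulate_flooding(available_bw=100.0, tunnel_bw=1.0, tunnels=100, threshold=0.10):
    """Model the reserved-bandwidth flooding trigger described above.

    Flooding occurs when the bandwidth reserved since the last flood,
    divided by the available bandwidth currently recorded in the TEDB,
    reaches the threshold.
    """
    tedb_available = available_bw          # available bandwidth as seen by CSPF
    reserved_since_flood = 0.0
    flood_events = []

    for tunnel_id in range(1, tunnels + 1):
        reserved_since_flood += tunnel_bw
        if reserved_since_flood / tedb_available >= threshold:
            tedb_available -= reserved_since_flood   # IGP floods, TEDB is updated
            flood_events.append((tunnel_id, tedb_available))
            reserved_since_flood = 0.0
    return flood_events

for tunnel_id, remaining in simulate_flooding():
    print(f"flood after tunnel {tunnel_id}: TEDB available bandwidth {remaining:.0f} Mbit/s")
```

Running the sketch reproduces the example: the first flood occurs after tunnel 10 (leaving 90 Mbit/s in the TEDB) and the second after tunnel 19.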
Results Obtained After Information Advertisement
Every node creates a TEDB in an MPLS TE area after OSPF TE or IS-IS TE floods bandwidth information.
TE parameters are advertised during the deployment of an MPLS TE network. Every node collects TE link information in the MPLS TE area and saves it in a TEDB. The TEDB contains network link and topology attributes, including information about the constraints and bandwidth usage of each link. A node calculates the optimal path to another node in the MPLS TE area based on information in the TEDB. MPLS TE then establishes a CR-LSP over this optimal path. A TEDB and an IGP link state database (LSDB) are related as follows:
- Similarities: The two types of databases both collect routing information flooded by IGPs.
- Differences: A TEDB contains TE information in addition to all the information in an LSDB. An IGP uses information in an LSDB to calculate the shortest path, while MPLS TE uses information in a TEDB to calculate the optimal path.
Path Calculation Component
IS-IS or OSPF uses SPF to calculate the shortest paths between nodes. MPLS TE uses CSPF to calculate the optimal path to a specific node. CSPF is derived from SPF and supports constraints.
Related Concepts
Concept | Description
---|---
Tunnel bandwidth | Tunnel bandwidth needs to be planned and configured based on the services to be transmitted through a tunnel. When the tunnel is established, the configured bandwidth is reserved on each node along the tunnel, implementing bandwidth assurance.
Affinity | An affinity is a 128-bit vector that describes the links to be used by a TE tunnel. It is configured on and takes effect on the tunnel ingress, and is used together with the link administrative group attribute to manage link selection. After a tunnel is assigned an affinity, a device compares the affinity with the administrative group attribute during link selection and, based on the comparison result, determines whether to select a link with specified attributes. The values compared are: IncludeAny = the affinity value ANDed with the mask value; ExcludeAny = (NOT IncludeAny) ANDed with the mask value; the administrative group value = the administrative group value ANDed with the mask value. Matching rules based on these values determine whether a link can be selected. NOTE: Understand the specific comparison rules before deploying devices of different vendors, because the rules vary with vendors. A network administrator can use the link administrative group and affinities to control the paths over which MPLS TE tunnels are established.
Explicit path | An explicit path is used to establish a CR-LSP. Nodes to be included or excluded are specified on this path. Explicit paths are classified into strict and loose explicit paths.
Hop limit | A hop limit is a condition for path selection during CR-LSP establishment. Like the administrative group and affinity attributes, it constrains path selection by limiting the number of hops that a CR-LSP allows.
CSPF Fundamentals
CSPF works based on the following parameters:
Tunnel attributes configured on an ingress to establish a CR-LSP
TEDB
A TEDB can be generated only after IGP TE is configured. On an IGP TE-incapable network, CR-LSPs are established based on IGP routes, but not CSPF calculation results.
CSPF Calculation Process
The CSPF calculation process is as follows:
Links that do not meet tunnel attribute requirements in the TEDB are excluded.
SPF calculates the shortest path to a tunnel destination based on TEDB information.
CSPF attempts to use the OSPF TEDB to establish a path for a CR-LSP by default. If a path is successfully calculated using OSPF TEDB information, CSPF completes calculation and does not use the IS-IS TEDB to calculate a path. If path calculation fails, CSPF attempts to use IS-IS TEDB information to calculate a path.
CSPF can be configured to use the IS-IS TEDB to calculate a CR-LSP path. If path calculation fails, CSPF uses the OSPF TEDB to calculate a path.
CSPF calculates the shortest path to a destination. If there are several shortest paths with the same metric, CSPF uses a tie-breaking policy to select one of them. The following tie-breaking policies for selecting a path are available:
Most-fill: selects a link with the highest proportion of used bandwidth to the maximum reservable bandwidth, efficiently using bandwidth resources.
Least-fill: selects a link with the lowest proportion of used bandwidth to the maximum reservable bandwidth, evenly using bandwidth resources among links.
Random: selects links randomly, allowing LSPs to be established evenly over links, regardless of bandwidth distribution.
The Most-fill and Least-fill modes take effect only when the difference in bandwidth usage between two links exceeds 10%. For example, if the bandwidth usage of link A is 50% and that of link B is 45%, the difference is only 5%. In this case, the Most-fill and Least-fill modes do not take effect, and the Random mode is used instead.
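For illustration only, the following Python sketch applies the tie-breaking policies, including the 10% usage-difference condition described above, to two equal-metric candidate links; the link fields and function names are hypothetical, not the device implementation.

```python
import random

def bandwidth_usage(link):
    """Proportion of used bandwidth to the maximum reservable bandwidth."""
    return link["used_bw"] / link["max_reservable_bw"]

def tie_break(candidate_links, policy="random"):
    """Pick one of several equal-metric candidates.

    most-fill / least-fill only apply when the usage difference between the
    candidates exceeds 10%; otherwise the choice falls back to random.
    """
    usages = [bandwidth_usage(link) for link in candidate_links]
    if policy in ("most-fill", "least-fill") and max(usages) - min(usages) > 0.10:
        key = max if policy == "most-fill" else min
        return key(candidate_links, key=bandwidth_usage)
    return random.choice(candidate_links)

links = [
    {"name": "LinkA", "used_bw": 50, "max_reservable_bw": 100},  # 50% used
    {"name": "LinkB", "used_bw": 45, "max_reservable_bw": 100},  # 45% used
]
print(tie_break(links, "most-fill")["name"])  # difference is only 5% -> random choice
```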
Differences Between CSPF and SPF
CSPF is dedicated to calculating MPLS TE paths. It has similarities with SPF but they have the following differences:
CSPF calculates the shortest path between the ingress and egress, and SPF calculates the shortest path between a node and each of other nodes on a network.
CSPF uses metrics such as the bandwidth, link attributes, and affinity attributes, in addition to link costs, which are the only metric used by SPF.
CSPF does not support load balancing and uses one of three tie-breaking policies to determine a path if multiple paths have the same attributes.
Establishing a CR-LSP Using RSVP-TE
RSVP-TE is an extension of RSVP. RSVP is designed for the Integrated Services model and runs on every node along a path to reserve resources. RSVP is a control protocol that works at the transport layer but does not transmit application data. RSVP-TE establishes or tears down LSPs using TE attributes carried in extended objects.
RSVP-TE has the following characteristics:
Unidirectional: RSVP-TE only takes effect on traffic that travels from the ingress to the egress.
Receive end-oriented: A receive end initiates a request to reserve resources and maintains resource reservation information.
Soft state-based: RSVP uses a soft state mechanism to maintain the resource reservation information.
RSVP-TE Messages
RSVP-TE messages are as follows:
Path message: used to request downstream nodes to distribute labels. A Path message records path information on each node through which the message passes. The path information is used to establish a path state block (PSB) on a node.
Resv message: used to reserve resources at each hop of a path. A Resv message carries information about resources to be reserved. Each node that receives the Resv message reserves resources based on reservation information carried in the message. The reservation information is used to establish a reservation state block (RSB) and to record information about distributed labels.
PathErr message: sent upstream by an RSVP node if an error occurs during the processing of a Path message. A PathErr message is forwarded by every transit node and arrives at the ingress.
ResvErr message: sent downstream by an RSVP node if an error occurs during the processing of a Resv message. A ResvErr message is forwarded by every transit node and arrives at the egress.
PathTear message: sent downstream by the ingress to delete information about the local state created on every node of the path.
ResvTear message: sent upstream by the egress to delete the local reserved resources assigned to a path. After receiving the ResvTear message, the ingress sends a PathTear message to the egress.
Process of Establishing an LSP
Figure 1-2185 shows the process of establishing a CR-LSP. The process is as follows:
The ingress configured with RSVP-TE creates a PSB and sends a Path message to transit nodes.
After receiving the Path message, the transit node processes and forwards this message, and creates a PSB.
After receiving the Path message, the egress creates a PSB, uses bandwidth reservation information in the Path message to generate a Resv message, and sends the Resv message to the ingress.
After receiving the Resv message, the transit node processes and forwards the Resv message and creates an RSB.
After receiving the Resv message, the ingress creates an RSB and confirms that the resources are reserved successfully.
The ingress successfully establishes a CR-LSP to the egress.
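The following Python sketch is a conceptual model of this Path/Resv exchange (node names and data structures are illustrative, not the device implementation): a Path message creates a PSB on each node downstream, and the Resv message returned by the egress creates an RSB on each node upstream.

```python
def establish_cr_lsp(nodes):
    """Model the Path/Resv exchange used to set up a CR-LSP.

    nodes is an ordered list from ingress to egress, for example
    ["Ingress", "Transit", "Egress"].
    """
    state = {node: {"PSB": None, "RSB": None} for node in nodes}

    # The Path message travels downstream; every node creates a PSB.
    previous_hop = None
    for node in nodes:
        state[node]["PSB"] = {"session": "tunnel-1", "previous_hop": previous_hop}
        previous_hop = node

    # The egress generates a Resv message that travels upstream;
    # every node reserves resources, records a label, and creates an RSB.
    label = 1000
    for node in reversed(nodes):
        state[node]["RSB"] = {"session": "tunnel-1", "label": label}
        label += 1

    # When the ingress holds both a PSB and an RSB, the CR-LSP is up.
    ingress = nodes[0]
    return state[ingress]["PSB"] is not None and state[ingress]["RSB"] is not None

print(establish_cr_lsp(["Ingress", "Transit", "Egress"]))  # True
```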
Soft State Mechanism
The soft state mechanism enables RSVP nodes to periodically send Path and Resv messages to synchronize states (including states in the PSB and RSB) between RSVP neighboring nodes or to resend RSVP messages that have been dropped. If an RSVP node does not receive an RSVP message matching a specific state within a specified period of time, the RSVP node deletes the state from its state block.
A node can refresh a state in a state block and notify other nodes of the refreshed state. In a tunnel re-optimization scenario, if a route changes, the ingress establishes a new LSP. RSVP nodes along the new path send Path messages downstream to initialize PSBs and create new RSBs when the corresponding Resv messages are received. After the new path is established, the ingress sends a Tear message downstream to delete the soft states maintained on the nodes of the previous path.
Reservation Styles
A reservation style defines how a node reserves resources after receiving a request sent by an upstream node. The NetEngine 8000 F supports the following reservation styles:
Fixed filter (FF): defines a distinct bandwidth reservation for data packets from a particular transmit end.
Shared explicit (SE): defines a single reservation for a set of selected transmit ends. These transmit ends share one reservation, but the receive end assigns different labels to them.
RSVP Summary Refresh
The RSVP summary refresh (Srefresh) function enables a node to send digests of RSVP Refresh messages to maintain RSVP soft states and respond to RSVP soft state changes, which reduces the number of signaling packets used to maintain the RSVP soft states and optimizes bandwidth allocation.
Background
RSVP Refresh messages are used to synchronize path state block (PSB) and reservation state block (RSB) information between nodes. They can also be used to monitor the reachability between RSVP neighbors and maintain RSVP neighbor relationships. Because Path and Resv messages are large, sending many of them to establish a large number of CR-LSPs consumes a great deal of network resources. RSVP Srefresh can be used to address this problem.
Implementation
Message_ID extension and retransmission extension
The Srefresh extension builds on the Message_ID extension. According to the Message_ID extension mechanism defined in relevant standards, RSVP messages carry extended objects, including Message_ID and Message_ID_ACK objects. The two objects are used to confirm RSVP messages and support reliable RSVP message delivery.
The Message_ID object can also be used to provide an RSVP retransmission mechanism. For example, a node initializes a retransmission interval of Rf seconds after it sends an RSVP message carrying the Message_ID object. If the node receives no ACK message within Rf seconds, it retransmits the RSVP message after (1 + Delta) x Rf seconds. Delta, also called the retransmission increment value, determines the rate at which the sender increases the retransmission interval. The node keeps retransmitting the message until it receives an ACK message or the number of retransmissions reaches a configured limit.
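As a worked example of the retransmission timing, the sketch below computes the successive retransmission intervals; the parameter names and default values are illustrative only and do not correspond to actual command parameters.

```python
def retransmission_schedule(rf=0.5, delta=1.0, max_retries=3):
    """Compute the waiting interval before each retransmission.

    rf is the initial retransmission interval in seconds, delta controls how
    fast the interval grows, and max_retries caps the number of attempts.
    Illustrative values only.
    """
    intervals = []
    interval = rf
    for _ in range(max_retries):
        interval = (1 + delta) * interval   # each retransmission waits longer
        intervals.append(interval)
    return intervals

print(retransmission_schedule())  # [1.0, 2.0, 4.0] with rf=0.5 and delta=1.0
```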
Summary Refresh extension
The Summary Refresh extension allows Srefresh messages to be used to refresh the RSVP state, without requiring the transmission of standard Path or Resv messages.
Each Srefresh message carries a Message_ID object that contains multiple message IDs, each of which identifies a Path or Resv state to be refreshed. Message IDs carry sequence numbers; if a CR-LSP changes, the sequence number of the associated Message_ID increases.
Only a state that was previously advertised in Path or Resv messages containing Message_ID objects can be refreshed using the Srefresh extension.
After a node receives an Srefresh message, it compares each received Message_ID with the one saved in the corresponding local state block (PSB or RSB). If they are the same, the node does not change the state. If the received Message_ID is greater than the local one, the state has changed: the node sends a NACK message to the sender, refreshes the PSB or RSB based on the Path or Resv message, and updates the Message_ID.
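A minimal sketch of this comparison logic is shown below, assuming a simple mapping from sessions to Message_ID sequence numbers; the data structures are illustrative, not the device implementation.

```python
def process_srefresh(local_state_blocks, srefresh_message_ids):
    """Decide, per Message_ID in an Srefresh, whether a state needs refreshing.

    local_state_blocks maps a session key to the Message_ID sequence number
    currently stored in its PSB/RSB. Returns the sessions for which a NACK
    would be sent so that a full Path/Resv refresh follows.
    """
    needs_full_refresh = []
    for session, received_id in srefresh_message_ids.items():
        local_id = local_state_blocks.get(session)
        if local_id is None or received_id > local_id:
            # Unknown or newer state: request a standard Path/Resv refresh.
            needs_full_refresh.append(session)
        # received_id == local_id: state is unchanged, nothing to do.
    return needs_full_refresh

local = {"lsp-1": 7, "lsp-2": 3}
srefresh = {"lsp-1": 7, "lsp-2": 4}
print(process_srefresh(local, srefresh))  # ['lsp-2']
```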
RSVP Hello
The RSVP Hello extension can rapidly monitor the reachability of RSVP nodes. If an RSVP node becomes unreachable, TE FRR protection is triggered. The RSVP Hello extension can also monitor whether an RSVP GR neighboring node is in the restart process.
Background
RSVP Refresh messages are used to synchronize path state block (PSB) and reservation state block (RSB) information between nodes. They can also be used to monitor the reachability between RSVP neighbors and maintain RSVP neighbor relationships.
Using Path and Resv messages to monitor neighbor reachability delays a traffic switchover if a link fault occurs and therefore is slow. The RSVP Hello extension can address this problem.
Related Concepts
- RSVP Refresh messages: Although an MPLS TE tunnel is established using Path and Resv messages, RSVP nodes still send Path and Resv messages over the established tunnel to update the RSVP status. These Path and Resv messages are called RSVP Refresh messages.
- RSVP GR: ensures uninterrupted transmission on the forwarding plane while an AMB/SMB switchover is performed on the control plane. A GR helper assists a GR restarter in rapidly restoring the RSVP status.
- TE FRR: a local protection mechanism for MPLS TE tunnels. If a fault occurs on a tunnel, TE FRR rapidly switches traffic to a bypass tunnel.
Implementation
The principles of the RSVP Hello extension are as follows:
Hello handshake mechanism
LSRA and LSRB are directly connected on the network shown in Figure 1-2186.
If RSVP Hello is enabled on LSRA, LSRA sends a Hello Request message to LSRB.
After LSRB receives the Hello Request message and is also enabled with RSVP Hello, LSRB sends a Hello ACK message to LSRA.
After receiving the Hello ACK message, LSRA considers LSRB reachable.
Detecting neighbor loss
After a successful Hello handshake, LSRA and LSRB exchange Hello messages. If LSRB does not respond to three consecutive Hello Request messages sent by LSRA, LSRA considers LSRB lost and re-initializes the RSVP Hello process.
Detecting neighbor restart
If LSRA and LSRB are enabled with RSVP GR, and the Hello extension detects that LSRB is lost, LSRA waits for LSRB to send a Hello Request message carrying a GR extension. After receiving the message, LSRA starts the GR process on LSRB and sends a Hello ACK message to LSRB. After receiving the Hello ACK message, LSRB performs the GR process and restores the RSVP soft state. LSRA and LSRB exchange Hello messages to maintain the restored RSVP soft state.
There are two scenarios if a CR-LSP is set up between LSRs:
If GR is disabled and FRR is enabled, FRR switches traffic to a bypass CR-LSP after the Hello extension detects that the RSVP neighbor relationship is lost to ensure proper traffic transmission.
If GR is enabled, the GR process is performed.
Deployment Scenarios
The RSVP Hello extension applies to networks enabled with both RSVP GR and TE FRR.
Traffic Forwarding Component
The traffic forwarding component imports traffic to a tunnel and forwards traffic over the tunnel. Although the information advertisement, path selection, and path establishment components are used to establish a CR-LSP in an MPLS TE tunnel, a CR-LSP (unlike an LDP LSP) cannot automatically import traffic. The traffic forwarding component must be used to import traffic to the CR-LSP before it forwards traffic based on labels.
Static Route
A static route is the simplest way to direct traffic to a CR-LSP in an MPLS TE tunnel. A TE static route works in the same way as a common static route and uses a TE tunnel interface as its outbound interface.
Auto Route
With an auto route, an Interior Gateway Protocol (IGP) treats a CR-LSP in a TE tunnel as a logical link and uses it in path calculation. The tunnel interface is used as an outbound interface in the auto route, and the TE tunnel is considered a P2P link with a specified metric value. The following auto routes are supported:
IGP shortcut: A route related to a CR-LSP is not advertised to neighbor nodes, preventing other nodes from using the CR-LSP.
Forwarding adjacency: A route related to a CR-LSP is advertised to neighbor nodes, allowing these nodes to use the CR-LSP.
Forwarding adjacency allows tunnel information to be advertised based on IGP neighbor relationships.
If the forwarding adjacency is used, nodes on both ends of a CR-LSP must be in the same area.
The following example demonstrates the IGP shortcut and forwarding adjacency.
- The auto route is not used. LSRE uses LSRD as the next hop in a route to LSRA and a route to LSRB; LSRG uses LSRF as the next hop in a route to LSRA and a route to LSRB.
- The auto route is used. Either IGP shortcut or forwarding adjacency can be configured:
IGP shortcut is enabled for Tunnel 1. LSRE uses LSRD as the next hop in the route to LSRA and the route to LSRB; LSRG uses Tunnel 1 as the outbound interface in the route to LSRA and the route to LSRB. LSRG, unlike LSRE, uses Tunnel 1 in IGP path calculation.
The forwarding adjacency is used to advertise the route of Tunnel 1. LSRE uses LSRG as the next hop in the route to LSRA and the route to LSRB; LSRG uses Tunnel 1 as the outbound interface in the route to LSRA and the route to LSRB. Both LSRE and LSRG use Tunnel 1 in IGP path calculation.
Policy-based Routing
Policy-based routing (PBR) allows the system to select routes based on user-defined policies, which improves security and facilitates load balancing. If PBR is enabled on an MPLS network, IP packets are forwarded over specific CR-LSPs based on PBR rules.
MPLS TE PBR, the same as IP unicast PBR, is implemented based on a set of matching rules and behaviors. The rules and behaviors are defined using an apply clause, in which the outbound interface is a specific tunnel interface. If packets do not match PBR rules, they are properly forwarded using IP; if they match PBR rules, they are forwarded over specific CR-LSPs.
Tunnel Policy
- Select-seq mode: The system selects tunnels for VPN traffic in the specified tunnel selection sequence.
- Tunnel binding mode: A CR-LSP is bound to a destination address in a tunnel policy. This policy applies only to CR-LSPs.
Priorities and Preemption
Priorities and preemption are used to allow TE tunnels to be established preferentially to transmit important services, preventing resource competition during tunnel establishment.
- Hard preemption: A CR-LSP with a higher priority can directly preempt resources assigned to a CR-LSP with a lower priority. Some traffic on the lower-priority CR-LSP is dropped during the hard preemption process, and the lower-priority CR-LSP is deleted immediately after its resources are preempted.
- Soft preemption: A CR-LSP with a higher priority can directly preempt resources assigned to a CR-LSP with a lower priority, but the CR-LSP with a lower priority is not deleted immediately after its resources are preempted. During the soft preemption process, the bandwidth assigned to the CR-LSP with a lower priority gradually decreases to 0 kbit/s. Some traffic is forwarded while some may be dropped on the CR-LSP with a lower priority. The CR-LSP with a lower priority is deleted after the soft preemption timer expires.
CR-LSPs use setup and holding priorities to determine whether to preempt resources. Both the setup and holding priority values range from 0 to 7. The smaller the value, the higher the priority. If only the setup priority is configured, the holding priority takes the same value as the setup priority. The setup priority of a tunnel must not be higher than its holding priority; that is, the setup priority value must be greater than or equal to the holding priority value.
The priority and preemption attributes are used in conjunction to determine resource preemption among tunnels. If multiple CR-LSPs are to be established, CR-LSPs with high priorities can be established by preempting resources. If resources (such as bandwidth) are insufficient, a CR-LSP with a higher setup priority can preempt resources of an established CR-LSP with a lower holding priority.
- Tunnel 1: established over the path LSRA → LSRF → LSRD. Its bandwidth is 155 Mbit/s, and its setup and holding priority values are 0.
- Tunnel 2: established over the path LSRB → LSRF → LSRC. Its bandwidth is 155 Mbit/s, and its setup and holding priority values are 7.
- If hard preemption is used, since Tunnel 1 has a higher priority than Tunnel 2, LSRF sends an RSVP message to tear down Tunnel 2. As a result, some traffic on Tunnel 2 is dropped if Tunnel 2 is transmitting traffic.
- In soft preemption mode, LSRB does not tear down the original Tunnel 2 immediately after receiving a Resv message from LSRF. Instead, Tunnel 2 is reestablished along the path LSRB → LSRD → LSRE → LSRC, and the original Tunnel 2 is torn down after the traffic switchover is complete.
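The preemption rule described above can be summarized in a short sketch; the function below is an illustrative model (not device behavior) that checks whether a new CR-LSP may preempt an established one based on their setup and holding priority values.

```python
def can_preempt(new_setup_priority, existing_holding_priority):
    """Return True if the new CR-LSP may take resources from an existing one.

    Priorities range from 0 to 7; a smaller value means a higher priority.
    Preemption is allowed only when the new CR-LSP's setup priority is
    higher (numerically smaller) than the existing CR-LSP's holding priority.
    """
    if not (0 <= new_setup_priority <= 7 and 0 <= existing_holding_priority <= 7):
        raise ValueError("priority values must be in the range 0-7")
    return new_setup_priority < existing_holding_priority

# Tunnel 1 (setup/holding 0) versus Tunnel 2 (setup/holding 7):
print(can_preempt(new_setup_priority=0, existing_holding_priority=7))  # True
print(can_preempt(new_setup_priority=7, existing_holding_priority=0))  # False
```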
Affinity Naming Function
The affinity naming function simplifies the configuration of tunnel affinities and link administrative group attributes. Using this function, you can query whether a tunnel affinity matches a link administrative group attribute.
Background
A tunnel affinity and a link administrative group attribute are configured as hexadecimal numbers. An IGP (IS-IS or OSPF) advertises the administrative group attribute to devices in the same IGP area. RSVP-TE advertises the tunnel affinity to downstream devices. CSPF on the ingress checks whether administrative group bits match affinity bits to determine whether a link can be used to establish an LSP.
Hexadecimal calculations are complex, and maintaining and querying tunnels established using hexadecimal values is difficult. To address this issue, the NetEngine 8000 F allows you to assign names (such as colors) to the 128 bits in the affinity attribute. Naming affinity bits helps verify that tunnel affinity bits match link administrative group bits, facilitating network planning and deployment.
Implementation
Bits in a link administrative group must also be configured with the same names as the affinity bits.
- IncludeAny: CSPF includes a link when calculating a path, if at least one link administrative group bit has the same name as an affinity bit.
- ExcludeAny: CSPF excludes a link when calculating a path, if any link administrative group bit has the same name as an affinity bit.
- IncludeAll: CSPF includes a link when calculating a path, only if each link administrative group bit has the same name as each affinity bit.
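A minimal sketch of these matching rules follows, using sets of bit names; the color names and function signature are illustrative assumptions, not the device's CSPF implementation.

```python
def link_passes_affinity(link_admin_group, include_any=None,
                         exclude_any=None, include_all=None):
    """Check a link's administrative group names against named affinity rules.

    All arguments are sets of bit names (for example {"red", "green"}).
    Simplified model of the IncludeAny / ExcludeAny / IncludeAll rules.
    """
    if exclude_any and link_admin_group & exclude_any:
        return False                       # any shared name excludes the link
    if include_any and not (link_admin_group & include_any):
        return False                       # at least one shared name required
    if include_all and not include_all <= link_admin_group:
        return False                       # every named bit must be present
    return True

link = {"red", "green"}
print(link_passes_affinity(link, include_any={"green", "blue"}))   # True
print(link_passes_affinity(link, exclude_any={"red"}))             # False
print(link_passes_affinity(link, include_all={"red", "green"}))    # True
```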
Usage Scenarios
The affinity naming function is used when CSPF calculates paths over which RSVP-TE establishes CR-LSPs.
Benefits
The affinity naming function allows you to easily and rapidly use affinity bits to control paths over which CR-LSPs are established.
Tunnel Optimization
Tunnel Re-optimization
MPLS TE tunnel re-optimization enables a TE tunnel to be automatically reestablished over new optimal paths when the MPLS network topology changes.
Background
A main function of MPLS TE tunnels is to optimize traffic distribution over a network. Generally, the initial bandwidth of an MPLS TE tunnel is configured based on the initial bandwidth requirement of services, and its path is calculated and set up based on the initial network status. However, the network topology changes in some cases, which may cause bandwidth waste or require traffic distribution optimization. As such, MPLS TE tunnel re-optimization is required.
Implementation
Tunnel re-optimization allows the ingress to re-optimize a CR-LSP based on certain events so that the CR-LSP can be established over the optimal path with the smallest metric value.
If the fixed filter (FF) resource reservation style is used, tunnel re-optimization cannot be configured.
Tunnel re-optimization is performed based on tunnel path constraints. During path calculation for re-optimization, path constraints, such as explicit path constraints and bandwidth constraints, are also considered.
Automatic re-optimization
An interval at which a tunnel is re-optimized is configured on the ingress. When the interval elapses, CSPF attempts to calculate a new path. If the calculated path has a metric smaller than that of the existing CR-LSP, a new CR-LSP is set up over the new path. After the CR-LSP is successfully set up, the ingress instructs the forwarding plane to switch traffic to the new CR-LSP and tears down the original CR-LSP. Re-optimization is then complete. If the CR-LSP fails to be set up, traffic is still forwarded along the original CR-LSP.
Manual re-optimization
The re-optimization command is run in the user view to trigger re-optimization on the tunnel ingress.
The make-before-break mechanism is used to ensure uninterrupted service transmission during the re-optimization process. This means that a new CR-LSP must be established first. Traffic is switched to the new CR-LSP before the original CR-LSP is torn down.
Automatic Bandwidth Adjustment
Automatic bandwidth adjustment enables the ingress of an MPLS TE tunnel to dynamically update tunnel bandwidth after traffic changes and to reestablish the MPLS TE tunnel using changed bandwidth values, all of which optimizes bandwidth resource usage.
Background
MPLS TE tunnels are used to optimize traffic distribution over a network. The bandwidth of an MPLS TE tunnel is initially set to meet the requirement of the maximum volume of services to be transmitted over it, to ensure uninterrupted transmission. When traffic changes frequently, such a fixed bandwidth setting wastes MPLS TE tunnel bandwidth; automatic bandwidth adjustment is used to prevent this waste.
Related Concepts
Automatic bandwidth adjustment allows the ingress to dynamically detect bandwidth changes and periodically attempt to reestablish a tunnel with the needed bandwidth.
Variable | Notation | Description
---|---|---
Adjustment frequency | A | Interval at which bandwidth adjustment is performed.
Sampling frequency | B | Interval at which traffic rates on a specific tunnel interface are sampled. This value is the larger of the values set using the mpls te timer auto-bandwidth command and the set flow-stat interval command.
Existing bandwidth | C | Currently configured bandwidth.
Target bandwidth | D | Updated bandwidth after adjustment.
Threshold | Threshold | An average bandwidth is calculated after the sampling interval elapses. If the ratio of the difference between the average bandwidth and the existing bandwidth to the existing bandwidth exceeds this threshold, automatic bandwidth adjustment is triggered.
Implementation
Samples traffic.
The ingress starts a bandwidth adjustment timer (A) and samples traffic at a specific interval (B seconds) to obtain the instantaneous bandwidth during each sampling period. The ingress records the instantaneous bandwidths.
Calculates an average bandwidth.
After timer A expires, the ingress uses the records to calculate an average bandwidth (D) to be used as a target bandwidth.
A device must sample bandwidth values at least twice within a configured adjustment interval. If the device samples bandwidth values fewer than two times within the interval, automatic bandwidth adjustment is not performed, and the existing samples are counted in the next bandwidth adjustment interval.
Calculates a path.
The ingress runs CSPF to calculate a path with bandwidth D and establishes a new CR-LSP over that path.
Switches traffic to the new CR-LSP.
The ingress switches traffic to the new CR-LSP before tearing down the original CR-LSP.
The preceding procedure repeats each time automatic bandwidth adjustment is triggered. Bandwidth adjustment is not needed if traffic fluctuates below a specific threshold. The ingress calculates an average bandwidth after the sampling interval time elapses. The ingress performs automatic bandwidth adjustment if the ratio of the difference between the average and existing bandwidths to the existing bandwidth exceeds a specific threshold. The following inequality applies:
[ |(D - C)| / C ] x 100% > Threshold
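A worked example of this trigger condition is shown below; the sample values and function name are illustrative, not device code.

```python
def should_adjust_bandwidth(samples, existing_bw, threshold_percent):
    """Decide whether automatic bandwidth adjustment is triggered.

    samples: instantaneous bandwidth values (for example in kbit/s) collected
    during the adjustment interval; at least two samples are required.
    existing_bw: the currently configured tunnel bandwidth (C).
    threshold_percent: the configured trigger threshold, in percent.
    Returns (trigger, target_bandwidth).
    """
    if len(samples) < 2:
        return False, existing_bw          # too few samples: keep counting
    target_bw = sum(samples) / len(samples)          # average bandwidth (D)
    change_percent = abs(target_bw - existing_bw) / existing_bw * 100
    return change_percent > threshold_percent, target_bw

samples = [120_000, 130_000, 140_000]                # kbit/s
print(should_adjust_bandwidth(samples, existing_bw=100_000, threshold_percent=10))
# (True, 130000.0) -> the tunnel is re-established with about 130 Mbit/s
```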
Other Usage
- The ingress only samples traffic on a tunnel interface, and does not perform bandwidth adjustment.
- The upper and lower limits can be set to define a range, within which the bandwidth can fluctuate.
IP-Prefix Tunnel
The IP-prefix tunnel function enables the creation of MPLS TE tunnels in a batch, which helps simplify configuration and improve deployment efficiency.
Background
MPLS TE provides various TE and reliability functions, and MPLS TE applications are increasing; the complexity of MPLS TE tunnel configurations, however, also increases. Manually configuring full-meshed TE tunnels on a large network is laborious and time-consuming. To address these issues, the HUAWEI NetEngine 8000 F1A series implements the IP-prefix tunnel function. This function uses an IP prefix list to automatically establish a number of tunnels to specified destination IP addresses and applies a tunnel template that contains public attributes to these tunnels. In this way, MPLS TE tunnels that meet expectations can be established in a batch.
Benefits
The IP-prefix tunnel function allows you to establish MPLS TE tunnels in a batch. This function satisfies various configuration requirements, such as reliability requirements, and reduces TE network deployment workload.
Implementation
- Configure an IP prefix list that contains multiple destination IP addresses.
- Configure a tunnel template to set public attributes.
- Use the template to automatically establish MPLS TE tunnels to the specified destination IP addresses.
The IP-prefix tunnel function uses the IP prefix list to filter LSR IDs in the traffic engineering database (TEDB). Only the LSR IDs that match the IP prefix list can be used as destination IP addresses of MPLS TE tunnels that are to be automatically established. After LSR IDs in the TEDB are added or deleted, the IP-prefix tunnel function automatically creates or deletes tunnels, respectively. The tunnel template that the IP-prefix tunnel function uses contains various configured attributes, such as the bandwidth, priorities, affinities, TE FRR, CR-LSP backup, and automatic bandwidth adjustment. The attributes are shared by MPLS TE tunnels that are established in a batch.
MPLS TE Reliability
Make-Before-Break
The make-before-break mechanism prevents traffic loss during a traffic switchover between two CR-LSPs. This mechanism improves MPLS TE tunnel reliability.
Background
MPLS TE provides a set of tunnel update mechanisms that prevent traffic loss during tunnel updates. In real-world situations, an administrator can modify the bandwidth or explicit path attributes of an established MPLS TE tunnel based on service requirements, or an updated topology may allow for a path better than the existing one, over which an MPLS TE tunnel can be established. Any change in bandwidth or path attributes causes a CR-LSP in an MPLS TE tunnel to be reestablished using the new attributes and causes traffic to switch from the previous CR-LSP to the newly established CR-LSP. During the traffic switchover, the make-before-break mechanism prevents the traffic loss that would occur if the original CR-LSP were removed before traffic is switched to the new CR-LSP.
Principles
Make-before-break is a mechanism that allows a CR-LSP to be established using changed bandwidth and path attributes over a new path before the original CR-LSP is torn down. It helps minimize data loss and additional bandwidth consumption. The new CR-LSP is called a modified CR-LSP. Make-before-break is implemented using the shared explicit (SE) resource reservation style.
The new CR-LSP competes with the original CR-LSP on some shared links for bandwidth. The new CR-LSP cannot be established if it fails the competition. The make-before-break mechanism allows the system to reserve bandwidth used by the original CR-LSP for the new CR-LSP, without calculating the bandwidth to be reserved. Additional bandwidth is used if links on the new path do not overlap the links on the original path.
In this example, the maximum reservable bandwidth on each link is 60 Mbit/s on the network shown in Figure 1-2190. A CR-LSP along the path LSRA → LSRB → LSRC → LSRD is established, with the bandwidth of 40 Mbit/s.
The path is expected to change to LSRA → LSRE → LSRC → LSRD to forward data because LSRE has a light load. The reservable bandwidth of the link between LSRC and LSRD is just 20 Mbit/s. The total available bandwidth for the new path is less than 40 Mbit/s. The make-before-break mechanism can be used in this situation.
The make-before-break mechanism allows the newly established CR-LSP over the path LSRA → LSRE → LSRC → LSRD to use the bandwidth of the original CR-LSP's link between LSRC and LSRD. After the new CR-LSP is established over the path, traffic switches to the new CR-LSP, and the original CR-LSP is torn down.
In addition to the preceding path-change scenario, make-before-break can also be used to increase the bandwidth of a tunnel. Because the bandwidth already reserved by the original CR-LSP on shared links can be reused, a new CR-LSP with a larger bandwidth can be established.
In the example shown in Figure 1-2190, the maximum reservable bandwidth on each link is 60 Mbit/s. A CR-LSP along the path LSRA → LSRB → LSRC → LSRD is established, with the bandwidth of 30 Mbit/s.
The path is expected to change to LSRA → LSRE → LSRC → LSRD to forward data because LSRE has a light load, and the bandwidth is expected to increase to 40 Mbit/s. The reservable bandwidth of the link between LSRC and LSRD is just 30 Mbit/s. The total available bandwidth for the new path is less than 40 Mbit/s. The make-before-break mechanism can be used in this situation.
The make-before-break mechanism allows the newly established CR-LSP over the path LSRA → LSRE → LSRC → LSRD to use the bandwidth of the original CR-LSP's link between LSRC and LSRD. The bandwidth of the new CR-LSP is 40 Mbit/s, out of which 30 Mbit/s is released by the link between LSRC and LSRD. After the new CR-LSP is established, traffic switches to the new CR-LSP and the original CR-LSP is torn down.
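The bandwidth sharing on overlapping links can be sketched as follows; the path representation and values reuse the second example above and are illustrative only.

```python
def extra_bandwidth_needed(link, old_path, new_path, old_bw, new_bw):
    """Bandwidth that must still be free on a link for the modified CR-LSP.

    With the SE reservation style, a link shared by the old and new paths
    lets the new CR-LSP reuse the old reservation, so only the increase
    (if any) must be free. Non-shared links need the full new bandwidth.
    """
    if link in old_path and link in new_path:
        return max(new_bw - old_bw, 0)
    if link in new_path:
        return new_bw
    return 0

old_path = [("LSRA", "LSRB"), ("LSRB", "LSRC"), ("LSRC", "LSRD")]
new_path = [("LSRA", "LSRE"), ("LSRE", "LSRC"), ("LSRC", "LSRD")]

# Second example above: old CR-LSP uses 30 Mbit/s, new CR-LSP needs 40 Mbit/s.
for link in new_path:
    print(link, extra_bandwidth_needed(link, old_path, new_path, 30, 40), "Mbit/s")
# ('LSRA', 'LSRE') 40, ('LSRE', 'LSRC') 40, ('LSRC', 'LSRD') 10
```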
Delayed Switchover and Deletion
If an upstream node on an MPLS network is busy but its downstream node is idle or an upstream node is idle but its downstream node is busy, a CR-LSP may be torn down before the new CR-LSP is established, causing a temporary traffic interruption.
To prevent this temporary traffic interruption, a switching delay and a deletion delay are used together with the make-before-break mechanism. Traffic is switched to the new CR-LSP only after a specified switching delay following its establishment, and the original CR-LSP is torn down only after a specified deletion delay. Both delays can be manually configured.
TE FRR
Traffic engineering (TE) fast reroute (FRR) protects links and nodes on MPLS TE tunnels. If a link or node fails, TE FRR rapidly switches traffic to a backup path, minimizing traffic loss.
Background
Generally, a link or node failure in an MPLS TE tunnel triggers a primary/backup CR-LSP switchover. During the switchover, IGP routes converge to a backup CR-LSP, and CSPF recalculates a path over which the primary CR-LSP can be reestablished. Traffic is dropped during this process.
TE FRR can be used to minimize traffic loss. It pre-establishes backup paths that bypass faulty links and nodes. If a link or node on an MPLS TE tunnel fails, traffic can be rapidly switched to a backup path to prevent traffic loss, without depending on IGP route convergence. In addition, when traffic is transmitted along the backup path, the ingress will initiate the reestablishment of the primary path.
Benefits
TE FRR provides carrier-class local protection capabilities for MPLS TE, improving the reliability of an entire network.
Related Concepts
TE FRR is implemented in either of the following modes:
Facility backup mode
One-to-one backup mode
Table 1-922 describes some concepts in TE FRR.
Concept | Supported Protection Mode | Description
---|---|---
Primary CR-LSP | Facility backup; one-to-one backup | The primary CR-LSP that is protected.
Bypass CR-LSP | Facility backup | A backup CR-LSP that can protect multiple primary CR-LSPs. A bypass CR-LSP and its primary CR-LSPs belong to different tunnels.
Detour CR-LSP | One-to-one backup | A backup CR-LSP that is automatically established on each node of a primary CR-LSP. A detour CR-LSP and its primary CR-LSP belong to the same tunnel.
PLR | Facility backup; one-to-one backup | PLR is short for point of local repair. It is the ingress of a bypass or detour CR-LSP. It must reside on a primary CR-LSP and can be the ingress or a transit node, but not the egress, of the primary CR-LSP.
MP | Facility backup; one-to-one backup | MP is short for merge point. It is the aggregation point of a bypass or detour CR-LSP and a primary CR-LSP. It cannot be the ingress of the primary CR-LSP.
DMP | One-to-one backup | DMP is short for detour merge point. It is the aggregation point of detour CR-LSPs.
Classified By | Type | Facility Backup | One-to-One Backup
---|---|---|---
Protected object | Node protection | If a PLR and an MP are not directly connected, a backup CR-LSP protects the direct link to the PLR and also the nodes between the PLR and MP. The bypass CR-LSP in Figure 1-2191 provides node protection. | Detour CR-LSP1 in Figure 1-2192 provides node protection.
Protected object | Link protection | If a PLR and an MP are directly connected, a backup CR-LSP protects only the direct link to the PLR. | Detour CR-LSP2 in Figure 1-2192 provides link protection.
Bandwidth guarantee | Bandwidth protection | It is recommended that the bandwidth of a bypass CR-LSP be less than or equal to the bandwidth of the primary CR-LSP. | By default, a detour CR-LSP has the same bandwidth as its primary CR-LSP and automatically provides bandwidth protection for the primary CR-LSP.
Bandwidth guarantee | Non-bandwidth protection | If no bandwidth is configured for a bypass CR-LSP, it provides only path protection for the primary CR-LSP. | Not supported.
Implementation | Manual mode | A bypass CR-LSP is manually configured. | Not supported.
Implementation | Automatic mode | Auto FRR-enabled nodes automatically establish bypass CR-LSPs. A node automatically establishes a bypass CR-LSP and binds it to a primary CR-LSP only if the primary CR-LSP requires FRR and the topology meets FRR requirements. | All detour CR-LSPs are automatically established, without requiring manual configuration.
In facility backup mode, an established bypass CR-LSP supports a combination of the above protection types. For example, a bypass CR-LSP can implement manual, node, and bandwidth protection.
Implementation
Facility backup mode
In this mode, TE FRR is implemented as follows:
Primary CR-LSP establishment
A primary CR-LSP is established in a way similar to that of an ordinary CR-LSP. The difference is that the ingress appends the following flags into the Session_Attribute object in a Path message: Local protection desired, Label recording desired, and SE style desired. If bandwidth protection is required, the "Bandwidth protection desired" flag is also added.
Figure 1-2193 TE FRR local protection
Bypass CR-LSP binding
The process of searching for a proper bypass CR-LSP for a primary CR-LSP is called binding. Only the primary CR-LSP with the "Local protection desired" flag can trigger a binding process. The binding must be complete before a primary/bypass CR-LSP switchover is performed. During the binding, the node must obtain information about the outbound interface of the bypass CR-LSP, next hop label forwarding entry (NHLFE), LSR ID of the MP, label allocated by the MP, and protection type.
The PLR of the primary CR-LSP already knows the next hop (NHOP) and next-next hop (NNHOP). Link protection can be provided if the egress LSR ID of the bypass CR-LSP is the same as the NHOP LSR ID. Node protection can be provided if the egress LSR ID of the bypass CR-LSP is the same as the NNHOP LSR ID. For example, in Figure 1-2194, Bypass CR-LSP 1 protects a link, and Bypass CR-LSP 2 protects a node.
If multiple bypass CR-LSPs are available on a node, the node selects a bypass CR-LSP based on the following factors in sequence: bandwidth/non-bandwidth protection, implementation mode, and protected object. Bandwidth protection takes precedence over non-bandwidth protection, node protection takes precedence over link protection, and manual protection takes precedence over automatic protection. If both Bypass CR-LSP 1 and Bypass CR-LSP 2 shown in Figure 1-2194 are manually configured and provide bandwidth protection, the primary CR-LSP selects Bypass CR-LSP 2 for binding. If Bypass CR-LSP 1 provides bandwidth protection but Bypass CR-LSP 2 provides only path protection, the primary CR-LSP selects Bypass CR-LSP 1 for binding.
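The following Python sketch is illustrative only (not device code; the data structure and function names are assumptions). It ranks candidate bypass CR-LSPs according to the selection order described above.

```python
# Illustrative sketch: ranking candidate bypass CR-LSPs for binding, following
# the documented order: bandwidth protection first, then manual over automatic
# configuration, then node protection over link protection.
from dataclasses import dataclass

@dataclass
class BypassCandidate:
    name: str
    bandwidth_protection: bool  # True if the bypass reserves protection bandwidth
    manual: bool                # True if manually configured, False for Auto FRR
    node_protection: bool       # True if the bypass egress is the NNHOP

def select_bypass(candidates):
    """Return the preferred bypass CR-LSP, or None if no candidate exists."""
    if not candidates:
        return None
    # Negating each flag makes True sort before False, matching the precedence.
    return min(
        candidates,
        key=lambda c: (not c.bandwidth_protection, not c.manual, not c.node_protection),
    )

# Example matching Figure 1-2194: both bypass CR-LSPs are manual and provide
# bandwidth protection, so the node-protection bypass (Bypass CR-LSP 2) wins.
candidates = [
    BypassCandidate("Bypass CR-LSP 1", bandwidth_protection=True, manual=True, node_protection=False),
    BypassCandidate("Bypass CR-LSP 2", bandwidth_protection=True, manual=True, node_protection=True),
]
print(select_bypass(candidates).name)  # Bypass CR-LSP 2
```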
After a bypass CR-LSP is successfully bound to the primary CR-LSP, the NHLFE of the primary CR-LSP is recorded. The NHLFE contains the NHLFE index of the bypass CR-LSP and the inner label assigned by the MP for the previous node. The inner label is used to guide traffic forwarding during FRR switching.
- Fault detection
- In link protection, a data link layer protocol is used to detect and advertise faults. The fault detection speed at the data link layer depends on link types.
- In node protection, a data link layer protocol is used to detect link faults. If no link fault occurs, RSVP Hello detection or BFD for RSVP is used to detect faults in protected nodes.
If a link or node fault is detected, FRR switching is triggered immediately.
In node protection, only the link between the protected node and the PLR is protected. The PLR cannot detect faults in the link between the protected node and the MP.
Switchover
A switchover is a process that switches both service traffic and RSVP messages to a bypass CR-LSP and notifies the upstream node of the switchover when a primary CR-LSP fails. During the switchover, the MPLS label nesting mechanism is used. The PLR pushes the label that the MP assigns for the primary CR-LSP as the inner label, and then the label for the bypass CR-LSP as the outer label. The penultimate hop along the bypass CR-LSP removes the outer label from the packet and forwards the packet with only the inner label to the MP. Because the inner label is assigned by the MP, the MP can forward the packet to the next hop on the primary CR-LSP.
Assume that a primary CR-LSP and a bypass CR-LSP are set up. Figure 1-2195 describes the labels assigned by each node on the primary CR-LSP and the forwarding actions. The bypass CR-LSP provides node protection. If LSRC or the link between LSRB and LSRC fails, traffic is switched to the bypass CR-LSP. During the switchover, the PLR LSRB swaps 1024 for 1022 and then pushes label 34 as an outer label. This ensures that the packet can be forwarded to the next hop after reaching LSRD. Figure 1-2196 shows the forwarding process.
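The following Python sketch is illustrative only (not forwarding-plane code). It shows the label operations the PLR performs during the switchover, using the example label values from Figure 1-2195 and Figure 1-2196.

```python
# Illustrative sketch: the PLR swaps the primary-LSP label for the MP-assigned
# label (1024 -> 1022) and then pushes the bypass-LSP label (34) as the outer label.
def plr_frr_switchover(label_stack, swap_to_mp_label, bypass_outer_label):
    """Rewrite the label stack of a packet redirected onto the bypass CR-LSP.

    label_stack is a list with the outermost label first.
    """
    stack = list(label_stack)
    stack[0] = swap_to_mp_label          # inner label understood by the MP (LSRD)
    stack.insert(0, bypass_outer_label)  # outer label that steers the packet along the bypass
    return stack

# A packet arrives at the PLR (LSRB) carrying label 1024 on the primary CR-LSP.
print(plr_frr_switchover([1024], swap_to_mp_label=1022, bypass_outer_label=34))
# [34, 1022] -> the bypass penultimate hop pops 34, and LSRD forwards based on 1022.
```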
Switchback
After the switchover, the ingress of the primary CR-LSP attempts to reestablish the primary CR-LSP. After the primary CR-LSP is successfully reestablished, service traffic and RSVP messages are switched back from the bypass CR-LSP to the primary CR-LSP. The reestablished CR-LSP is called a modified CR-LSP. In this process, TE FRR (including Auto FRR) adopts the make-before-break mechanism. With this mechanism, the original primary CR-LSP is torn down only after the modified CR-LSP is set up successfully.
One-to-one backup mode
In this mode, TE FRR is implemented as follows:
Primary CR-LSP establishment
The process of establishing a primary CR-LSP in one-to-one backup mode is similar to that in facility backup mode. The ingress appends the "Local protection desired", "Label recording desired", and "SE style desired" flags to the Session_Attribute object carried in a Path message.
Detour LSP establishment
When a primary CR-LSP is set up, each node, except the egress, on the primary CR-LSP assumes that it is a PLR and attempts to set up detour CR-LSPs to protect its downstream link or node. A qualified node establishes a detour CR-LSP based on CSPF calculation results and becomes the real PLR.
Each PLR has a known next hop (NHOP). A PLR establishes a detour CR-LSP to provide a specific type of protection:
Link protection is provided if the detour CR-LSP's egress LSR ID is the same as the NHOP LSR ID. (For example, Detour CR-LSP2 in Figure 1-2197 provides link protection.)
Node protection is provided if the detour CR-LSP's egress LSR ID is not the same as the NHOP LSR ID (that is, other nodes exist between the PLR and MP). (For example, Detour CR-LSP1 in Figure 1-2197 provides node protection.)
If a PLR supports detour CR-LSPs that provide both link and node protection, the PLR can establish only detour CR-LSPs that provide node protection.
Fault detection
In link protection, a data link layer protocol is used to detect and advertise faults. The fault detection speed at the data link layer depends on link types.
In node protection, a data link layer protocol is used to detect link faults. If no link fault occurs, RSVP Hello detection or BFD for RSVP is used to detect faults in protected nodes.
If a link or node fault is detected, FRR switching is triggered immediately.
In node protection, only the link between the protected node and PLR is protected. The PLR cannot detect faults in the link between the protected node and MP.
Switchover
A switchover is a process that switches both service traffic and RSVP messages to a detour CR-LSP and notifies the upstream node of the switchover when a primary CR-LSP fails. During a switchover in this mode, the MPLS label nesting mechanism is not used, and the label stack depth remains unchanged. This is different from that in facility backup mode.
In Figure 1-2197, a primary CR-LSP and two detour CR-LSPs are established. If no faults occur, traffic is forwarded along the primary CR-LSP based on labels. If the link between LSRB and LSRC fails, LSRB detects the link fault and switches traffic to Detour CR-LSP2 by swapping label 1024 for label 36 in a packet and sending the packet to LSRE. LSRE is the DMP of these two detour CR-LSPs. On LSRE, detour LSPs 1 and 2 merge into one detour CR-LSP (for example, Detour CR-LSP1). LSRE swaps the existing label for label 37 and sends the packet to LSRC. On LSRC, Detour CR-LSP1 overlaps with the primary CR-LSP. Therefore, LSRC uses the label of the primary CR-LSP and sends the packet to the egress.
Switchback
After the switchover, the ingress of the primary CR-LSP attempts to reestablish the primary CR-LSP, and service traffic and RSVP messages are switched back from the detour CR-LSP to the primary CR-LSP after it is established successfully. The reestablished CR-LSP is called a modified CR-LSP. In this process, TE FRR adopts the make-before-break mechanism. With this mechanism, the original primary CR-LSP is torn down only after the modified CR-LSP is set up successfully.
Other Functions
When TE FRR is in the FRR in-use state, RSVP messages sent by the transmit interface do not carry the interface authentication TLV, and the receive interface does not perform interface authentication on such messages. In this case, you can configure neighbor authentication.
Board removal protection: When the interface board where a primary CR-LSP's outbound interface resides is removed from a PLR, MPLS TE traffic is rapidly switched to a backup path. When the interface board is re-installed, MPLS TE traffic can be switched back to the primary path if the outbound interface of the primary path is still available. Board removal protection protects traffic on the primary CR-LSP's outbound interface of the PLR.
Without board removal protection, after an interface board on which a tunnel interface resides is removed, tunnel information is lost. To prevent tunnel information loss, ensure that the interface board to be removed does not have the following interfaces: primary CR-LSP's tunnel interface on the PLR, bypass CR-LSP's tunnel interface, bypass CR-LSP's outbound interface, or detour CR-LSP's outbound interface. Configuring a TE tunnel interface on a PLR's IPU is recommended.
After a TE tunnel interface is configured on the IPU, if the interface board on which the physical outbound interface of the primary CR-LSP resides is removed or fails, the outbound interface enters the stale state and the FRR-enabled primary CR-LSP that passes through the outbound interface is not deleted. When the interface board is re-inserted, the interface becomes available, and the primary CR-LSP reestablishment starts.
CR-LSP Backup
On one tunnel, a CR-LSP used to protect the primary CR-LSP is called a backup CR-LSP.
A backup CR-LSP protects traffic on important CR-LSPs. If a primary CR-LSP fails, traffic switches to a backup CR-LSP.
If the ingress detects that a primary CR-LSP is unavailable, the ingress switches traffic to a backup CR-LSP. After the primary CR-LSP recovers, traffic switches back. Traffic on the primary CR-LSP is protected.
CR-LSP backup is performed in either of the following modes:
Hot standby: A backup CR-LSP is set up immediately after a primary CR-LSP is set up. If the primary CR-LSP fails, traffic switches to the backup CR-LSP. If the primary CR-LSP recovers, traffic switches back to the primary CR-LSP. Hot-standby CR-LSPs support best-effort paths.
Ordinary backup: A backup CR-LSP is set up after a primary CR-LSP fails. If the primary CR-LSP fails, a backup CR-LSP is set up and takes over traffic from the primary CR-LSP. If the primary CR-LSP recovers, traffic switches back to the primary CR-LSP.
Table 1-924 lists the differences between hot-standby and ordinary backup CR-LSPs.
Table 1-924 Differences between hot-standby and ordinary backup CR-LSPs
Item | Hot Standby | Ordinary Backup
---|---|---
When a backup CR-LSP is established | Created immediately after the primary CR-LSP is established. | Created only after the primary CR-LSP fails.
Path overlapping | Whether or not a primary CR-LSP overlaps a backup CR-LSP can be determined manually. If an explicit path is allowed for a backup CR-LSP, the backup CR-LSP can be set up over an explicit path. | Allowed to use the path of the primary CR-LSP in any case.
Best-effort path support | Supported | Not supported
Best-effort path
The hot standby function supports the establishment of best-effort paths. If both the primary and hot-standby CR-LSPs fail, a best-effort path is established and takes over traffic.
As shown in Figure 1-2198, the primary CR-LSP uses the path PE1 -> P1 -> PE2, and the backup CR-LSP uses the path PE1 -> P2 -> PE2. If both the primary and backup CR-LSPs fail, PE1 triggers the setup of a best-effort path PE1 -> P2 -> P1 -> PE2.
A best-effort path does not provide reserved bandwidth for traffic. The affinity attribute and hop limit are configured as needed.
Hot-standby CR-LSP Switchover and Revertive Switchover Policy
- Automatic switchover: Traffic switches to a hot-standby CR-LSP from a primary CR-LSP when the primary CR-LSP goes Down. If the primary CR-LSP goes Up again, traffic automatically switches back to the primary CR-LSP. This is the default setting. You can determine whether to switch traffic back to the primary CR-LSP and set a revertive switchover delay time.
- Manual switchover: You can manually trigger a traffic switchover. Forcibly switch traffic from the primary CR-LSP to a hot-standby CR-LSP before some devices on a primary CR-LSP are upgraded or primary CR-LSP parameters are adjusted. After the required operations are complete, manually switch traffic back to the primary CR-LSP.
Path Overlapping
The path overlapping function can be configured for hot-standby CR-LSPs. This function allows a hot-standby CR-LSP to use links of a primary CR-LSP. The hot-standby CR-LSP protects traffic on the primary CR-LSP.
Comparison Between CR-LSP Backup and Other Features
The difference between CR-LSP backup and TE FRR is as follows:
CR-LSP backup is end-to-end path protection for an entire CR-LSP.
Fast reroute (FRR) is a partial protection mechanism used to protect a link or node on a CR-LSP. In addition, FRR rapidly responds to a fault and takes effect temporarily, which minimizes the switchover time.
CR-LSP hot standby and TE FRR are used together.
If a protected link or node fails, a point of local repair (PLR) switches traffic to a bypass tunnel. If the PLR is the ingress of the primary CR-LSP, the PLR immediately switches traffic to a hot-standby CR-LSP. If the PLR is a transit node of the primary CR-LSP, it uses signaling to advertise fault information to the ingress of the primary CR-LSP, and the ingress switches traffic to the hot-standby CR-LSP. If the hot-standby CR-LSP is Down, the ingress keeps attempting to reestablish a hot-standby CR-LSP.
CR-LSP ordinary backup and TE FRR are used together.
The association is disabled.
If a protected link or node fails, a PLR switches traffic to a bypass tunnel. Only after both the primary and bypass CR-LSPs fail does the ingress of the primary CR-LSP attempt to establish an ordinary backup CR-LSP.
The association is enabled (FRR in Use).
If a protected link or node fails, a PLR switches traffic to a bypass tunnel. If the PLR is the ingress of the primary CR-LSP, the PLR attempts to establish an ordinary backup CR-LSP. If the ordinary backup CR-LSP is successfully established, the PLR switches traffic to the new CR-LSP. If the PLR is a transit node on the primary CR-LSP, the PLR advertises the fault to the ingress of the primary CR-LSP, and the ingress attempts to establish an ordinary backup CR-LSP. If the ordinary backup CR-LSP is successfully established, the ingress switches traffic to the new CR-LSP.
If the ordinary backup CR-LSP fails to be established, traffic keeps traveling through the bypass CR-LSP.
Isolated CR-LSP Computation
Isolated CR-LSP computation enables a device to compute isolated primary and hot-standby CR-LSPs using the disjoint algorithm and constrained shortest path first (CSPF) algorithm simultaneously.
Background
Most live IP radio access networks (RANs) use ring topologies and have the access ring separated from the aggregation ring. Figure 1-2199 illustrates an E2E VPN bearer solution. On this network, an inter-layer MPLS TE tunnel is established between a cell site gateway (CSG) on the access ring and a radio service gateway (RSG) on the aggregation ring. The MPLS TE tunnel implements E2E VPN service transmission. To meet high reliability requirements for IP RAN bearer, hot standby is deployed for the TE tunnel, and the primary and hot-standby CR-LSPs need to be separated.
However, the existing CSPF algorithm used by TE selects a CR-LSP with the smallest link metric and cannot automatically calculate separated primary and hot-standby CR-LSPs. Assume that the TE metric of each link is as shown in Figure 1-2199. CSPF calculates the primary CR-LSP as CSG-ASG1-ASG2-RSG, but cannot calculate a hot-standby CR-LSP that is completely separated from the primary CR-LSP. However, two completely separated CR-LSPs exist on the network: CSG-ASG1-RSG and CSG-ASG2-RSG.
These two completely separated CR-LSPs can be obtained by specifying strict explicit paths. However, nodes are frequently added to or deleted from an IP RAN, and specifying strict explicit paths requires frequent modification of path information, causing a heavy O&M workload.
An ideal solution to the problem is to optimize CSPF path calculation so that CSPF can automatically calculate separated primary and hot-standby CR-LSPs. To achieve this purpose, isolated CR-LSP computation is introduced.
Implementation
Isolated CR-LSP computation uses the disjoint algorithm to optimize CSPF path calculation. On the network shown in Figure 1-2200, before the disjoint algorithm is used, CSPF selects CR-LSPs based on link metrics. It calculates LSRA-LSRB-LSRC-LSRD as the primary CR-LSP, and then LSRA-LSRC-LSRD as the hot-standby CR-LSP if the hot-standby overlap-path function is configured. These CR-LSPs, however, are not completely separated.
After the disjoint algorithm is used, CSPF calculates the primary and backup CR-LSPs at the same time and excludes the paths that may cause overlapping. Two completely separated CR-LSPs can then be calculated, with the primary CR-LSP being LSRA-LSRB-LSRD, and the hot-standby CR-LSP being LSRA-LSRC-LSRD.
CSPF calculates separate primary and hot-standby CR-LSPs only when the network topology permits. If there are no two completely separate CR-LSPs, CSPF calculates the primary and hot-standby CR-LSPs based on the original CSPF algorithm.
The disjoint algorithm is mutually exclusive with the explicit path and hop limit. Ensure that these features are not deployed before enabling the disjoint algorithm. If this algorithm has been enabled, these features cannot be deployed.
After you enable the disjoint algorithm, the shared risk link group (SRLG), if configured, becomes ineffective.
If an affinity constraint is configured, the disjoint algorithm takes effect only when the primary and backup CR-LSPs have the same affinity property or no affinity property is configured for the primary and backup CR-LSPs.
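The following Python sketch is illustrative only and does not represent the device's disjoint algorithm. It brute-forces all simple paths on a small topology modeled after Figure 1-2200 (the metrics are assumed) and selects the cheapest pair of paths that share no transit node, which shows why computing both CR-LSPs jointly can find separated paths that sequential CSPF calculation misses.

```python
# Illustrative sketch: jointly selecting a primary/hot-standby pair that share
# no transit node, instead of computing the primary first and the backup second.
from itertools import combinations

# Adjacency with assumed TE metrics (topology resembling Figure 1-2200).
GRAPH = {
    "LSRA": {"LSRB": 1, "LSRC": 10},
    "LSRB": {"LSRA": 1, "LSRC": 1, "LSRD": 10},
    "LSRC": {"LSRA": 10, "LSRB": 1, "LSRD": 1},
    "LSRD": {"LSRB": 10, "LSRC": 1},
}

def simple_paths(src, dst, path=None):
    path = path or [src]
    if src == dst:
        yield path
        return
    for nxt in GRAPH[src]:
        if nxt not in path:
            yield from simple_paths(nxt, dst, path + [nxt])

def cost(path):
    return sum(GRAPH[a][b] for a, b in zip(path, path[1:]))

def disjoint_pair(src, dst):
    paths = list(simple_paths(src, dst))
    pairs = [
        (p, q) for p, q in combinations(paths, 2)
        if not (set(p[1:-1]) & set(q[1:-1]))  # no shared transit nodes
    ]
    if not pairs:
        return None  # topology does not permit separation; fall back to plain CSPF
    return min(pairs, key=lambda pq: cost(pq[0]) + cost(pq[1]))

primary, backup = disjoint_pair("LSRA", "LSRD")
print(primary, backup)  # ['LSRA', 'LSRB', 'LSRD'] ['LSRA', 'LSRC', 'LSRD']
```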
Application Scenarios
This feature applies to scenarios where RSVP-TE tunnels and hot standby are deployed.
Benefits
Isolated CR-LSP computation enables CSPF to isolate the primary and hot-standby CR-LSPs if possible. This feature brings the following benefits:
Improves the reliability of hot backup protection.
Reduces the maintenance workload as explicit path information does not need to be maintained.
Association Between CR-LSP Establishment and the IS-IS Overload
An association between constraint-based routed label switched path (CR-LSP) establishment and IS-IS overload enables the ingress to establish a CR-LSP that excludes overloaded IS-IS nodes. The association ensures that MPLS TE traffic travels properly along the CR-LSP, therefore improving CR-LSP reliability and service transmission quality.
Background
If a device is unable to store new link state protocol data units (LSPs) or use LSPs to update its link state database (LSDB), the device will calculate incorrect routes, causing forwarding failures. The IS-IS overload function allows such a device to enter the IS-IS overload state to prevent these forwarding failures. By configuring the ingress to establish a CR-LSP that excludes the overloaded IS-IS device, the association between CR-LSP establishment and the IS-IS overload function helps the CR-LSP reliably transmit MPLS TE traffic.
Related Concepts
IS-IS overload state
When a device cannot store new LSPs or use LSPs to update its LSDB information, it will incorrectly calculate IS-IS routes. In this situation, the device enters the overload state. For example, an IS-IS device becomes overloaded if its memory resources decrease to a specified threshold or if an exception occurs on the device. A device can also be manually configured to enter the IS-IS overload state.
Implementation
- If RT3 enters the IS-IS overload state, IS-IS propagates packets carrying overload information in the IS-IS area.
- RT1 determines that RT3 is overloaded and re-calculates the CR-LSP destined for RT2.
- RT1 calculates a new path RT1 -> RT4 -> RT2, which bypasses the overloaded IS-IS node. Then RT1 establishes a new CR-LSP along this path.
- After the new CR-LSP is established, RT1 switches traffic from the original CR-LSP to the new CR-LSP, ensuring service transmission quality.
SRLG
The shared risk link group (SRLG) functions as a constraint that is used to calculate a backup path in the scenario where CR-LSP hot standby or TE FRR is used. This constraint helps prevent backup and primary paths from overlapping over links with the same risk level, improving MPLS TE tunnel reliability as a consequence.
Background
The primary tunnel is established over the path PE1 → P1 → P2 → PE2 on the network shown in Figure 1-2202. The link between P1 and P2 is protected by a TE FRR bypass tunnel established over the path P1 → P3 → P2.
In the lower part of Figure 1-2202, core nodes P1, P2, and P3 are connected using a transport network device. They share some transport network links marked in yellow. If a fault occurs on a shared link, both the primary and FRR bypass tunnels are affected, causing an FRR protection failure. An SRLG can be configured to prevent the FRR bypass tunnel from sharing a link with the primary tunnel, ensuring that FRR properly protects the primary tunnel.
Related Concepts
An SRLG is a set of links that share the same risk of failure. If one link in an SRLG fails, the other links in the group may also fail. If a link in this group is used by a hot-standby CR-LSP or an FRR bypass tunnel, the hot-standby CR-LSP or FRR bypass tunnel cannot provide reliable protection.
Implementation
An SRLG link attribute is a number and links with the same SRLG number are in a single SRLG.
Interior Gateway Protocol (IGP) TE advertises SRLG information to all nodes in a single MPLS TE domain. The constrained shortest path first (CSPF) algorithm uses the SRLG attribute together with other constraints, such as bandwidth, to calculate a path.
The MPLS TE SRLG works in either of the following modes:
Strict mode: The SRLG attribute is a necessary constraint used by CSPF to calculate a path for a hot-standby CR-LSP or an FRR bypass tunnel.
Preferred mode: The SRLG attribute is an optional constraint used by CSPF to calculate a path for a hot-standby CR-LSP or FRR bypass tunnel. For example, if CSPF fails to calculate a path for a hot-standby CR-LSP based on the SRLG attribute, CSPF recalculates the path, regardless of the SRLG attribute.
Usage Scenario
The SRLG attribute is used in either the TE FRR or CR-LSP hot-standby scenario.
Benefits
The SRLG attribute limits the selection of a path for a hot-standby CR-LSP or an FRR bypass tunnel, which prevents the primary and bypass tunnels from sharing links with the same risk level.
MPLS TE Tunnel Protection Group
A tunnel protection group implements E2E MPLS TE tunnel protection. If a working tunnel in a protection group fails, traffic quickly switches to a protection tunnel, minimizing traffic interruptions.
Related Concepts
As shown in Figure 1-2203, concepts related to a tunnel protection group are as follows:
Working tunnel: a tunnel to be protected.
Protection tunnel: a tunnel that protects a working tunnel.
Protection switchover: quickly switches traffic from a faulty working tunnel to a protection tunnel in a tunnel protection group, which improves network reliability.
A primary tunnel (tunnel-1) and a protection tunnel (tunnel-2) are established on LSRA on the network shown in Figure 1-2203.
On LSRA (ingress), tunnel-2 is configured as a protection tunnel for tunnel-1 (primary tunnel). If the ingress detects a fault in tunnel-1, traffic switches to tunnel-2, and LSRA attempts to reestablish tunnel-1. If tunnel-1 is successfully established, LSRA determines whether to switch traffic back to the primary tunnel based on the configured policy.
Implementation
An MPLS TE tunnel protection group uses a pre-configured protection tunnel to protect traffic on the working tunnel, improving tunnel reliability. Therefore, network planning needs to be performed before you deploy MPLS TE tunnel protection groups. To ensure effective protection, the protection tunnel must exclude the links and nodes through which the working tunnel passes.
Table 1-925 describes the implementation of a tunnel protection group.
Sequence Number | Process | Description
---|---|---
1 | Establishment | The working and protection tunnels must have the same ingress and destination address. The tunnel establishment process is the same as that of an ordinary TE tunnel. The protection tunnel can use attributes that differ from those of the working tunnel. To implement better protection, ensure that the working and protection tunnels are established over different paths as much as possible.
2 | Binding | After the tunnel protection group function is enabled for a working tunnel, the working tunnel and protection tunnel are bound to form a tunnel protection group based on the tunnel ID of the protection tunnel.
3 | Fault detection | MPLS OAM/MPLS-TP OAM is used to detect faults in an MPLS TE tunnel protection group, so that protection switching can be quickly triggered.
4 | Protection switching | The tunnel protection group supports multiple protection switching modes. An MPLS TE tunnel protection group supports only bidirectional switching. Specifically, if a traffic switchover is performed for traffic in one direction, a traffic switchover is also performed for traffic in the opposite direction.
5 | Switchback | After protection switching is complete, the system attempts to reestablish the working tunnel. If the working tunnel is successfully established, the system determines whether to switch traffic back to the working tunnel according to the configured switchback policy.
Differences Between CR-LSP Backup and a Tunnel Protection Group
Item | CR-LSP Backup | Tunnel Protection Group
---|---|---
Protected object | Primary and backup CR-LSPs are established for the same tunnel. A backup CR-LSP protects traffic on a primary CR-LSP. | In a tunnel protection group, one tunnel protects another.
TE FRR | TE FRR protection is supported only by the primary CR-LSP, not the backup CR-LSP. | A tunnel protection group depends on a reverse LSP, and a reverse LSP does not support TE FRR. Therefore, tunnels in a tunnel protection group do not support TE FRR.
LSP attributes | The primary and backup CR-LSPs have the same attributes (such as bandwidth, setup priority, and hold priority), except for the TE FRR attribute. | The attributes of the tunnels in the protection group are independent of each other. For example, a protection tunnel without bandwidth can protect a working tunnel requiring bandwidth protection.
BFD for TE CR-LSP
BFD for TE is an end-to-end rapid detection mechanism used to rapidly detect faults in the link of an MPLS TE tunnel. BFD for TE supports BFD for TE tunnel and BFD for TE CR-LSP. This section describes BFD for TE CR-LSP only.
Traditional detection mechanisms, such as RSVP Hello and Srefresh, detect faults slowly. BFD rapidly sends and receives packets to detect faults in a tunnel. If a fault occurs, BFD triggers a traffic switchover to protect traffic.
On the network shown in Figure 1-2204, without BFD, if LSRE is faulty, LSRA and LSRF cannot immediately detect the fault due to the existence of Layer 2 switches, and the Hello mechanism will be used for fault detection. However, Hello mechanism-based fault detection is time-consuming.
To address these issues, BFD can be deployed. With BFD, if LSRE fails, LSRA and LSRF can detect the fault in a short time, and traffic can be rapidly switched to the path LSRA -> LSRB -> LSRD -> LSRF.
BFD for TE can quickly detect faults on CR-LSPs. After detecting a fault on a CR-LSP, BFD immediately notifies the forwarding plane of the fault to rapidly trigger a traffic switchover. BFD for TE is usually used together with the hot-standby CR-LSP mechanism.
A BFD session is bound to a CR-LSP and established between the ingress and egress. A BFD packet is sent by the ingress to the egress along the CR-LSP. Upon receipt, the egress responds to the BFD packet. The ingress can rapidly monitor the link status of the CR-LSP based on whether a reply packet is received.
After detecting a link fault, BFD reports the fault to the forwarding module. The forwarding module searches for a backup CR-LSP and switches service traffic to the backup CR-LSP. The forwarding module then reports the fault to the control plane.
On the network shown in Figure 1-2205, a BFD session is set up to detect faults on the link of the primary LSP. If a fault occurs on this link, the BFD session on the ingress immediately notifies the forwarding plane of the fault. The ingress switches traffic to the backup CR-LSP and sets up a new BFD session to detect faults on the link of the backup CR-LSP.
BFD for TE Deployment
The networking shown in Figure 1-2206 applies to both BFD for TE CR-LSP and BFD for hot-standby CR-LSP.
On the network shown in Figure 1-2206, a primary CR-LSP is established along the path LSRA -> LSRB, and a hot-standby CR-LSP is configured. A BFD session is set up between LSRA and LSRB to detect faults on the link of the primary CR-LSP. If a fault occurs on the link of the primary CR-LSP, the BFD session rapidly notifies LSRA of the fault. After receiving the fault information, LSRA rapidly switches traffic to the hot-standby CR-LSP to ensure traffic continuity.
BFD for TE Tunnel
BFD for TE supports BFD for TE tunnel and BFD for TE CR-LSP. This section describes BFD for TE tunnel.
The BFD mechanism detects communication faults in links between forwarding engines. It monitors the connectivity of a data protocol on a bidirectional path between systems. The path can be a physical link or a logical link, for example, a TE tunnel.
BFD detects faults in an entire TE tunnel. If a fault is detected and the primary TE tunnel is enabled with virtual private network (VPN) FRR, a traffic switchover is rapidly triggered, which minimizes the impact on traffic.
On a VPN FRR network, a TE tunnel is established between PEs, and the BFD mechanism is used to detect faults in the tunnel. If the BFD mechanism detects a fault, VPN FRR switching is performed in milliseconds.
BFD for P2MP TE
BFD for P2MP TE applies to NG-MVPN and VPLS scenarios and rapidly detects P2MP TE tunnel failures. This function helps reduce the response time, improve network-wide reliability, and reduce traffic loss.
Benefits
No tunnel protection is provided in the NG-MVPN over P2MP TE or VPLS over P2MP TE function. If a tunnel fails, traffic can be switched only through route change-induced hard convergence, which is slow. BFD for P2MP TE provides dual-root 1+1 protection for the NG-MVPN over P2MP TE and VPLS over P2MP TE functions. If a P2MP TE tunnel fails, BFD for P2MP TE rapidly detects the fault and switches traffic, which improves fault convergence performance and reduces traffic loss.
Principles
In Figure 1-2207, BFD is enabled on the root node PE1 and the backup root node PE2. Leaf nodes UPE1 through UPE4 are enabled to passively create BFD sessions. Both PE1 and PE2 send BFD packets to all leaf nodes along P2MP TE tunnels. The leaf nodes receive only the BFD packets transmitted on the primary tunnel. If a leaf node receives BFD packets within a specified interval, the link between the root node and the leaf node is working properly. If a leaf node fails to receive BFD packets within a specified interval, the link between the root node and the leaf node is considered faulty. The leaf node then rapidly switches traffic to a protection tunnel, which reduces traffic loss.
BFD for RSVP
When a Layer 2 device exists on a link between two RSVP nodes, BFD for RSVP can be configured to rapidly detect a fault in the link between the Layer 2 device and an RSVP node. If a link fault occurs, BFD for RSVP detects the fault and sends a notification to trigger TE FRR switching.
Background
Implementation
BFD for RSVP monitors RSVP neighbor relationships.
Unlike BFD for TE CR-LSP and BFD for TE tunnel, which support multi-hop BFD sessions, BFD for RSVP establishes only single-hop BFD sessions between RSVP nodes to monitor the network layer.
BFD for RSVP, BFD for OSPF, BFD for IS-IS, and BFD for BGP can share a BFD session. When protocol-specific BFD parameters are set for a BFD session shared by RSVP and other protocols, the smallest values take effect. The parameters include the minimum intervals at which BFD packets are sent, minimum intervals at which BFD packets are received, and local detection multipliers.
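The following Python sketch is illustrative only (the parameter layout is an assumption, not device configuration). It shows how the effective parameters of a shared BFD session can be derived by taking the smallest value requested by any protocol.

```python
# Illustrative sketch: when RSVP shares a BFD session with OSPF, IS-IS, or BGP,
# the smallest requested value takes effect for each parameter.
def effective_bfd_parameters(requests):
    """requests maps a protocol name to (min_tx_ms, min_rx_ms, detect_multiplier)."""
    return tuple(min(values) for values in zip(*requests.values()))

requests = {
    "RSVP":  (10, 10, 3),
    "IS-IS": (100, 100, 4),
    "BGP":   (50, 50, 3),
}
print(effective_bfd_parameters(requests))  # (10, 10, 3)
```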
Usage Scenario
BFD for RSVP applies to a network on which a Layer 2 device exists between the TE FRR point of local repair (PLR) on a bypass CR-LSP and an RSVP node on the primary CR-LSP.
Benefits
BFD for RSVP improves reliability on MPLS TE networks with Layer 2 devices.
RSVP GR
RSVP graceful restart (GR) is a status recovery mechanism supported by RSVP-TE.
RSVP GR is designed based on non-stop forwarding (NSF). If a fault occurs on the control plane of a node, the upstream and downstream neighbor nodes send messages to restore RSVP soft states, but the forwarding plane does not detect the fault and is not affected. This helps stably and reliably transmit traffic.
RSVP GR uses the Hello extension to detect the neighboring nodes' GR status. For more information about the Hello feature, see RSVP Hello.
RSVP GR principles are as follows:
On the network shown in Figure 1-2209, if the restarter performs GR, it stops sending Hello messages to its neighbors. If the GR-enabled helpers fail to receive three consecutive Hello messages, the helpers consider that the restarter is performing GR and retain all forwarding information. In addition, the interface board continues transmitting services and waits for the restarter to restore the GR status.
After the restarter restarts, if it receives Hello messages from helpers, it replies with Hello ACK messages. The upstream and downstream nodes on a tunnel then send different types of messages to the restarter:
If an upstream helper receives a Hello message, it sends a GR Path message downstream to the restarter.
If a downstream helper receives a Hello message, it sends a Recovery Path message upstream to the restarter.
If both the GR Path and Recovery Path messages are received, the restarter creates the new PSB associated with the CR-LSP. This restores information about the CR-LSP on the control plane.
If no Recovery Path message is sent and only a GR Path message is received, the restarter creates the new PSB associated with the CR-LSP based on the GR Path message. This restores information about the CR-LSP on the control plane.
The NetEngine 8000 F can only function as a GR Helper to help a neighbor node to complete RSVP GR.
Self-Ping
Self-ping is a connectivity check method for RSVP-TE LSPs.
Background
After an RSVP-TE LSP is established, the system sets the LSP status to up, without waiting for forwarding relationships to be completely established between nodes on the forwarding path. If service traffic is imported to the LSP before all forwarding relationships are established, some early traffic may be lost.
Self-ping can address this issue by checking whether an LSP can properly forward traffic.
Implementation
With self-ping enabled, an ingress constructs a UDP packet carrying an 8-byte session ID and adds an IP header to the packet to form a self-ping IP packet. Figure 1-2210 shows the format of a self-ping IP packet. In a self-ping IP packet, the destination IP address is the LSR ID of the ingress, the source IP address is the LSR ID of the egress, the destination port number is 8503, and the source port number is a variable ranging from 49152 to 65535.
Figure 1-2211 shows the self-ping process. In the example network, a P2P RSVP-TE tunnel is established from PE1 to PE2. Each of the numbers 100, 200, and 300 is an MPLS label assigned by a downstream node to its upstream node through RSVP Resv messages.
Self-ping is enabled on PE1 (ingress). After PE1 receives a Resv message, it constructs a self-ping IP packet and forwards the packet along the P2P RSVP-TE LSP. The outgoing label of the packet is 100, same as the label carried in the Resv message. After the self-ping IP packet is forwarded to PE2 (egress) hop by hop, the label is popped out, and the self-ping IP packet is restored.
The destination IP address of the packet is the LSR ID of PE1. PE2 searches the IP routing table for a route matching the destination IP address of the self-ping IP packet, and then sends the packet to PE1 along the matched route. After PE1 receives the self-ping IP packet, PE1 finds a P2P RSVP-TE LSP that matches the session ID carried in the packet. If a matching LSP is found, PE1 considers the LSP normal, sets the LSP status to up, and uses the LSP to transport traffic. The LSP self-ping test is then complete.
If PE1 does not receive the self-ping IP packet, it sends a new self-ping packet. If PE1 does not receive the self-ping IP packet before the detection period expires, it considers the P2P RSVP-TE LSP faulty and does not use the LSP to transport traffic.
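The following Python sketch is an illustrative simulation of the exchange described above, not the device implementation; the function names and data structures are assumptions. It builds a probe keyed by an 8-byte session ID and shows how the ingress matches the returned probe against a pending LSP and marks the LSP up.

```python
# Illustrative simulation of the self-ping check: the ingress builds a UDP probe
# carrying an 8-byte session ID; the egress routes it back by destination IP
# (the ingress LSR ID); the ingress matches the session ID and marks the LSP up.
import os, struct

SELF_PING_DST_PORT = 8503

def build_probe(ingress_lsr_id, egress_lsr_id, session_id):
    """Return the fields of a self-ping probe as a dict (headers not serialized here)."""
    return {
        "src_ip": egress_lsr_id,             # source IP: egress LSR ID
        "dst_ip": ingress_lsr_id,            # destination IP: ingress LSR ID
        "src_port": 49152 + int.from_bytes(os.urandom(2), "big") % (65536 - 49152),
        "dst_port": SELF_PING_DST_PORT,
        "payload": struct.pack("!Q", session_id),  # 8-byte session ID
    }

def ingress_check(pending_sessions, returned_probe):
    """Match a returned probe against pending LSP self-ping sessions."""
    session_id = struct.unpack("!Q", returned_probe["payload"])[0]
    lsp = pending_sessions.get(session_id)
    if lsp is not None:
        lsp["state"] = "up"   # LSP verified; traffic can be switched onto it
    return lsp

pending = {0x1122334455667788: {"name": "Tunnel1 LSP", "state": "down"}}
probe = build_probe("10.1.1.1", "10.2.2.2", 0x1122334455667788)
print(ingress_check(pending, probe))  # {'name': 'Tunnel1 LSP', 'state': 'up'}
```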
Benefits
Self-ping detects the actual status of RSVP-TE LSPs, improving service reliability.
MPLS TE Security
RSVP Authentication
Principles
RSVP messages are sent over raw IP without any security mechanism. They can easily be modified, and a device receiving them is exposed to attacks such as the following:
- An unauthorized remote router sets up an RSVP neighbor relationship with the local router.
- A remote router constructs forged RSVP messages to set up an RSVP neighbor relationship with the local router and then attacks the local router, for example, by maliciously reserving a large amount of bandwidth.
RSVP authentication parameters are as follows:
Key: The same key must be configured on two RSVP nodes before they perform RSVP authentication. A node uses this key to compute a digest for a packet to be sent based on the HMAC-MD5 (Keyed-Hashing for Message Authentication-Message Digest 5) algorithm. The packet carrying the digest as an integrity object is sent to a remote node. After receiving the packet, the remote node uses the same key and algorithm to compute a digest for the packet, and compares the computed digest with the one carried in the packet. If they are the same, the packet is accepted; if they are different, the packet is discarded.
HMAC-MD5 authentication provides relatively low security. For higher security, keychain authentication with a more secure algorithm, such as HMAC-SHA-256, is recommended.
Sequence number: Each packet is assigned a 64-bit, monotonically increasing sequence number before being sent, which prevents replay attacks. After receiving the packet, the remote node checks whether the sequence number is within an allowable window. If the sequence number in the packet is smaller than the lower limit of the window, the receiver considers the packet a replay packet and discards it.
RSVP authentication also introduces handshake messages. If a receiver receives the first packet from a transmit end or packet mis-sequence occurs, handshake messages are used to synchronize the sequence number windows between the RSVP neighboring nodes.
Authentication lifetime: Network flapping causes an RSVP neighbor relationship to be deleted and re-created alternately. Each time the RSVP neighbor relationship is created, the handshake process is performed, which delays the establishment of a CR-LSP. The RSVP authentication lifetime is introduced to resolve this problem. If a network flaps, a CR-LSP is deleted and re-created. During the deletion, the RSVP neighbor relationship associated with the CR-LSP is retained until the RSVP authentication lifetime expires.
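The following Python sketch is illustrative only; it does not reproduce the exact RSVP integrity-object encoding. It combines the two checks described above, a keyed message digest and a monotonically increasing sequence number validated against a window, and uses HMAC-SHA-256 in line with the recommendation to prefer it over HMAC-MD5.

```python
# Illustrative sketch: digest computation and replay-window check for an
# authenticated message (not the on-the-wire RSVP format).
import hmac, hashlib

def sign(key: bytes, seq: int, message: bytes) -> bytes:
    return hmac.new(key, seq.to_bytes(8, "big") + message, hashlib.sha256).digest()

def verify(key: bytes, seq: int, message: bytes, digest: bytes, window_low: int) -> bool:
    # Reject replayed packets whose sequence number falls below the window.
    if seq < window_low:
        return False
    return hmac.compare_digest(sign(key, seq, message), digest)

key = b"shared-rsvp-key"
msg = b"RSVP Path message body"
digest = sign(key, seq=1001, message=msg)
print(verify(key, 1001, msg, digest, window_low=1000))            # True
print(verify(key, 900, msg, sign(key, 900, msg), window_low=1000))  # False: replay
```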
Authentication Key Management
An HMAC-MD5 key is entered in either ciphertext or simple text on an RSVP interface or node. An HMAC-MD5 key has the following characteristics:
- A unique key must be assigned to each protocol.
- A single key is assigned to each interface and node. The key can be reconfigured but cannot be changed.
Key Authentication Configuration Scope
RSVP authentication keys can be configured on RSVP interfaces and nodes.
Local interface-based key
A local interface-based key is configured on an interface. The key takes effect on packets sent and received on this interface.
Neighbor node-based key
A neighbor node-based key is associated with the label switch router (LSR) ID of an RSVP node. The key takes effect on packets sent and received by the local node.
Neighbor address-based key
A neighbor address-based key is associated with the IP address of an RSVP interface. The key takes effect on the following packets:
- Received packets whose source or next-hop address is the same as the configured address
- Sent packets whose destination or next-hop address is the same as the configured address
On an RSVP node, if local interface-based, neighbor node-based, and neighbor address-based keys are all configured, the neighbor address-based key takes effect. If the neighbor address-based key fails, the neighbor node-based key takes effect; if the neighbor node-based key also fails, the local interface-based key takes effect.
Each type of RSVP authentication key applies to specific situations:
Neighbor node-based key usage scenarios:
If multiple links or hops exist between two RSVP nodes, only a neighbor node-based key needs to be configured, which simplifies the configuration. Two RSVP nodes authenticate all packets exchanged between them based on the key.
On a TE FRR network, packets are exchanged on an indirect link between a Point of Local Repair (PLR) node and a Merge Point (MP) node.
Local interface-based key usage scenario
Two RSVP nodes are directly connected and authenticate packets that are sent and received by their directly connected interfaces.
Neighbor address-based key usage scenarios
- Two RSVP nodes cannot obtain the LSR ID of each other (for example, on an inter-domain network).
- The PLR and MP authenticate packets with specified interface addresses.
The keychain key is recommended.
DS-TE
Background
MPLS DS-TE Background
Advantages and disadvantages of MPLS TE
MPLS TE establishes an LSP using available link resources, providing guaranteed bandwidth for specific traffic and preventing congestion both when the network is stable and when failures occur. MPLS TE can also precisely control traffic paths so that existing bandwidth is used efficiently.
MPLS TE, however, cannot provide differentiated QoS guarantees for different types of traffic. For example, consider voice traffic and video traffic. Video frames may be retransmitted over a long period of time, so video traffic may need a higher drop priority than voice traffic. MPLS TE does not classify traffic and therefore assigns voice traffic and video traffic the same drop priority.
Figure 1-2212 MPLS TE
Advantages and disadvantages of the MPLS DiffServ model
The MPLS DiffServ model classifies user services and performs differentiated traffic forwarding behaviors based on the service class, meeting various QoS requirements. The DiffServ model provides good scalability because data streams of multiple services are mapped to only a few CoSs, and the amount of information to be maintained is proportional to the number of data flow types rather than the number of data flows.
The DiffServ model, however, can reserve resources only on a single node. QoS cannot be guaranteed for the entire path.
Disadvantages of using MPLS DiffServ or MPLS TE alone
In some usage scenarios, using MPLS DiffServ or MPLS TE alone cannot meet requirements.
For example, a link carries both voice and data services. To ensure the quality of voice services, you must lower voice traffic delays. The sum delay is calculated based on this formula:
Sum delay = Delay in processing packets + Delay in transmitting packets
The delay in processing packets is calculated based on this formula:
Delay in processing packets = Forwarding delay + Queuing delay
When the path is specified, the delay in transmitting packets remains unchanged. To shorten the sum delay for voice traffic, reduce the delay in processing voice packets on each hop. When traffic congestion occurs, the more packets, the longer the queue, and the higher the delay in processing packets. Therefore, you must restrict the voice traffic on each link.
In Figure 1-2213, the bandwidth of each link is 100 Mbit/s, and all links share the same metric. Voice traffic is transmitted from R1 to R4 and from R2 to R4 at the rate of 60 Mbit/s and 40 Mbit/s, respectively. Traffic from R1 to R4 is transmitted along the LSP over the path R1 → R3 → R4, with the ratio of voice traffic being 60% between R3 and R4. Traffic from R2 to R4 is transmitted along the LSP over the path R2 → R3 → R7 → R4, with the ratio of voice traffic being 40% between R7 and R4.
If the link between R3 and R4 fails, as shown in Figure 1-2214, the LSP between R1 and R4 changes to the path R1 → R3 → R7 → R4 because this path is the shortest path with sufficient bandwidth. At this time, the ratio of voice traffic from R7 to R4 reaches 100%, causing the sum delay of voice traffic to increase.
MPLS DiffServ-Aware Traffic Engineering (DS-TE) can resolve this problem.
What Is MPLS DS-TE?
MPLS DS-TE combines MPLS TE and MPLS DiffServ to provide QoS guarantee.
The class type (CT) is used in DS-TE to allocate resources based on the service class. To provide differentiated services, DS-TE divides the LSP bandwidth into one to eight parts, each part corresponding to a CoS. Such a collection of bandwidths of an LSP or a group of LSPs with the same service class are called a CT. DS-TE maps traffic with the same per-hop behavior (PHB) to one CT and allocates resources to each CT.
Defined by the IETF, DS-TE supports up to eight CTs, marked CTi, in which i ranges from 0 to 7.
If an LSP has a single CT, the LSP is also called a single-CT LSP.
Related Concepts
DS Field
To implement DiffServ, the ToS field in an IPv4 header is redefined in relevant standards as the Differentiated Services (DS) field. In the DS field, six bits are used as the DS codepoint (DSCP), and the remaining two bits are reserved.
PHB
Per-hop behavior (PHB) describes the forwarding treatment applied to packets with the same DSCP. A PHB typically covers traffic characteristics such as delay and packet loss rate.
The IETF defines three standard PHBs: Expedited Forwarding (EF), Assured Forwarding (AF), and Best-Effort (BE). BE is the default PHB.
CT
To provide differentiated services, DS-TE divides the LSP bandwidth into one to eight parts, each part corresponding to a CoS. Such a collection of bandwidths of an LSP or a group of LSPs with the same service class are called a CT. A CT can transmit only the traffic of a CoS.
Defined by the IETF, DS-TE supports up to eight CTs, marked CTi, in which i ranges from 0 to 7.
TE-Class
A TE-class refers to a combination of a CT and a priority, in the format of <CT, priority>. For example, CT0 combined with setup and holding priorities 6 and 7 can form the following TE-classes:
- Class-Type = CT0, setup-priority = 6, holding-priority = 6
- Class-Type = CT0, setup-priority = 7, holding-priority = 6
- Class-Type = CT0, setup-priority = 7, holding-priority = 7
The combination of setup-priority = 6 and hold-priority = 7 does not exist because the setup priority cannot be higher than the holding priority on a CR-LSP.
CTs and priorities can be in any combination. Therefore, there are 64 TE-classes theoretically. The NetEngine 8000 F supports a maximum of eight TE-classes, which are specified by users.
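The following Python sketch is illustrative only (the function names are assumptions). It validates TE-class entries against the constraint above and limits the mapping table to eight entries.

```python
# Illustrative sketch: a TE-class is valid only if the setup priority is not
# higher than the holding priority (numerically, a smaller value means a higher
# priority, so setup_priority must be >= holding_priority).
def is_valid_te_class(ct, setup_priority, holding_priority):
    return (0 <= ct <= 7
            and 0 <= holding_priority <= 7
            and holding_priority <= setup_priority <= 7)

def build_te_class_table(entries, max_entries=8):
    table = [e for e in entries if is_valid_te_class(*e)]
    return table[:max_entries]  # at most eight TE-classes are supported

entries = [(0, 6, 6), (0, 7, 6), (0, 7, 7), (0, 6, 7)]  # the last entry is invalid
print(build_te_class_table(entries))  # [(0, 6, 6), (0, 7, 6), (0, 7, 7)]
```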
DS-TE Modes
DS-TE has two modes:
IETF mode: The IETF mode is defined by the IETF and supports 64 TE-classes by combining 8 CTs and 8 priorities. The NetEngine 8000 F supports up to 8 TE-classes.
Non-IETF mode: The non-IETF mode is not defined by the IETF and supports 8 TE-classes by combining CT0 and 8 priorities.
TE-class mapping table
The TE-class mapping table consists of a group of TE-classes. On the NetEngine 8000 F, the TE-class mapping table consists of a maximum of 8 TE-classes. It is recommended that the same TE-class mapping table be configured on all LSRs on an MPLS network.
BCM
The Bandwidth Constraints Model (BCM) is used to define the maximum number of Bandwidth Constraints (BCs), which CTs can use the bandwidth of each BC, and how to use BC bandwidth.
Implementation
Basic Implementation
A label edge router (LER) of a DiffServ domain sorts traffic into a small number of classes and marks class information in the Differentiated Service Code Point (DSCP) field of packets. When scheduling and forwarding packets, LSRs select Per-Hop Behaviors (PHBs) based on DSCP values.
The EXP field in the MPLS header carries DiffServ information. The key to implementing DS-TE is to map the DSCP value (with a maximum of 64 values) to the EXP field (with a maximum of 8 values). Relevant standards provide the following solutions:
- Label-Only-Inferred-PSC LSP (L-LSP): The discard priority is set in the EXP field, and the PHB type is determined by labels. During forwarding, labels determine the datagram forwarding path and allocate scheduling behaviors.
- EXP-Inferred-PSC LSP (E-LSP): The PHB type and the discard priority are set in the EXP field in an MPLS label. During forwarding, labels determine the datagram forwarding path, and the EXP field determines PHBs. E-LSPs are applicable to a network that supports no more than eight PHBs.
The NetEngine 8000 F supports E-LSPs. The mapping from the DSCP value to the EXP field complies with the definition of relevant standards. The mapping from the EXP field to the PHB is manually configured.
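The following Python sketch is illustrative only: the DSCP-to-EXP mapping shown (keeping the three class-selector bits) is a common convention rather than a guaranteed device default, and the EXP-to-PHB table is an assumed example of a manually configured mapping.

```python
# Illustrative sketch: collapsing the 6-bit DSCP into the 3-bit EXP field and
# then looking up a manually configured EXP-to-PHB mapping (example values only).
def dscp_to_exp(dscp: int) -> int:
    return (dscp >> 3) & 0b111      # e.g. EF (46) -> 5, AF41 (34) -> 4, BE (0) -> 0

EXP_TO_PHB = {0: "BE", 1: "AF1", 2: "AF2", 3: "AF3", 4: "AF4", 5: "EF", 6: "CS6", 7: "CS7"}

for dscp in (46, 34, 0):
    exp = dscp_to_exp(dscp)
    print(dscp, exp, EXP_TO_PHB[exp])
```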
The class type (CT) is used in DS-TE to allocate resources based on the class of traffic. DS-TE maps traffic with the same PHB to one CT and allocates resources to each CT. Therefore, DS-TE LSPs are established based on CTs. Specifically, when DS-TE calculates an LSP, it needs to take CTs and obtainable bandwidth of each CT as constraints; when DS-TE reserves resources, it also needs to consider CTs and their bandwidth requirements.
IGP Extension
To support DS-TE, related standards extend an IGP by introducing an optional sub-TLV (Bandwidth Constraints sub-TLV) and redefining the original sub-TLV (Unreserved Bandwidth sub-TLV). This helps inform and collect information about reservable bandwidths of CTs with different priorities.
RSVP Extension
To implement IETF DS-TE, the IETF extends RSVP by defining a CLASSTYPE object for the Path message in related standards. For details about CLASSTYPE objects, see related standards.
After an LSR along an LSP receives an RSVP Path message carrying CT information, an LSP is established if resources are sufficient. After the LSP is successfully established, the LSR recalculates the reservable bandwidth of CTs with different priorities. The reservation information is sent to the IGP module to advertise to other nodes on the network.
BCM
Currently, the IETF defines the following bandwidth constraint models (BCMs):
Maximum Allocation Model (MAM): maps a BC to a CT. CTs do not share bandwidth resources. The BC mode ID of the MAM is 1.
Figure 1-2215 MAM
In the MAM, the sum of CTi LSP bandwidths does not exceed the BCi (0 ≤ i ≤ 7) bandwidth, and the sum of the bandwidths of all LSPs of all CTs does not exceed the maximum reservable bandwidth of the link.
Assume that a link with the bandwidth of 100 Mbit/s adopts the MAM and supports three CTs (CT0, CT1, and CT2). BC0 (20 Mbit/s) carries CT0 (BE flows); BC1 (50 Mbit/s) carries CT1 (AF flows); BC2 (30 Mbit/s) carries CT2 (EF flows). In this case, the total reserved LSP bandwidths that are used to transmit BE flows cannot exceed 20 Mbit/s; the total reserved LSP bandwidths that are used to transmit AF flows cannot exceed 50 Mbit/s; the total reserved LSP bandwidths that are used to transmit EF flows cannot exceed 30 Mbit/s.
In the MAM, bandwidth preemption between CTs does not occur but some bandwidth resources may be wasted.
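The following Python sketch is illustrative only (not device admission-control code). It applies the MAM rules from the example above: a CT reservation must fit within its own BC, and the total reservation must fit within the link's maximum reservable bandwidth.

```python
# Illustrative sketch: MAM admission check using the 100 Mbit/s link example
# (BC0 = 20, BC1 = 50, BC2 = 30).
def mam_admit(reserved, bc, max_reservable, ct, request):
    """reserved: per-CT reserved bandwidth; bc: per-CT bandwidth constraint (BCi)."""
    new_ct_total = reserved.get(ct, 0) + request
    new_link_total = sum(reserved.values()) + request
    return new_ct_total <= bc[ct] and new_link_total <= max_reservable

bc = {0: 20, 1: 50, 2: 30}          # BC0/BC1/BC2 in Mbit/s
reserved = {0: 15, 1: 40, 2: 30}    # bandwidth already reserved per CT
print(mam_admit(reserved, bc, 100, ct=0, request=5))   # True: CT0 stays within BC0
print(mam_admit(reserved, bc, 100, ct=2, request=5))   # False: BC2 (30 Mbit/s) is exhausted
```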
Russian Dolls Model (RDM): allows CTs to share bandwidth resources. The BC mode ID of the RDM is 0.
The bandwidth of BC0 is less than or equal to maximum reservable bandwidth of the link. Nesting relationships exist among BCs. As shown in Figure 1-2216, the bandwidth of BC7 is fixed; the bandwidth of BC6 nests the bandwidth of BC7; this relationship applies to the other BCs, and therefore the bandwidth of BC0 nests the bandwidth of all BCs. This model is similar to a Russian doll: A large doll nests a smaller doll and then this smaller doll nests a much smaller doll, and so on.
Assume that a link with the bandwidth of 100 Mbit/s adopts the RDM and supports three BCs. CT0, CT1, and CT2 are used to transmit BE flows, AF flows, and EF flows, respectively. The bandwidths of BC0, BC1, and BC2 are 100 Mbit/s, 50 Mbit/s, and 20 Mbit/s, respectively. In this case, the total LSP bandwidths that are used to transmit EF flows cannot exceed 20 Mbit/s; the total LSP bandwidths that are used to transmit EF flows and AF flows cannot exceed 50 Mbit/s; the total LSP bandwidths that are used to transmit BE, AF, and EF flows cannot exceed 100 Mbit/s.
The RDM allows bandwidth preemption among CTs. The preemption relationship is as follows: if 0 ≤ m < n ≤ 7 and 0 ≤ i < j ≤ 7, a CTi LSP with priority m can preempt the bandwidth of a CTi LSP with priority n and the bandwidth of a CTj LSP with priority n. The total LSP bandwidth of CTi, however, does not exceed the bandwidth of BCi.
In the RDM, bandwidth resources are used efficiently.
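The following Python sketch is illustrative only (not device admission-control code). It applies the RDM nesting rule: for every BCj that encloses the requested CT, the combined reservations of CTj and all higher CTs, plus the new request, must stay within BCj.

```python
# Illustrative sketch: RDM admission check using the 100 Mbit/s link example
# (BC0 = 100, BC1 = 50, BC2 = 20), with nested (Russian-doll) constraints.
def rdm_admit(reserved, bc, ct, request):
    """reserved: per-CT reserved bandwidth; bc: nested constraints, BC0 = max reservable."""
    top_ct = max(bc)  # highest CT index covered by the configured BCs
    for j in range(0, ct + 1):
        # CTj..CTtop together, including the new request, must stay within BCj.
        nested_total = sum(reserved.get(i, 0) for i in range(j, top_ct + 1)) + request
        if nested_total > bc[j]:
            return False
    return True

bc = {0: 100, 1: 50, 2: 20}         # BC0 >= BC1 >= BC2
reserved = {0: 40, 1: 25, 2: 15}    # bandwidth already reserved per CT (Mbit/s)
print(rdm_admit(reserved, bc, ct=2, request=5))    # True: 20 <= BC2, 45 <= BC1, 85 <= BC0
print(rdm_admit(reserved, bc, ct=1, request=15))   # False: CT1 + CT2 would reach 55 > BC1
```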
Differences Between the IETF Mode and Non-IETF Mode
If bandwidth constraints or CT reserved bandwidth are configured for a tunnel, the IETF and non-IETF modes cannot be switched to each other.
Item | Non-IETF Mode | IETF Mode
---|---|---
Bandwidth model | N/A | Supports the RDM and MAM.
CT type | Supports CT0. | Supports CT0 through CT7.
BC type | Supports BC0. | Supports BC0 through BC7.
TE-class mapping table | The TE-class mapping table can be configured but does not take effect. | The TE-class mapping table can be configured and takes effect.
IGP message | The priority-based reservable bandwidth is carried in the Unreserved Bandwidth sub-TLV. | The CT information is carried in the Unreserved Bandwidth sub-TLV and Bandwidth Constraints sub-TLV.
RSVP message | The CT information is carried in the ADSPEC object. | The CT information is carried in the CLASSTYPE object.
Entropy Label
An entropy label is used only to improve load balancing performance. It is not assigned through protocol negotiation and is not used to forward packets. Entropy labels are generated using IP information on ingresses. The entropy label value cannot be set to a reserved label value in the range of 0 to 15. The entropy label technique extends the RSVP protocol and uses a set of mechanisms to improve load balancing in traffic forwarding.
Background
As user networks and the scope of network services continue to expand, load-balancing techniques are used to improve bandwidth between nodes. If tunnels are used for load balancing, transit nodes (P) obtain IP content carried in MPLS packets as a hash key. If a transit node cannot obtain the IP content from MPLS packets, the transit node can only use the top label in the MPLS label stack as a hash key. The top label in the MPLS label stack cannot differentiate underlying protocols in detail. As a result, the top MPLS labels are not distinguished when being used as hash keys, resulting in load imbalance. Per-packet load balancing can be used to prevent load imbalance but results in packets being delivered out of sequence. This drawback adversely affects service experience. To address the problems, the entropy label feature can be configured to improve load balancing performance.
Implementation
An entropy label is generated on an ingress LSR, and it is only used to enhance the ability to load-balance traffic. To help the egress distinguish the entropy label generated by the ingress from application labels, an identifier label of 7 is added before an entropy label in the MPLS label stack.
The ingress LSR generates an entropy label and encapsulates it into the MPLS label stack. Before the ingress LSR encapsulates packets with MPLS labels, it can easily obtain IP or Layer 2 protocol data for use as a hash key. If the ingress LSR identifies the entropy label capability, it uses IP information carried in packets to compute an entropy label, adds it to the MPLS label stack, and advertises it to the transit node (P). The P uses the entropy label as a hash key to load-balance traffic and does not need to parse IP data inside MPLS packets.
The entropy label is negotiated using RSVP for improved load balancing. The entropy label is pushed into packets by the ingress and removed by the egress. Therefore, the egress needs to notify the ingress of the support for the entropy label capability.
- Egress: If the egress can parse an entropy label, the egress extends a RESV message by adding an entropy label capability TLV into the message. The egress sends the message to notify upstream nodes, including the ingress, of the local entropy label capability.
- Transit node: sends a RESV message to upstream nodes to transparently transmit the downstream nodes' entropy label capability. If load balancing is enabled, the RESV messages sent by the transit node carry the entropy label capability TLV only if all downstream nodes have the capability. If a transit node does not identify the entropy label capability TLV, it transparently transmits the TLV following the unknown-TLV processing procedure.
- Ingress: determines whether to add an entropy label into packets to improve load balancing based on the entropy label capability advertised by the egress.
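The following minimal sketch shows the general idea of how an ingress might derive an entropy label from the IP information in a packet and push it beneath the tunnel label, with the identifier label 7 in front of it. The hash function and the choice of header fields are assumptions made for this illustration, not the device's actual algorithm.

```python
# Illustrative entropy label computation and label stack construction.
import zlib

ELI = 7              # identifier label placed before the entropy label
RESERVED_MAX = 15    # the entropy label must not be a reserved label (0-15)

def entropy_label(src_ip, dst_ip, proto, src_port, dst_port):
    """Hash the IP 5-tuple into a 20-bit label value outside the reserved range."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    value = zlib.crc32(key) & 0xFFFFF          # 20-bit MPLS label space
    if value <= RESERVED_MAX:
        value += RESERVED_MAX + 1
    return value

def push_labels(tunnel_label, five_tuple):
    """Label stack from top to bottom: tunnel label, identifier label, entropy label."""
    return [tunnel_label, ELI, entropy_label(*five_tuple)]

print(push_labels(1024, ("10.1.1.1", "10.2.2.2", 6, 5000, 80)))
```

A transit node that supports the feature then hashes on the entropy label instead of parsing the IP content inside the MPLS packet.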
Application Scenarios
- On the network shown in Figure 1-2217, entropy labels are used when load balancing is performed among transit nodes.
- On the network shown in Figure 1-2218, the entire tunnel has the entropy label capability only when both the primary and backup paths of the tunnel have the entropy label capability. An RSVP-TE session is established between each pair of directly connected devices (P1 through P4). On P1, for the tunnel to P3, the primary LSP is P1–>P3, and the backup LSP is P1–>P2–>P4–>P3. On P2, for the tunnel to P3, the primary LSP is P2–>P4–>P3, and the backup LSP is P2–>P1–>P3. In this example, P1 and P2 are the downstream nodes of each other's backup path. Assume that the entropy label capability is enabled on P3 and this device sends a RESV message carrying the entropy label capability to P1 and P4. After receiving the message, P1 checks whether the entire LSP to P3 has the entropy label capability. Because the path P1–>P2 does not have the entropy label capability, P1 considers that the LSP to P3 does not have the entropy label capability. As a result, P1 does not send a RESV message carrying the entropy label capability to P2. P2 performs the same check after receiving a RESV message carrying the entropy label capability from P4. If the path P2–>P1 does not have the entropy label capability, P2 also considers that the LSP to P3 does not have the entropy label capability.
- The entropy label feature applies to public network MPLS tunnels in service scenarios such as IPv4/IPv6 over MPLS, L3VPNv4/v6 over MPLS, VPLS/VPWS over MPLS, and EVPN over MPLS.
Benefits
Entropy labels help achieve more even load balancing.
Checking the Source Interface of a Static CR-LSP
A device uses the static CR-LSP's source interface check function to check whether the inbound interface of labeled packets is the same as that of a configured static CR-LSP. If the inbound interfaces match, the device forwards the packets. If the inbound interfaces do not match, the device discards the packets.
Background
A static CR-LSP is established using manually configured forwarding and resource information. Signaling protocols and path calculation are not used during the setup of static CR-LSPs. Setting up a static CR-LSP consumes only a few resources because the two ends of the CR-LSP do not need to exchange MPLS control packets. However, a static CR-LSP cannot be adjusted dynamically when the network topology changes. A static CR-LSP configuration error may cause protocol packets of different NEs and statuses to interfere with one another, which adversely affects services. To address this problem, a device can be enabled to check the source interfaces of static CR-LSPs. With this function configured, the device forwards packets only if both the labels and the inbound interfaces are correct.
Principles
In Figure 1-2219, static CR-LSP1 is configured, with PE1 functioning as the ingress, the P as a transit node, and PE2 as the egress. The P's inbound interface connected to PE1 is Interface1, and the incoming label is Label1. The configuration of static CR-LSP2 remains on PE3, which functions as the ingress of CR-LSP2. The P's inbound interface connected to PE3 is Interface2, and the incoming label is also Label1. If PE3 sends traffic along CR-LSP2 and Interface2 on the P receives the traffic, the P checks the inbound interface information and finds that the traffic carries Label1 but the inbound interface is not Interface1. Consequently, the P discards the traffic.
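The following minimal sketch illustrates the check described above: the P forwards a labeled packet only if its incoming label and inbound interface both match the configured static CR-LSP. The table contents and names are the example values used above.

```python
# Illustrative source interface check for a static CR-LSP.
static_crlsp_table = {
    # incoming label: expected inbound interface
    "Label1": "Interface1",
}

def process(in_label, in_interface):
    expected = static_crlsp_table.get(in_label)
    if expected is None or in_interface != expected:
        return "discard"     # unknown label or source interface check fails
    return "forward"

print(process("Label1", "Interface1"))   # forward (traffic from PE1 over CR-LSP1)
print(process("Label1", "Interface2"))   # discard (traffic from PE3 over CR-LSP2)
```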
Static Bidirectional Co-routed LSPs
A co-routed bidirectional static CR-LSP is an important feature that enables LSP ping messages, LSP tracert messages, and OAM messages and replies to travel through the same path.
Background
On a transport network, service packets exchanged between two nodes must travel through the same links and nodes without a routing protocol being run. Co-routed bidirectional static CR-LSPs can be used to meet this requirement.
Definition
A co-routed bidirectional static CR-LSP is a type of CR-LSP over which two flows are transmitted in opposite directions over the same links. A co-routed bidirectional static CR-LSP is established manually.
A co-routed bidirectional static CR-LSP differs from two LSPs that transmit traffic in opposite directions. The two unidirectional CR-LSPs bound to a co-routed bidirectional static CR-LSP function as a single CR-LSP, and two forwarding tables are used to forward traffic in opposite directions. The co-routed bidirectional static CR-LSP can go Up only when the conditions for forwarding traffic in both directions are met. If the conditions for forwarding traffic in either direction are not met, the bidirectional CR-LSP is in the Down state. Even if no IP forwarding capability is enabled on the bidirectional CR-LSP, any intermediate node on the bidirectional LSP can reply with a packet along the original path. A co-routed bidirectional static CR-LSP provides consistent delay and jitter for packets transmitted in opposite directions, which guarantees QoS for traffic in both directions.
Implementation
A bidirectional co-routed static CR-LSP is manually established. A user manually specifies labels and forwarding entries mapped to two FECs for traffic transmitted in opposite directions. The outgoing label of a local node (also known as an upstream node) is equal to the incoming label of a downstream node of the local node.
A node on a co-routed bidirectional static CR-LSP only has information about the local LSP and cannot obtain information about nodes on the other LSP. A co-routed bidirectional static CR-LSP shown in Figure 1-2220 consists of a CR-LSP and a reverse CR-LSP. The CR-LSP originates from the ingress and terminates on the egress. Its reverse CR-LSP originates from the egress and terminates on the ingress.
- On the ingress, configure a tunnel interface and enable MPLS TE on the outbound interface of the ingress. If the outbound interface is Up and its available bandwidth is higher than the bandwidth to be reserved, the static bidirectional co-routed CR-LSP can go Up, regardless of the existence of transit nodes or the egress.
- On each transit node, enable MPLS TE on the outbound interface of the bidirectional CR-LSP. If the outbound interface is Up and its available bandwidth is higher than the bandwidth to be reserved for the forward and reverse CR-LSPs, the static bidirectional co-routed CR-LSP can go Up, regardless of the existence of the ingress, other transit nodes, or the egress.
- On the egress, enable MPLS TE on the inbound interface. If the inbound interface is Up and its available bandwidth is higher than the bandwidth to be reserved for the bidirectional CR-LSP, the static bidirectional co-routed CR-LSP can go Up, regardless of the existence of the ingress or transit nodes.
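A minimal sketch of the label continuity rule stated in the Implementation section above: in each direction, the outgoing label configured on a node must equal the incoming label configured on its downstream node. The node names and label values are assumptions made for this illustration.

```python
# Illustrative check of label continuity along one direction of a
# static bidirectional co-routed CR-LSP.
forward_lsp = [
    {"node": "Ingress", "out_label": 100},
    {"node": "Transit", "in_label": 100, "out_label": 200},
    {"node": "Egress",  "in_label": 200},
]

def labels_consistent(lsp):
    """True if every node's outgoing label equals its downstream node's incoming label."""
    return all(up["out_label"] == down["in_label"] for up, down in zip(lsp, lsp[1:]))

print(labels_consistent(forward_lsp))   # True
```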
Loopback Detection for a Static Bidirectional Co-Routed CR-LSP
On a network where a static bidirectional co-routed CR-LSP transmits services, if a small number of packets are dropped or bit errors occur on links, no alarms indicating link or LSP failures are generated, which makes the faults difficult to locate. To locate such faults, loopback detection can be enabled for the static bidirectional co-routed CR-LSP.
Loopback detection for a specified static bidirectional co-routed CR-LSP locates faults when a small number of packets are dropped or bit errors occur on links along the CR-LSP. To implement loopback detection, a transit node temporarily connects the forward CR-LSP to the reverse CR-LSP and generates a forwarding entry for the loop so that it can loop all traffic back to the ingress. A professional monitoring device connected to the ingress compares the data packets that the ingress sends and receives and checks whether a fault has occurred on the link between the ingress and the transit node.
Loopback detection uses a dichotomy method: the range of nodes to be checked is progressively narrowed until the faulty link is located. For example, in Figure 1-2221, loopback detection is enabled for a static bidirectional co-routed CR-LSP established between PE1 (ingress) and PE2 (egress). The fault is located as follows:
1. Loopback is enabled on P1 to loop data packets back to the ingress. The ingress checks whether the sent packets match the received ones.
   - If the packets do not match, a fault has occurred on the link between PE1 and P1. Loopback detection can then be disabled on P1.
   - If the packets match, the link between PE1 and P1 is working properly, and fault location continues.
2. Loopback is disabled on P1 and enabled on P2 to loop data packets back to the ingress. The ingress checks whether the sent packets match the received ones.
   - If the packets do not match, a fault has occurred on the link between P1 and P2. Loopback detection can then be disabled on P2.
   - If the packets match, a fault has occurred on the link between P2 and PE2. Loopback detection can then be disabled on P2.
Loopback detection information is not saved in the configuration file after loopback detection is enabled. A loopback detection-enabled node loops traffic back to the ingress through a temporary loop, and loopback alarms are generated to remind users that loopback detection is in progress. After loopback detection finishes, it can be disabled manually or automatically. The loopback detection configuration takes effect only on the main control board; after a master/slave main control board switchover is performed, loopback detection is automatically disabled.
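The following minimal sketch mirrors the dichotomy procedure above: loopback is enabled on one transit node at a time, and the ingress compares sent and received packets. The `link_ok` callback stands in for the monitoring device and is an assumption made for this illustration.

```python
# Illustrative dichotomy (binary-search) fault location over the transit nodes.
def locate_fault(transit_nodes, link_ok):
    """transit_nodes: nodes ordered from ingress to egress.
    link_ok(node): True if packets looped back at `node` match the sent packets,
    i.e. the segment from the ingress up to `node` is healthy."""
    low, high = 0, len(transit_nodes) - 1
    candidate = "link between the last healthy transit node and the egress"
    while low <= high:
        mid = (low + high) // 2
        if link_ok(transit_nodes[mid]):    # segment up to this node is fine
            low = mid + 1
        else:                              # fault lies at or before this node
            candidate = f"segment ending at {transit_nodes[mid]}"
            high = mid - 1
    return candidate

# Example: the fault is on the link between P1 and P2.
print(locate_fault(["P1", "P2", "P3"], lambda node: node == "P1"))
```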
Associated Bidirectional CR-LSPs
Associated bidirectional CR-LSPs provide bandwidth protection for bidirectional services. Bidirectional switching can be performed for associated bidirectional CR-LSPs if faults occur.
Background
- Traffic congestion: RSVP-TE tunnels are unidirectional. The ingress forwards services to the egress along an RSVP-TE tunnel, whereas the egress forwards services back to the ingress over IP routes. As a result, the return services may be congested because the IP links do not reserve bandwidth for them.
- Traffic interruptions: Two MPLS TE tunnels in opposite directions are established between the ingress and egress. If a fault occurs on one MPLS TE tunnel, a traffic switchover can be performed only for the faulty tunnel, not for the reverse tunnel. As a result, traffic is interrupted.
A forward CR-LSP and a reverse CR-LSP are established between two nodes. Each CR-LSP is bound to its reverse CR-LSP on the ingress, and the two CR-LSPs then form an associated bidirectional CR-LSP. The associated bidirectional CR-LSP is mainly used to prevent traffic congestion. If a fault occurs on one end, the other end is notified of the fault so that both ends trigger traffic switchovers, which keeps traffic transmission uninterrupted.
Implementation
MPLS TE Tunnel1 and Tunnel2 are established using RSVP-TE signaling or manually.
The tunnel ID and ingress LSR ID of the reverse CR-LSP are specified on each tunnel interface so that the forward and reverse CR-LSPs are bound to each other. For example, in Figure 1-2222, set the reverse tunnel ID to 200 and ingress LSR ID to 4.4.4.4 on Tunnel1 so the reverse tunnel is bound to Tunnel1.
The ingress LSR ID of the reverse CR-LSP is the same as the egress LSR ID of the forward CR-LSP.
- Penultimate hop popping (PHP) is not supported on associated bidirectional CR-LSPs.
The forward and reverse CR-LSPs can be established over the same path or over different paths. Establishing them over the same path is recommended to ensure a consistent delay in both directions.
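The following minimal sketch shows how the binding described above identifies the reverse CR-LSP by its tunnel ID and ingress LSR ID. The reverse tunnel ID 200 and ingress LSR ID 4.4.4.4 follow the example; the other names and values are assumptions made for this illustration.

```python
# Illustrative pairing of a forward tunnel with its reverse tunnel.
forward_tunnel = {
    "name": "Tunnel1", "ingress": "1.1.1.1", "egress": "4.4.4.4",
    "reverse": {"tunnel_id": 200, "ingress": "4.4.4.4"},
}
candidate_tunnels = [
    {"name": "Tunnel2", "tunnel_id": 200, "ingress": "4.4.4.4", "egress": "1.1.1.1"},
]

def find_reverse(forward, candidates):
    """The reverse CR-LSP's ingress LSR ID must equal the forward CR-LSP's egress
    LSR ID, and its tunnel ID must match the configured reverse tunnel ID."""
    for cand in candidates:
        if (cand["tunnel_id"] == forward["reverse"]["tunnel_id"]
                and cand["ingress"] == forward["reverse"]["ingress"]
                and cand["ingress"] == forward["egress"]):
            return cand["name"]
    return None

print(find_reverse(forward_tunnel, candidate_tunnels))   # Tunnel2
```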
Usage Scenario
- An associated bidirectional static CR-LSP transmits services and returned OAM packets on MPLS-TP networks.
- An associated bidirectional dynamic CR-LSP is used on an RSVP-TE network when bit-error-triggered switching is used.
CBTS
Class-of-service based tunnel selection (CBTS) is a method of selecting a TE tunnel. Unlike the traditional method of load-balancing services on TE tunnels, CBTS selects tunnels based on services' priorities so that high quality resources can be provided for services with higher priority. In addition, FRR and HSB can be configured for TE tunnels selected by CBTS. For more information about FRR and HSB, see the section Configuration - MPLS - MPLS TE Configuration - Configuring MPLS TE Manual FRR and Configuration - MPLS - MPLS TE Configuration - Configuring CR-LSP Backup.
Background
Existing networks may fail to provide exclusive high-quality transmission resources for higher-priority services. This is because TE tunnels are selected based on public network routes or VPN routes, which causes a node to select the same tunnels for services that are destined for the same IP address or VPN but have different priorities.
Traffic classification can be configured on CBTS-capable devices to match incoming services and map traffic of different services to different priorities. A rule can be enforced based on traffic characteristics. For BGP routes, a QoS Policy Propagation Through the Border Gateway Protocol (QPPB) rule can be enforced based on BGP community attributes from the source device of the routes.
Service class attributes can be configured on a tunnel to which services recurse so that the tunnel can transmit services with one or more priorities. Services with specified priorities can only be transmitted on such tunnels instead of being load-balanced by all tunnels to which they may recurse. The default service class attribute can also be configured for tunnels to carry services of non-specified priorities.
Implementation
Figure 1-2223 illustrates CBTS principles. TE tunnels between LSRA and LSRB balance services, including high-priority voice services, medium-priority Ethernet data services, and common data services. The following operations are performed to use different TE tunnels to carry these services:
- Service classes EF, AF1+AF2, and default are configured for the three TE tunnels, respectively.
- Multi-field classification is configured on the PE to map voice services to EF and map Ethernet services to AF1 or AF2.
- Voice services are transmitted along the TE tunnel that is assigned the EF service class, Ethernet services along the TE tunnel that is assigned the AF1+AF2 service class, and other services along the TE tunnel that is assigned the default service class.
The default service class is not mandatory. If it is not configured, services that do not match any configured service class are transmitted along a tunnel that is assigned no service class. If every tunnel is configured with a service class, such services are transmitted along the tunnel whose service class maps to the lowest priority.
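A minimal sketch of the CBTS selection logic described above: each tunnel is assigned one or more service classes, one tunnel may be marked as the default, and a service's class selects the tunnel. Tunnel names and classes are assumptions made for this illustration.

```python
# Illustrative CBTS tunnel selection.
tunnels = {
    "Tunnel1": {"classes": {"EF"}},                  # voice
    "Tunnel2": {"classes": {"AF1", "AF2"}},          # Ethernet data
    "Tunnel3": {"classes": set(), "default": True},  # everything else
}

def select_tunnel(service_class):
    # Prefer a tunnel whose service classes include the service's class.
    for name, attrs in tunnels.items():
        if service_class in attrs["classes"]:
            return name
    # Otherwise fall back to the tunnel marked as default, if any.
    for name, attrs in tunnels.items():
        if attrs.get("default"):
            return name
    return None   # no default configured: the fallback behavior is described above

print(select_tunnel("EF"))    # Tunnel1
print(select_tunnel("AF1"))   # Tunnel2
print(select_tunnel("BE"))    # Tunnel3
```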
Application Scenarios
- TE tunnels or LDP over TE tunnels functioning as public network tunnels are deployed for load balancing on a PE.
- L3VPN, VLL, and VPLS services are configured on a PE. Inter-AS VPN services are not supported.
- LDP over TE or TE tunnels are established to load-balance services on a P.
- The TE tunnel includes two types: RSVP-TE tunnel and SR-MPLS TE tunnel.
P2MP TE
Point-to-Multipoint (P2MP) Traffic Engineering (TE) is a promising solution to multicast service transmission. It helps carriers provide high TE capabilities and increased reliability on an IP/MPLS backbone network and reduce network operational expenditure (OPEX).
Background
- IP multicast technology: deployed on a live P2P network by upgrading software, which reduces upgrade and maintenance costs. However, IP multicast, like IP unicast, does not support QoS or traffic planning and cannot provide high reliability. Because multicast applications have high requirements on real-time transmission and reliability, IP multicast cannot meet these requirements.
- Dedicated multicast network: deployed using ATM or SONET/SDH technologies, which provide high reliability and transmission rates. However, the construction of a private network requires a large amount of investment and independent maintenance, resulting in high operation costs.
P2MP TE addresses these issues. It combines the high transmission efficiency of IP multicast with the end-to-end QoS guarantee of MPLS TE, providing a sound solution for multicast services on IP/MPLS backbone networks. P2MP TE establishes a tree-shaped tunnel from an ingress to multiple destinations and reserves bandwidth for multicast packets along the tunnel, which provides bandwidth and QoS guarantee for multicast traffic on the tunnel. P2MP TE tunnels also support fast reroute (FRR), which provides high reliability for multicast services.
Benefits
- Optimizes the utilization of network bandwidth resources.
- Guarantees bandwidth required by multicast services.
- Eliminates the need to use multicast protocols, such as Protocol Independent Multicast (PIM), on backbone core nodes, simplifying network deployment.
Related Concepts
| Name | Description | Example |
| --- | --- | --- |
| Ingress | A node on which the inbound interface of a P2MP TE tunnel is located. The ingress calculates a tunnel path, establishes a tunnel over the path, and pushes MPLS labels into multicast packets. | PE1 in Figure 1-2224 |
| Transit node | A relay node that processes P2MP TE tunnel signaling and forwards packets. A transit node swaps labels in MPLS packets and may become a branch point. | P1 and P3 in Figure 1-2224 |
| Egress | The destination node of a P2MP TE tunnel, which is also called a leaf node. | PE3, PE4, and PE5 in Figure 1-2224 |
| Sub-LSP | An LSP that originates from an ingress and is destined for a single egress. It is also known as a source-to-leaf (S2L) sub-LSP. A P2MP TE tunnel consists of one or more sub-LSPs. | PE1–>P3–>P4–>PE4 in Figure 1-2224 |
| Bud node | A node functioning as both an egress of a sub-LSP and a transit node of other sub-LSPs. This node is connected to a device on the user side. NOTE: A P2MP bud node has low forwarding performance because it has to replicate traffic to directly connected client-side devices. | PE2 in Figure 1-2224 |
| Branch node | A branch node is a type of transit node. It replicates MPLS packets and then swaps labels. | P4 in Figure 1-2224 |
P2MP TE Tunnel Establishment
Similar to P2P TE tunnels, P2MP TE tunnels depend on IGP TE to advertise bandwidth information. However, their path calculation and establishment processes differ from those of P2P TE tunnels. Currently, P2MP TE tunnels cannot be established across IGP domains or ASs.
Path calculation and planning
P2MP TE uses CSPF to calculate a path that originates from the ingress and is destined for each leaf node. The calculated paths form a shortest path tree. P2MP TE supports the explicit path technique. You can plan explicit paths for a single leaf node, some leaf nodes, or all leaf nodes. The explicit path technique facilitates path planning, but causes the following problems:
- Re-merge event: occurs when two sub-LSPs have different inbound interfaces but the same outbound interface on a transit node. Figure 1-2225 shows that a re-merge event occurs on the node shared by two paths. If a P2MP TE tunnel is established over such a path, duplicate MPLS packets are transmitted on the overlapping node.
- Crossover event: occurs when two sub-LSPs have different inbound and outbound interfaces on a transit node. Figure 1-2225 shows that a crossover event occurs on the node shared by the paths. If a P2MP TE tunnel is established over such a path, double bandwidth is consumed on the overlapping node.
The ingress refuses to establish a tunnel in either of the preceding situations. In either case, you need to modify the explicit path of the leaf node.
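The following minimal sketch classifies the two problems described above for a pair of sub-LSPs that traverse the same transit node: a re-merge event (different inbound interfaces, same outbound interface) and a crossover event (different inbound and outbound interfaces). Interface names are assumptions made for this illustration.

```python
# Illustrative classification of re-merge and crossover events on a shared transit node.
def classify_overlap(sub_lsp_a, sub_lsp_b):
    """Each argument holds the inbound/outbound interfaces that a sub-LSP uses
    on the shared transit node."""
    if sub_lsp_a["in"] == sub_lsp_b["in"]:
        return "normal branching"   # the sub-LSPs share the incoming path
    if sub_lsp_a["out"] == sub_lsp_b["out"]:
        return "re-merge"           # duplicate MPLS packets on the shared node
    return "crossover"              # double bandwidth consumed on the shared node

print(classify_overlap({"in": "if1", "out": "if3"}, {"in": "if2", "out": "if3"}))  # re-merge
print(classify_overlap({"in": "if1", "out": "if3"}, {"in": "if2", "out": "if4"}))  # crossover
```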
Path establishment
RSVP is extended to support P2MP TE tunnel establishment. Similar to a P2P TE tunnel, a P2MP TE tunnel is established using Path and Resv messages that carry signaling information. RSVP-TE signaling messages travel from the ingress to the leaf nodes along explicit paths and then return from the leaf nodes to the ingress. In this signaling process, bandwidth is reserved and an LSP is set up. After the ingress receives signaling messages from all leaf nodes, the P2MP TE tunnel is successfully established. Figure 1-2226 demonstrates the process of establishing a P2MP TE tunnel. After receiving a Path message, each leaf node replies with a Resv message along the reverse path. The Resv message carries the label assigned by the downstream node to its upstream node. Different from P2P TE, P2MP TE shares the incoming label on the branch node P. For details, see Figure 1-2227. Table 1-929 describes how each node generates and processes Resv messages.

Table 1-929 P2MP TE tunnel establishment process

| Node | Event | Processing | Generated Forwarding Entry |
| --- | --- | --- | --- |
| PE2 | Receives a Path message sent by the P. | Allocates the label LE21 and sends a Resv message to the P. | in-label: LE21; action: POP |
| PE3 | Receives a Path message sent by the P. | Allocates the label LE31 and sends a Resv message to the P. | in-label: LE31; action: POP |
| P | Receives the Resv message sent by PE2. | Reserves bandwidth on the outbound interface connected to PE2, assigns the label LE11, and sends a Resv message to PE1. | in-label: LE11; action: SWAP; out-label: LE21 |
| P | Receives the Resv message sent by PE3. | Reserves bandwidth on the outbound interface connected to PE3, assigns the label LE11, and sends a Resv message to PE1. | in-label: LE11; action: SWAP; out-label: LE31 |
| PE1 | Receives the Resv message sent by the P. | Reserves bandwidth on the outbound interface connected to the P and reports tunnel setup success. | in-label: none; action: PUSH; out-label: LE11 |
P2MP TE Data Forwarding
P2MP TE data forwarding is similar to IP multicast data forwarding. A branch node replicates MPLS packets and performs label-related operations to ensure that only one copy of MPLS packets is sent on the link shared by sub-LSPs, maximizing bandwidth resource utilization.
In a VPLS over P2MP scenario or an NG MVPN over P2MP scenario, each service is transmitted exclusively along a P2MP tunnel.
| Node | Incoming Label | Outgoing Label | Forwarding Behavior |
| --- | --- | --- | --- |
| PE1 | None | L11 | Pushes the outgoing label L11 into an IP multicast packet and forwards the packet to P1. |
| P1 | L11 | L21 | Swaps the incoming label of the MPLS packet for the outgoing label L21 and forwards the packet to P2. |
| P2 (branch node) | L21 | LE22, LE42 | Replicates the MPLS packet, swaps the incoming label for an outgoing label in each copy, and forwards each copy to its next hop through the associated outbound interface. |
| PE2 (bud node) | LE22 | LE32, None | Replicates the MPLS packet, swaps the incoming label for LE32 in the copy forwarded to PE3, and removes the label from the other copy so that it is forwarded as an IP multicast packet to the directly connected client-side device. |
| PE3 | LE32 | None | Removes the label from the packet so that the MPLS packet becomes an IP multicast packet. |
| PE4 | LE42 | None | Removes the label from the packet so that the MPLS packet becomes an IP multicast packet. |
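The following minimal sketch shows the branch-node behavior from the table above: P2 replicates the incoming MPLS packet once per branch and swaps the incoming label for the outgoing label of each branch. The labels are the example values used in the table.

```python
# Illustrative branch-node replication and label swapping.
branch_entries = {
    # incoming label: list of (outgoing label, next hop)
    "L21": [("LE22", "PE2"), ("LE42", "PE4")],
}

def branch_forward(in_label, payload):
    """Return one copy of the packet per branch, each with its own outgoing label."""
    return [{"label": out_label, "next_hop": next_hop, "payload": payload}
            for out_label, next_hop in branch_entries.get(in_label, [])]

for packet in branch_forward("L21", "multicast data"):
    print(packet)
```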
P2MP TE FRR
On the network shown in Figure 1-2229, a P2P TE bypass tunnel is established over the path P1 -> P5 -> P2. It protects traffic on the link between P1 and P2. If the link between P1 and P2 fails, P1 switches the traffic destined for P2 to the bypass tunnel.
You can plan an explicit path for a bypass tunnel and determine whether to plan bandwidth for the bypass tunnel as required.
P2P and P2MP TE tunnels can share a bypass tunnel. FRR protection functions for P2P and P2MP TE tunnels are as follows:
If the planned bandwidth of a bypass tunnel is less than the total bandwidth of a P2P TE tunnel and a P2MP TE tunnel, the bypass tunnel is used by the tunnel that binds it first. If the bandwidth of the bypass tunnel is greater than or equal to the total bandwidth of the P2P TE tunnel and the P2MP TE tunnel, both the P2P TE tunnel and the P2MP TE tunnel can be bound to the same bypass tunnel.
A bypass tunnel with no bandwidth planned can also be bound to both P2P and P2MP TE tunnels.
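A minimal sketch of the bandwidth rule described above for sharing one bypass tunnel between a P2P TE tunnel and a P2MP TE tunnel. The bandwidth values are assumptions made for this illustration.

```python
# Illustrative check of whether both tunnels can bind the same bypass tunnel.
def can_share_bypass(bypass_bw, p2p_bw, p2mp_bw):
    """A bypass tunnel with no planned bandwidth (None) can always be shared;
    otherwise its bandwidth must cover the total bandwidth of both tunnels."""
    if bypass_bw is None:
        return True
    return bypass_bw >= p2p_bw + p2mp_bw

print(can_share_bypass(100, 40, 50))   # True: 100 >= 90, both tunnels can bind it
print(can_share_bypass(80, 40, 50))    # False: the tunnel that binds it first uses it
print(can_share_bypass(None, 40, 50))  # True: no bandwidth planned for the bypass
```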
P2MP TE Soft Preemption
P2MP TE soft preemption enables a sub-LSP with a higher priority to preempt the resources of a sub-LSP with a lower priority. In soft preemption, a soft preemption timer is created for the preempted sub-LSP, and the bandwidth reserved for it on the node where preemption occurs is changed to 0. Traffic is then switched in make-before-break (MBB) mode, and the original sub-LSP is deleted. If the soft preemption timer expires before the switchover completes, the sub-LSP is torn down in hard preemption mode.
As shown in Figure 1-2230, LSP1 is the original P2MP TE sub-LSP with soft preemption enabled, and LSP2 is a new sub-LSP established on PE1 with a higher priority.
Because bandwidth resources between P1 and P2 are insufficient, P1 sends a PathErr message to notify the ingress of the insufficiency. The ingress then triggers MBB and establishes a new sub-LSP over a modified path, that is, LSP3.
After switching traffic from LSP1 to LSP3, PE1 tears down LSP1. P2MP TE soft preemption is then complete.
MPLS TE Functions Shared by P2P TE and P2MP TE
| Supported Feature | Description |
| --- | --- |
| Tunnel re-optimization | Enables the ingress to reestablish a CR-LSP over a better path. P2MP TE tunnel re-optimization is implemented similarly to P2P TE tunnel re-optimization and can be triggered automatically or manually. |
| RSVP authentication | Authenticates RSVP messages based on message digests. RSVP authentication helps prevent malicious attacks initiated by modified or forged RSVP messages and improves network reliability and security. |
| Summary refresh (Srefresh) | Reduces bandwidth consumption. Summary refresh is a globally effective function and therefore takes effect on P2P and P2MP TE tunnels when both exist. |
| RSVP GR | Helps a neighbor complete the GR process. |
Other Functions
P2MP TE tunnels can be used as public network tunnels to transmit NG MVPN and multicast VPLS services.
In both scenarios, a tunnel template must be used to configure attributes for an automatic P2MP TE tunnel. After deployment, the NG MVPN or multicast VPLS network uses these templates to establish P2MP TE tunnels.
Application Scenarios for MPLS TE
P2MP TE Applications for IPTV
Service Overview
IPTV services require the bearer network to:
- Forward multicast traffic normally and smoothly, even during traffic congestion.
- Rapidly detect network faults and switch traffic to a backup link if the primary link fails.
Networking Description
Feature Deployment
- Multicast traffic import
Deploy PIM on the P2MP TE tunnel interfaces of the ingress (PE1). Configure multicast static groups to import multicast traffic to P2MP TE tunnels.
- P2MP TE tunnel establishment
The following tunnel deployment scheme is recommended:
- Path planning: Configuring explicit paths is recommended. Prevent the re-merge and crossover problems during path planning.
- Resource Reservation Protocol (RSVP) authentication: Configure RSVP neighbor-based authentication to improve the protocol security of the backbone network.
- RSVP Srefresh: Configure RSVP Srefresh to improve the resource utilization of the backbone network.
- P2MP TE FRR: Configure FRR to improve the reliability of the backbone network.
- Multicast traffic forwarding
- Configure PIM on the egresses (PE2 and PE3) to generate multicast forwarding entries. Configure the devices to ignore the reverse path forwarding (RPF) check.
- An egress cannot forward a received multicast data message of an any-source multicast (ASM) group if the RPF check result shows that the egress is neither directly connected to the multicast source nor the rendezvous point (RP) of the multicast group. To enable downstream hosts to receive the message in such a case, deploy multicast source proxy, which enables the egress to send a Register message to the RP (for example, SR1) in the PIM domain. The data message can then be forwarded along an RPT.
MPLS TE Configuration
This chapter describes the principles for Multiprotocol Label Switching Traffic Engineering (MPLS TE), Resource Reservation Protocol (RSVP) TE tunnels, RSVP signaling parameter adjustment, RSVP authentication, tunnel parameter adjustment, measures for adjusting TE forwarding, the bandwidth flood threshold, tunnel re-optimization, MPLS TE fast reroute (FRR), MPLS TE Auto FRR, constraints-routed label switched path (CR-LSP) backup, isolated LSP computation, RSVP graceful restart (GR), static bidirectional forwarding detection (BFD) for CR-LSP, dynamic BFD for CR-LSP, MPLS TE distribution, and how to configure MPLS TE, and provides configuration examples.
Overview of MPLS TE
The Multiprotocol Label Switching Traffic Engineering (MPLS TE) technology integrates the MPLS technology with TE. It reserves resources by establishing label switched paths (LSPs) over a specified path in an attempt to prevent network congestion and balance network traffic.
TE
Network congestion is a major cause of backbone network performance deterioration. It either results from insufficient resources or is locally induced by improper resource allocation. In the former case, network expansion can prevent the problem. In the latter case, TE can be used to direct some traffic to idle links so that traffic is distributed more evenly. TE dynamically monitors network traffic and the load on network elements and adjusts the parameters for traffic management, routing, and resource constraints in real time, which prevents network congestion caused by load imbalance.
MPLS TE
MPLS TE establishes constraint-based routed label switched paths (LSPs) and transparently transmits traffic over the LSPs. Based on certain constraints, the LSP path is controllable, and links along the LSP reserve sufficient bandwidth for service traffic. In the case of resource insufficiency, the LSP with a higher priority can preempt the bandwidth of the LSP with a lower priority to meet the requirements of the service with a higher priority. In addition, when an LSP fails or a node on the network is congested, MPLS TE can provide protection through Fast Reroute (FRR) and a backup path. MPLS TE allows network administrators to deploy LSPs to properly allocate network resources and prevent network congestion. As the number of LSPs increases, you can use a dedicated offline tool to analyze traffic.
Configuration Precautions for MPLS TE
Feature Requirements
Feature Requirements |
Series |
Models |
---|---|---|
The outbound interface of a P2MP TE tunnel cannot be a channelized sub-interface. The outbound interface of an mLDP tunnel cannot be a channelized sub-interface. If the outbound interface of a P2MP TE or mLDP tunnel is a channelized sub-interface, the bandwidth is not limited by the bandwidth reserved for the channelized sub-interface. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
The mpls te multi-protect fast-switch enable command is added to enable TE HSB+FRR fast switching upon dual-point failures. 1. After this command is enabled or disabled, packets may be lost for a short period of time. 2. This command applies to L3VPN over RSVP-TE, VLL over RSVP-TE and VPLS over RSVP-TE scenarios. 3. TE FRR must be deployed on the entire network, HSB must be deployed on the ingress node, BFD for TE LSP must be enabled, and the outbound interface on the P node must be configured to go Down after a delay. Otherwise, fast switching cannot be ensured if dual-point failures occur. 4. If both the primary and HSB LSPs fail, the system detects whether the directly connected interface on the primary LSP fails. If the indirectly connected interface fails, the system continues to use the primary LSP to forward traffic. If FRR is configured, the system uses the FRR protection tunnel to forward traffic. If FRR is not configured, the system discards traffic. 5. This command does not apply to LDP over RSVP-TE and BGP over RSVP-TE scenarios. In these scenarios, enabling this command cannot ensure fast TE LSP switchover in case of dual-point failures, but does not affect basic forwarding functions. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
After CAR and re-marking are configured for multi-field classification in a TE-group scenario, packets may be out of order. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
In a TE-group scenario, if TE tunnels and physical interfaces or LDP LSPs work in hybrid load balancing mode, unequal-cost load balancing cannot be performed based on the bandwidth or weight even if the internal tunnels of each priority are configured with the bandwidth or weight. The internal tunnels of each priority still perform equal-cost load balancing. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When a TE group and dynamic load balancing are both configured, dynamic load balancing (load-balance dynamic-adjust enable) does not take effect. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
VPN QoS is not supported when VLL/VPLS traffic recurses to DS-TE tunnels. When VLL/VPLS traffic recurses to DS-TE tunnels and VPN QoS is configured, traffic is balanced among common TE tunnels. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
In a scenario where a VPN is recursed to a DS-TE tunnel, association between TE faults and VPN FRR is not supported. You are advised to configure VPN-based BFD to directly trigger VPN FRR fast switching. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
If penultimate hop popping (PHP) is configured on a device, LSP APS does not take effect. Do not configure PHP and LSP APS at the same time. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
If OAM (TPOAM or MPLS OAM) is deleted before APS is deleted, APS incorrectly considers that OAM detects a link fault, affecting protection switching. APS and TP-OAM or MPLS OAM are configured. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When the standard DS-TE mode is configured on a device, if the mpls te service-class default command is not run, services that are not associated with priorities select TE tunnels that are not configured with priorities. If such TE tunnels do not exist, services that are not associated with priorities select TE tunnels with the lowest priority. As a result, the TE tunnel bandwidth of the service with the lowest priority is preempted. You are advised to run the mpls te service-class default command in standard DS-TE mode. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
PCEP is used only to connect to a Huawei controller. PCEP and NETCONF cannot be used at the same time to change stitching labels. If both PCEP and NETCONF are used to change stitching labels due to misoperations, run the undo mpls te pce delegate command on the forwarder to undelegate the tunnel and then run the mpls te pce delegate command to delegate the tunnel again. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Entropy label feature: RSVP P2P LSPs and LDP P2P LSPs support entropy label negotiation, but other types of LSPs do not support. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When PCEP interconnects with a Huawei controller, the controller may deliver an initiated-lsp name carrying special characters, such as ? ; %. Such parameters cannot be executed on forwarders. Do not use special characters in initiated-lsp names on the controller. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When tunnel splitting is configured, the split tunnel interface cannot be used as the outbound interface of a route, and the mpls te igp shortcut and mpls te reserved-for-binding commands cannot be used together. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When the neighbor supports GR and the local device functions as the GR Helper, you need to run the mpls rsvp-te hello command on the RSVP-TE-enabled interface after running the mpls rsvp-te hello support-peer-gr command in the MPLS view. If RSVP-TE Hello is not configured on an RSVP-TE interface, the RSVP-TE GR support-peer-gr configuration does not take effect. When an active/standby switchover is performed on an RSVP-TE node, the RSVP adjacency between the node and its neighbor is torn down due to signaling protocol timeout. As a result, the CR-LSP is deleted, and services carried on the CR-LSP are interrupted. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
If a TE tunnel traverses multiple IGP areas, an explicit path containing the IP addresses of inter-area nodes must be configured for the tunnel, and CSPF must be enabled on the ingress and inter-area nodes. It is recommended that TE tunnels be planned in the same IGP domain. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
In an LDP over TE scenario, after the mpls te igp advertise or mpls te igp advertise shortcut command is run in the TE tunnel interface view to enable route advertisement, a remote LDP session must be configured for the destination of the tunnel. If no remote LDP session is configured, LDP LSPs cannot be established, causing service interruptions. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When multiple MPLS TE-enabled IGP processes are deployed on a device, CSPF cannot calculate the optimal path among these IGP processes. As a result, the optimal path calculated in a single process may be different from the expected path, and services carried on the TE tunnel may be congested or interrupted. Primary solution: Configure an explicit path in the system view and use the explicit path in the TE tunnel. Alternative solution: Run the mpls te cspf preferred-igp command in the MPLS view to enable CSPF to preferentially select a specified IGP process during path calculation. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Bypass-attribute and detour cannot be configured together. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Reverse LSPs and detour LSPs are mutually exclusive. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Labels and IP addresses cannot be both configured on an explicit path. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When an affinity attribute is configured for a tunnel, the same affinity attribute cannot be configured as both include and exclude attributes. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
hot-standby and ordinary cannot be configured at the same time. The configuration of the latter overwrites that of the former. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When a GMPLS tunnel is bound to a GMPLS UNI and switch-type is EVPL, process-pst cannot be configured. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Fast-reroute and offload cannot be configured at the same time. Fast-reroute and bypass-tunnel cannot be configured at the same time. Fast-reroute and protected-interface cannot be configured at the same time. Fast-reroute and FF styles cannot be configured at the same time. Fast-reroute and reverse LSPs cannot be configured at the same time. Fast-reroute and reroute disable cannot be configured at the same time. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When BFD detects that the process-pst parameter is configured on a GMPLS UNI and the GMPLS tunnel is bound to the GMPLS UNI, switch-type cannot be set to EVPL. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Hot-standby and offload cannot be configured at the same time. Hot-standby and bypass tunnels cannot be configured at the same time. hot-standby and protected-interface cannot be configured at the same time. Hot-standby and FF styles cannot be configured at the same time. hot-standby and reroute disable cannot be configured at the same time. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Best-effort and bypass tunnels cannot be configured at the same time. Best-effort and protected-interface cannot be configured at the same time. Best-effort and FF styles cannot be configured at the same time. Best-effort and offload cannot be configured at the same time. Best-effort and ordinary cannot be configured at the same time. Best-effort and reroute disable cannot be configured at the same time. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
FF styles and FRR are mutually exclusive. FF styles and backup paths are mutually exclusive. FF styles and automatic bandwidth adjustment are mutually exclusive. FF styles and soft preemption are mutually exclusive. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Detour and offload cannot be configured at the same time. Detour and bypass-tunnel cannot be configured at the same time. Detour and protected-interface cannot be configured at the same time. Detour and FF styles cannot be configured at the same time. Detour and bypass-attribute cannot be configured at the same time. Detour and reverse LSPs cannot be configured at the same time. Detour and reroute disable cannot be configured at the same time. Detour and PCE delegation cannot be configured at the same time. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
bypass-tunnel and fast-reroute cannot be configured at the same time. bypass-tunnel and detour cannot be configured at the same time. bypass-tunnel and backup cannot be configured at the same time. bypass-tunnel and reroute disable cannot be configured at the same time. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Ordinary paths and off-load cannot be configured at the same time. Ordinary paths and best-effort paths cannot be configured at the same time. Ordinary paths and FF styles cannot be configured at the same time. Ordinary paths and reroute disable cannot be configured at the same time. Ordinary paths and the reverse LSPs cannot be configured at the same time. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When the reverse LSP feature is used, the global label distribution mode must be set to non-null. Otherwise, the reverse LSP feature does not take effect. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Soft-preemption and FF styles cannot be configured at the same time. Soft-preemption and reroute disable cannot be configured at the same time. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
A working tunnel cannot be configured as a protection tunnel, and a tunnel cannot be configured as its own protection tunnel. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Automatic bandwidth adjustment takes effect only after the mpls te timer auto-bandwidth command is run in the MPLS view. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Bit-error-triggered protection switching and protection group features are mutually exclusive. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
PCE delegation and detour cannot be configured at the same time. PCE delegation and reroute disable cannot be configured together. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Bidirectional co-routed LSPs and dynamic reverse LSPs are mutually exclusive. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Reroute disable and PCE delegation cannot be configured at the same time. Reroute disable and re-optimization cannot be configured at the same time. Reroute disable and soft preemption cannot be configured at the same time. Reroute disable and detour cannot be configured at the same time. Reroute disable and ordinary cannot be configured at the same time. Reroute disable and best-effort paths cannot be configured at the same time. Reroute disable and FRR cannot be configured at the same time. Reroute disable and backup cannot be configured at the same time. Reroute disable and bypass-tunnel cannot be configured at the same time. Reroute disable and automatic bandwidth adjustment cannot be configured at the same time. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
The include-all and include-any attributes of an affinity cannot coexist. The latest configuration overrides the previous one. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Reverse LSPs and backup paths are mutually exclusive. Reverse LSPs and FRR are mutually exclusive. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
The bandwidth configured in the bypass attributes cannot be higher than the bandwidth of the primary tunnel, and the priority configured in the bypass attributes cannot be higher than the priority of the primary tunnel. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
If bidirectional is enabled for a tunnel, the tunnel cannot be bound to a static bidirectional co-routed CR-LSP on the egress. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
The FF style and PCE delegation are mutually exclusive. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Protected-interface and fast-reroute cannot be configured at the same time. Protected-interface and detour cannot be configured at the same time. Protected-interface and backup cannot be configured at the same time. Protected-interface and ordinary/best-effort cannot be configured at the same time. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
If both static BFD for TE LSP and dynamic BFD for TE LSP are configured, static BFD for TE LSP takes effect. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
BFD and MPLS OAM cannot be configured on the same tunnel. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
A static or static CR tunnel cannot be bound to a reverse RSVP-TE LSP. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
A dynamic tunnel cannot be bound to a reverse LSP of a static protocol. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
The configuration of disabling automatic bandwidth configuration for global physical interfaces is visible only when the router works in transport mode. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
P2MP TE tunnels cannot be established across IGP domains. If a leaf node on a P2MP TE tunnel is not in the same IGP domain as the ingress, the S2L cannot be created. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
If an LSP is established across IGP areas, the LSP cannot be re-optimized. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
The SR-MPLS TE tunnel configured using a label explicit path does not respond to messages delivered by other label stacks or perform the make before break operation. The path is not re-optimized, which does not affect the original LSP and traffic. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Detour LSPs cannot be created across domains. Do not use detour FRR in inter-domain scenarios. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
SR-MPLS TE tunnels must be used together with detection features. SR-MPLS TE tunnels do not have signaling protocols. Therefore, BFD for TE LSP must be deployed to detect the connectivity of the primary and backup LSPs so that HSB protection can be implemented between the primary and backup LSPs. If no detection protocol is available, SR-MPLS TE LSPs cannot detect link faults and traffic cannot be switched, causing traffic loss. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
The bandwidth of the primary tunnel must be less than or equal to that of the backup tunnel. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
The reverse LSP of the working tunnel cannot be the same as that of the protection tunnel. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
The name of a reverse static bidirectional co-routed LSP must be the same as that of the forward static bidirectional co-routed LSP. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Only transit nodes on static bidirectional co-routed LSPs support LSP loopback. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Only the IP address of a loopback interface can be used as the MPLS LSR ID. If the LSR ID is not a loopback address, an IGP floods only the LSR ID but not the loopback address. In this case, path computation may succeed. However, after the TE tunnel is established, the application or protocol packets that depend on this address cannot be forwarded. For example, in a VPN recursion scenario, the next hop of a VPN route is the BGP address (usually the loopback interface address) of the peer end, but this address is not the destination address of the TE tunnel. As a result, VPN services fail to be recursed to the tunnel. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Auto bypass tunnels cannot be established across domains because FRR cannot be bound in inter-domain scenarios. Do not use auto FRR in inter-domain scenarios. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When graceful shutdown is performed on the device through which a TE tunnel passes, if TE tunnel path computation is not affected by the bandwidth and TE metric of the device, you are not advised to use the graceful shutdown function to upgrade or restart a board or device. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Currently, only IS-IS responds to graceful shutdown to flood TE attributes on an interface. Therefore, when OSPF routes are used on an interface, you are advised not to use graceful shutdown to upgrade or restart a board or device. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Automatic bandwidth adjustment can be performed only after CSPF is enabled globally. If CSPF is not enabled, path pre-calculation cannot be performed and automatic bandwidth adjustment does not take effect. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
The destination address of a tunnel must be the MPLS LSR ID of the destination router. If the destination address of a tunnel is not an LSR ID, path computation may succeed. However, after a TE tunnel is established, applications or protocol packets that depend on the destination address cannot be forwarded. For example, in a VPN recursion scenario, the next hop of a VPN route is the BGP address (usually the loopback interface address) of the peer end, but this address is not the destination address of the TE tunnel. As a result, VPN services fail to be recursed to the tunnel. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
If the TPOAM lock function is enabled on an interface, the interface cannot be configured as the outbound or inbound interface of a static tunnel. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Re-optimization can be performed only after CSPF is enabled globally. If CSPF is not enabled, path pre-calculation cannot be performed and re-optimization does not take effect. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When the version of the ingress does not support the rsvp graceful-shutdown command, interworking is not supported. This is because after graceful shutdown is performed on a transit node, a PathErr message with a rerouting request is sent. If the ingress of the RSVP-TE tunnel does not support the rsvp graceful-shutdown command, the ingress cannot process the PathErr message and deletes the RSVP-TE LSP. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
An FRR switchover may be incorrectly triggered by BFD for RSVP detection. Return BFD packets are transmitted through routes, so the return path may differ from the GRE tunnel path. If the link over which BFD return packets are transmitted fails, traffic may be incorrectly switched to the TE protection path. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When configuring bandwidth for an RSVP-TE tunnel, ensure that the DS-TE mode and te-class-mapping configurations on each node along the tunnel path are the same. If the DS-TE modes are inconsistent or the te-class-mapping configurations in DS-TE mode are inconsistent, the tunnel with bandwidth may fail to be established. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
When static BFD is configured to monitor SR-MPLS TE services, the service BFD status is Up only when both forward and reverse service forwarding are normal. For static BFD for tunnel, services can be recursed to tunnels only when the BFD status is Up. For static BFD for LSP, services can be recursed to the tunnel if BFD is not configured for the LSP, or if BFD is configured for the LSP and the BFD status is Up. For example, in the topology A-B, there are two links between A and B: link1 and link2. The SR-MPLS TE tunnel from A to B traverses link1, and the SR-MPLS TE tunnel from B to A traverses link2. If link1 goes Down, the SR-MPLS TE tunnel from A to B fails to forward traffic, the tunnel from B to A also fails to forward traffic due to the reverse tunnel failure, and the BFD status goes Down. As a result, services cannot be recursed to the tunnel from B to A. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
After the show devices device R22 live-status mpls:mpls mplsTe rsvpTeTunnels rsvpTeTunnel tunnelPaths tunnelPath or show devices device R22 live-status mpls:mpls mplsTe rsvpTeTunnels rsvpTeTunnel tunnelPaths command is run on the NSO, information about the tunnel path cannot be displayed. Workarounds: 1. Use the key (tunnel name) of rsvpTeTunnel for the query (for example, show devices device R22 live-status mpls:mpls mplsTe rsvpTeTunnels rsvpTeTunnel Tunnel1 tunnelPaths tunnelPath). 2. Configure any affinity attribute. 3. View tunnelPath on the NSO web page. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
The RSVP-TE entropy label is encapsulated into the MPLS label. The entropy label can be used to guide traffic forwarding only after the MPLS label is popped out. Forwarding along an RSVP-TE over GRE tunnel is performed based on IP packets. The RSVP entropy label does not take effect, and load balancing is performed based on the GRE tunnel specification. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
By default, the TE CR-LSP status is not affected when the BFD session status is Admin Down or the neighbor status is Admin Down. When the system is restarted, the BFD session needs to be renegotiated. If the BFD session does not go Up, the TE CR-LSP goes Down. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
By default, the P2MP TE status is not affected when the BFD session status is Admin Down or the neighbor status is Admin Down. When the system is restarted, the BFD session needs to be renegotiated. If the BFD session does not go Up, P2MP TE goes Down. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
To deploy both RSVP-TE and GRE tunnels, certain requirements must be met. In a cyclic scenario, a GRE tunnel and an RSVP-TE tunnel have the same source and sink. An RSVP-TE tunnel is established through the GRE tunnel. After the route is advertised, the outbound interface of the GRE tunnel changes to the RSVP-TE tunnel interface. As a result, traffic fails to be forwarded, and RSVP-TE goes Down through protocol or BFD detection. If a loop is formed, an infinite loop occurs during forwarding. You need to prevent this problem during configuration. If an RSVP-TE tunnel is deployed over a GRE tunnel which is deployed over another RSVP-TE tunnel and this RSVP-TE tunnel is then deployed over another GRE tunnel, the number of required encapsulations continuously increases, reducing forwarding efficiency. Therefore, you need to avoid this scenario during configuration. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
In the topology A-B-C, there are two links between A and B: link1 and link2. There is a single link between B and C. A functions as the ingress of the P2MP tunnel, and B and C function as leaf nodes of the P2MP tunnel. An explicit path named link1 is configured on B, and no explicit path is configured on C. If RSVP-TE is not enabled on link1, the P2MP path cannot be established. Because link1 is specified for leaf B, link1 cannot be excluded during P2MP path computation. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Hot-standby, ordinary, or best-effort is not configured for a tunnel, but secondary or best-effort attributes (such as hop limit, affinity, or explicit path) are configured for the tunnel. In this case, when incremental or full synchronization is performed, the secondary or best-effort attributes cannot be synchronized. The secondary or best-effort attribute takes effect only when the hot-standby, ordinary, or best-effort attribute is configured. To resolve the issue, you only need to add the corresponding configuration. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
By default, when the BFD session status is Admin Down or the neighbor status is Admin Down, the TE tunnel status is not affected. When the system is restarted, the BFD session needs to be renegotiated. If the BFD session does not go Up, the TE tunnel goes Down. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
In a multi-VS scenario, when SR-MPLS TE tunnels are configured, stitching labels cannot be delivered by the controller, but can be allocated by forwarders. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
By default, when the BFD session status is Admin Down) or the neighbor status is Admin Down, the RSVP status is not affected. When the system is restarted, the BFD session needs to be renegotiated. If the BFD session does not go Up, RSVP is set to Down. |
NetEngine 8000 F1A |
NetEngine 8000 F1A |
Configuring Static CR-LSP
This section describes how to configure a static CR-LSP. The configuration of a static CR-LSP is simple: labels are allocated manually rather than by a signaling protocol, so no control packets are exchanged and only a few resources are consumed.
Usage Scenario
The configuration of a static CR-LSP is a simple process. Labels are manually allocated, and no signaling protocol or exchange of control packets is needed. The setup of a static CR-LSP consumes only a few resources. In addition, neither IGP TE nor CSPF needs to be configured for the static CR-LSP.
The static CR-LSP cannot dynamically adapt to a changing network. Therefore, its application is very limited.
- Ingress: An LSP forwarding entry is configured, and an LSP configured on the ingress is bound to the TE tunnel interface.
- Transit node: An LSP forwarding entry is configured.
- Egress: An LSP forwarding entry is configured.
Enabling MPLS TE
Before you set up a static CR-LSP, enable MPLS TE.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls te
MPLS TE is enabled on the node globally.
Before you enable MPLS TE on each interface, enable MPLS TE globally in the MPLS view.
- Run quit
Return to the system view.
- Run interface interface-type interface-number
The interface view is displayed.
- Run mpls
MPLS is enabled on the interface.
- Run mpls te
MPLS TE is enabled on the interface.
If MPLS TE is disabled in the interface view, all CR-LSPs on the current interface go Down.
If MPLS TE is disabled in the MPLS view, MPLS TE is disabled on each interface, and all CR-LSPs are torn down.
- Run commit
The configurations are committed.
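For reference, a minimal command sequence for this procedure might look as follows; the interface name GigabitEthernet0/1/0 is an illustrative value, not taken from this guide.
system-view
mpls
mpls te
quit
interface GigabitEthernet0/1/0
mpls
mpls te
commit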
(Optional) Configuring Link Bandwidth
By configuring the link bandwidth, you can constrain the bandwidth of a CR-LSP.
Procedure
- Run system-view
The system view is displayed.
- Run interface interface-type interface-number
The MPLS-TE-enabled interface view is displayed.
- Run mpls te bandwidth max-reservable-bandwidth max-bw-value
The maximum reservable bandwidth of the link is set.
- Run mpls te bandwidth bc0 bc0-bw-value
The BC bandwidth of the link is set.
- The maximum reservable bandwidth of a link cannot be higher than the actual bandwidth of the link. A maximum of 80% of the actual bandwidth of the link is recommended for the maximum reservable bandwidth of the link.
- The BC0 bandwidth cannot be higher than the maximum reservable bandwidth of the link.
- Run commit
The configuration is committed.
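A minimal example of these steps is shown below; the interface name and bandwidth values are illustrative and must be adapted to the actual link.
system-view
interface GigabitEthernet0/1/0
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
commit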
Configuring the MPLS TE Tunnel Interface
This section describes how to configure the MPLS TE tunnel interface. You must create a tunnel interface before setting up an MPLS TE tunnel.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel interface-number
The tunnel interface is created, and the tunnel interface view is displayed.
- To configure the IP address of the tunnel interface, select one of the following commands.
To specify the IP address of the tunnel interface, run ip address ip-address { mask | mask-length } [ sub ]
The secondary IP address of the tunnel interface can be configured only after the primary IP address is configured.
To borrow an IP address from another interface, run ip address unnumbered interface interface-type interface-number
An MPLS TE tunnel can be established even if the tunnel interface has no IP address, but the tunnel interface must obtain an IP address before it can forward traffic. Because an MPLS TE tunnel is unidirectional, its peer address is irrelevant to traffic forwarding. Therefore, the tunnel interface does not need its own IP address and usually uses the ingress LSR ID as its IP address.
- Run tunnel-protocol mpls te
MPLS TE is configured as a tunneling protocol.
- Run destination ip-address
The destination address of the tunnel is configured, which is usually the LSR ID of the egress node.
- Run mpls te tunnel-id tunnel-id
The tunnel ID is set.
- Run mpls te signal-protocol cr-static
Static CR-LSP is configured as a signaling protocol of the tunnel.
- Run commit
The configurations are committed.
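A minimal example of a tunnel interface configuration is shown below, assuming illustrative values (tunnel interface Tunnel1, loopback LoopBack0, egress LSR ID 3.3.3.3, and tunnel ID 100).
system-view
interface Tunnel1
ip address unnumbered interface LoopBack0
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 100
mpls te signal-protocol cr-static
commit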
(Optional) Configuring Global Dynamic Bandwidth Pre-Verification
Global dynamic bandwidth pre-verification enables a device to check dynamic bandwidth usage before a static CR-LSP or a static bidirectional co-routed LSP is established.
Context
When dynamic services, or both static and dynamic services, are configured, a device checks only static bandwidth usage when a static CR-LSP or a static bidirectional co-routed LSP is configured. The configuration succeeds even if the interface bandwidth is insufficient, and the interface status is Down. To prevent such an issue, global dynamic bandwidth pre-verification can be configured. With this function enabled, the device displays a message indicating that the configuration fails in the preceding situation.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
MPLS is enabled globally, and the MPLS view is displayed.
- Run mpls te
MPLS TE is enabled globally.
Global dynamic bandwidth pre-verification can only be configured after MPLS TE is enabled globally.
- Run mpls te static-cr-lsp bandwidth-check deduct-rsvp-bandwidth
Global dynamic bandwidth pre-verification is enabled.
- Run commit
The configuration is committed.
Configuring the Ingress of the Static CR-LSP
This section describes how to configure the ingress of a static CR-LSP. Before you establish a static CR-LSP, specify the ingress of the CR-LSP.
Procedure
- Run system-view
The system view is displayed.
- Run static-cr-lsp ingress { tunnel-interface tunnel interface-number | tunnel-name } destination destination-address { nexthop next-hop-address | outgoing-interface interface-type interface-number } * out-label out-label [ bandwidth [ ct0 ] bandwidth ]
The LSR is configured as the ingress of the specified static CR-LSP.
To modify the destination destination-address, nexthop next-hop-address, outgoing-interface interface-type interface-number, or out-label out-label, run the static-cr-lsp ingress command to set a new value. There is no need to run the undo static-cr-lsp ingress command before changing a configured value.
tunnel interface-number specifies the MPLS TE tunnel interface that uses this static CR-LSP. By default, the Bandwidth Constraints value is ct0, and the value of bandwidth is 0. The bandwidth used by the tunnel cannot be higher than the maximum reservable bandwidth of the link.
The next hop or outgoing interface is determined by the route from the ingress to the egress. For the difference between the next hop and outbound interface, see "Static Route Configuration" in HUAWEI NetEngine 8000 F1A series Router Configuration Guide - IP Routing.
If an Ethernet interface is used as an outbound interface, the nexthop next-hop-address parameter must be configured.
- Run commit
The configurations are committed.
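A minimal example on the ingress is shown below, continuing the illustrative values used above (tunnel interface Tunnel1 and egress LSR ID 3.3.3.3); the next-hop address 10.1.1.2 and outgoing label 20 are also illustrative.
system-view
static-cr-lsp ingress tunnel-interface Tunnel1 destination 3.3.3.3 nexthop 10.1.1.2 out-label 20
commit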
(Optional) Configuring the Transit Node of the Static CR-LSP
This section describes how to configure the transit nodes of a static CR-LSP. Before you set up a static CR-LSP, specify the transit nodes of the CR-LSP. This procedure is optional because the CR-LSP may have no transit node.
Context
If the static CR-LSP has only the ingress and egress, configuring a transit node is not needed. If the static CR-LSP has one or more transit nodes, perform the following steps on each transit node:
Procedure
- Run system-view
The system view is displayed.
- Run static-cr-lsp transit lsp-name incoming-interface interface-type interface-number in-label in-label { nexthop next-hop-address | outgoing-interface interface-type interface-number } * out-label out-label [ ingress-lsrid ingress-lsrid egress-lsrid egress-lsrid tunnel-id tunnel-id ] [ bandwidth [ ct0 ] bandwidth ]
The transit node of the static CR-LSP is configured.
To modify any parameter except lsp-name, run the static-cr-lsp transit command with the new value. You do not need to run the undo static-cr-lsp transit command first.
On the transit node and the egress node, the value of lsp-name must be different from the names of existing LSPs on those nodes. There are no other restrictions on the value.
If an Ethernet interface is used as the outbound interface of an LSP, the nexthop next-hop-address parameter must be configured to ensure proper traffic forwarding along the LSP.
- Run commit
The configuration is committed.
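A minimal example on a transit node is shown below, continuing the illustrative label values (the incoming label 20 matches the outgoing label configured on the ingress); the LSP name lsp1, the interface, the next hop, and the outgoing label 30 are illustrative.
system-view
static-cr-lsp transit lsp1 incoming-interface GigabitEthernet0/1/0 in-label 20 nexthop 10.2.1.2 out-label 30
commit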
Configuring the Egress of the Static CR-LSP
This section describes how to configure the egress of a static CR-LSP. Before you set up a static CR-LSP, specify the egress of the CR-LSP.
Procedure
- Run system-view
The system view is displayed.
- Run static-cr-lsp egress lsp-name incoming-interface interface-type interface-number in-label in-label [ lsrid ingress-lsr-id tunnel-id tunnel-id ]
The LSR is configured as the egress of the specified static CR-LSP.
To modify the incoming-interface interface-type interface-number or in-label in-label-value, run the static-cr-lsp egress command to set a new value. There is no need to run the undo static-cr-lsp egress command before changing a configured value.
- Run commit
The configurations are committed.
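A minimal example on the egress is shown below, continuing the illustrative values (the incoming label 30 matches the outgoing label configured on the transit node; the interface name is illustrative).
system-view
static-cr-lsp egress lsp1 incoming-interface GigabitEthernet0/1/1 in-label 30
commit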
(Optional) Configuring a Device to Check the Source Interface of a Static CR-LSP
A device uses the static CR-LSP's source interface check function to check whether the inbound interface of labeled packets is the same as that of a configured static CR-LSP. If the inbound interfaces match, the device forwards the packets. If the inbound interfaces do not match, the device discards the packets.
Context
A static CR-LSP is established using manually configured forwarding and resource information. Signaling protocols and path calculation are not used during the setup of CR-LSPs. Setting up a static CR-LSP consumes few resources because the two ends of the CR-LSP do not need to exchange MPLS control packets. However, a static CR-LSP cannot be adjusted dynamically when the network topology changes. A static CR-LSP configuration error may cause protocol packets of different NEs and statuses to interfere with one another, which adversely affects services. To address this problem, a device can be enabled to check the source interfaces of static CR-LSPs. With this function configured, the device forwards packets only when both the labels and the inbound interfaces are correct.
Verifying the Static CR-LSP Configuration
After the configuration of a static CR-LSP, you can view the static CR-LSP status.
Procedure
- Run the display mpls static-cr-lsp [ lsp-name ] [ verbose ] command to check information about the static CR-LSP.
- Run the display mpls te tunnel [ destination ip-address ] [ lsp-id ingress-lsr-id session-id local-lsp-id ] [ lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-name ] [ { incoming-interface | interface | outgoing-interface } interface-type interface-number ] [ verbose ] command to check information about the tunnel.
- Run the display mpls te tunnel statistics or display mpls lsp statistics command to check the tunnel statistics.
- Run the display mpls te tunnel-interface command to check information about the tunnel interface on the ingress.
Configuring a Static Bidirectional Co-routed LSP
A static bidirectional co-routed label switched path (LSP) is composed of two static constraint-based routed (CR) LSPs in opposite directions. Multiprotocol Label Switching (MPLS) Traffic Engineering (TE) supports MPLS forwarding in both directions along such an LSP.
Usage Scenario
A static CR-LSP is easy to configure because its labels are manually assigned, and no signaling protocol is used to exchange control packets. The setup of a static CR-LSP consumes only a few resources, and you do not need to configure IGP TE or CSPF for the static CR-LSP. However, a static CR-LSP cannot dynamically adapt to network changes, so its application is limited.
The value of the outgoing label on each node is the value of the incoming label on its next hop.
The destination address of a static bidirectional co-routed LSP is the destination address specified on the tunnel interface.
Pre-configuration Tasks
Before configuring a static bidirectional co-routed LSP, complete the following tasks:
Configure unicast static routes or an IGP to implement connectivity between LSRs.
Configure an LSR ID for each LSR.
Enable MPLS globally and on interfaces on all LSRs.
Enabling MPLS TE
Before setting up a static bidirectional co-routed LSP, you must enable MPLS TE.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls te
MPLS TE is enabled on the node globally.
To enable MPLS TE on each interface, you must first enable MPLS TE globally in the MPLS view.
- Run quit
Return to the system view.
- Run interface interface-type interface-number
The interface view is displayed.
- Run mpls
MPLS is enabled on the interface.
- Run mpls te
MPLS TE is enabled on the interface.
If MPLS TE is disabled in the interface view, all CR-LSPs on the current interface go Down.
If MPLS TE is disabled in the MPLS view, MPLS TE is disabled on each interface, and all CR-LSPs are deleted.
- Run commit
The configuration is committed.
(Optional) Configuring Link Bandwidth
This section describes how to configure link bandwidth so that nodes can reserve the configured link bandwidth for a CR-LSP to be established.
Context
Plan bandwidths on links before you perform this procedure. The reserved bandwidth must be higher than or equal to the bandwidth required by MPLS TE traffic. Perform the following steps on each node along the CR-LSP to be established:
Procedure
- Run system-view
The system view is displayed.
- Run interface interface-type interface-number
The MPLS-TE-enabled interface view is displayed.
- Run mpls te bandwidth max-reservable-bandwidth max-bw-value
The maximum available link bandwidth is set.
- Run mpls te bandwidth bc0 bc0-bw-value
The BC0 link bandwidth is set.
- Run commit
The configuration is committed.
Configuring a Tunnel Interface on the Ingress
A tunnel interface must be created before an MPLS TE tunnel is established on an ingress.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel interface-number
A tunnel interface is created, and the tunnel interface view is displayed.
- To configure an IP address for the tunnel interface, run either of the following commands:
To assign an IP address to the tunnel interface, run ip address ip-address { mask | mask-length } [ sub ]
The primary IP address must be configured prior to the secondary IP address for the tunnel interface.
- To configure the tunnel interface to borrow the IP address of another interface, run ip address unnumbered interface interface-type interface-number
Although an IP address on a tunnel interface enables an MPLS TE tunnel to forward traffic, the MPLS TE tunnel does not need to be assigned a separate IP address because it is unidirectional. Therefore, a tunnel interface usually borrows a loopback address, which is used as the LSR ID of the ingress.
- Run tunnel-protocol mpls te
MPLS TE is configured as a tunnel protocol.
- Run destination ip-address
The destination address is configured for the tunnel. It is usually the LSR ID of the egress.
Various types of tunnels have different requirements for destination addresses. If a tunnel protocol is changed to MPLS TE, the destination address configured using the destination command is automatically deleted and needs to be reconfigured.
- Run mpls te tunnel-id tunnel-id
The tunnel ID is configured.
- Run mpls te signal-protocol cr-static
Static CR-LSP signaling is configured.
- Run mpls te bidirectional
The bidirectional LSP attribute is configured.
- Run commit
The configuration is committed.
(Optional) Configuring Global Dynamic Bandwidth Pre-verification
Global dynamic bandwidth pre-verification enables a device to check dynamic bandwidth usage before a static CR-LSP or a static bidirectional co-routed LSP is established.
Context
When dynamic services, or both static and dynamic services, are configured, a device checks only static bandwidth usage when a static CR-LSP or a static bidirectional co-routed LSP is configured. The configuration succeeds even if the interface bandwidth is insufficient, and the interface status is Down. To prevent such an issue, global dynamic bandwidth pre-verification can be configured. With this function enabled, the device displays a message indicating that the configuration fails in the preceding situation.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
MPLS is enabled globally, and the MPLS view is displayed.
- Run mpls te
MPLS TE is enabled globally.
Global dynamic bandwidth pre-verification can only be configured after MPLS TE is enabled globally.
- Run mpls te static-cr-lsp bandwidth-check deduct-rsvp-bandwidth
Global dynamic bandwidth pre-verification is enabled.
- Run commit
The configuration is committed.
Configuring the Ingress of a Static Bidirectional Co-routed LSP
The ingress of a static bidirectional co-routed LSP needs to be manually specified.
Procedure
- Run system-view
The system view is displayed.
- Run bidirectional static-cr-lsp ingress tunnel-name
A static bidirectional CR-LSP is created, and its ingress view is displayed.
- Run forward { nexthop next-hop-address | outgoing-interface interface-type interface-number } * out-label out-label-value [ bandwidth ct0 bandwidth | pir pir ] *
A forward CR-LSP is configured on the ingress. The bandwidth parameter specifies the reserved bandwidth for the forward CR-LSP. The bandwidth value cannot be higher than the maximum reservable link bandwidth. If the specified bandwidth is higher than the maximum reservable link bandwidth, the CR-LSP cannot go up.
- Run backward in-label in-label-value [ lsrid ingress-lsr-id tunnel-id ingress-tunnel-id ]
A reverse CR-LSP is specified on the ingress.
- (Optional) Run description text
A description is configured for the static bidirectional co-routed CR-LSP.
- Run commit
The configuration is committed.
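A minimal example on the ingress is shown below, assuming illustrative values (tunnel name Tunnel1, next hop 10.1.1.2, forward outgoing label 20, and reverse incoming label 21).
system-view
bidirectional static-cr-lsp ingress Tunnel1
forward nexthop 10.1.1.2 out-label 20
backward in-label 21
commit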
(Optional) Configuring a Transit Node of a Static Bidirectional Co-routed LSP
The transit node of a static bidirectional co-routed LSP needs to be manually specified. This configuration is optional because a static bidirectional co-routed LSP may have no transit node.
Context
Skip this procedure if a static bidirectional co-routed LSP has only an ingress and an egress. If a static bidirectional co-routed LSP has a transit node, perform the following steps on this transit node:
Procedure
- Run system-view
The system view is displayed.
- Run bidirectional static-cr-lsp transit lsp-name
A static bidirectional CR-LSP is created, and its transit view is displayed.
The value of lsp-name cannot be the same as an existing LSP name on the device.
- Run forward in-label in-label-value { nexthop next-hop-address | outgoing-interface interface-type interface-number } * out-label out-label-value [ ingress-lsrid ingress-lsrid egress-lsrid egress-lsrid tunnel-id tunnel-id ] [ bandwidth ct0 bandwidth | pir pir ] *
A forward CR-LSP is configured on the transit node.
- Run backward in-label in-label-value { nexthop next-hop-address | outgoing-interface interface-type interface-number } * out-label out-label-value [ bandwidth ct0 bandwidth | pir pir ] *
A reverse CR-LSP is configured on the transit node.
- (Optional) Run description text
A description is configured for the static bidirectional co-routed CR-LSP.
- Run commit
The configuration is committed.
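A minimal example on a transit node is shown below, continuing the illustrative labels above: the forward incoming label 20 matches the ingress forward outgoing label, and the backward outgoing label 21 matches the ingress backward incoming label. The LSP name lsp1, interfaces, next hops, and labels 30 and 31 are illustrative.
system-view
bidirectional static-cr-lsp transit lsp1
forward in-label 20 nexthop 10.2.1.2 out-label 30
backward in-label 31 nexthop 10.1.1.1 out-label 21
commit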
Configuring the Egress of a Static Bidirectional Co-routed CR-LSP
The egress of a static bidirectional co-routed CR-LSP needs to be manually specified.
Procedure
- Run system-view
The system view is displayed.
- Run bidirectional static-cr-lsp egress lsp-name
A static bidirectional CR-LSP is created, and its egress view is displayed.
- Run forward in-label in-label-value [ lsrid ingress-lsr-id tunnel-id ingress-tunnel-id ]
A forward CR-LSP is configured on the egress.
If lsrid ingress-lsr-id tunnel-id ingress-tunnel-id is specified in this command, the system checks whether the tunnel destination IP address on the egress and the specified value of ingress-lsr-id are consistent. If the specified value of ingress-lsr-id is different from the tunnel destination IP address on the egress, the tunnel cannot go up. As a result, the forward and reverse CR-LSPs configured on the egress cannot go up.
- Run backward { nexthop next-hop-address | outgoing-interface interface-type interface-number } * out-label out-label-value [ bandwidth ct0 bandwidth | pir pir ] *
A reverse CR-LSP is configured on the egress.
The bandwidth parameter specifies the reserved bandwidth for a reverse CR-LSP. The bandwidth value cannot be higher than the maximum reservable link bandwidth. If the specified bandwidth is higher than the maximum reservable link bandwidth, the CR-LSP cannot go up.
- (Optional) Run description text
A description is configured for the static bidirectional co-routed CR-LSP.
- Run commit
The configuration is committed.
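A minimal example on the egress is shown below, continuing the illustrative labels: the forward incoming label 30 matches the transit node's forward outgoing label, and the backward outgoing label 31 matches the transit node's backward incoming label. The next-hop address is illustrative.
system-view
bidirectional static-cr-lsp egress lsp1
forward in-label 30
backward nexthop 10.2.1.1 out-label 31
commit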
Configuring the Tunnel Interface on the Egress
The reverse tunnel attribute is configured, and the tunnel interface is bound to a static bidirectional co-routed LSP on the egress.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel interface-number
A tunnel interface is created, and the tunnel interface view is displayed.
- Run tunnel-protocol mpls te
MPLS TE is configured as a tunnel protocol to create an MPLS TE tunnel.
- Run destination ip-address
The destination address is configured for the tunnel. It is usually the LSR ID of the ingress.
Various types of tunnels have different requirements for destination addresses. If a tunnel protocol is changed to MPLS TE, the destination address configured using the destination command is automatically deleted and needs to be reconfigured.
- Run mpls te tunnel-id tunnel-id
The tunnel ID is configured.
- Run mpls te signal-protocol cr-static
A static CR-LSP is configured.
- Run mpls te passive-tunnel
The reverse tunnel attribute is configured.
- Run mpls te binding bidirectional static-cr-lsp egress tunnel-name
The tunnel interface is bound to the specified static bidirectional co-routed LSP.
- Run commit
The configuration is committed.
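A minimal example on the egress is shown below, assuming illustrative values (tunnel interface Tunnel2, ingress LSR ID 1.1.1.1, and tunnel ID 200) and the LSP name lsp1 created in the preceding egress configuration.
system-view
interface Tunnel2
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 200
mpls te signal-protocol cr-static
mpls te passive-tunnel
mpls te binding bidirectional static-cr-lsp egress lsp1
commit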
Configuring an Associated Bidirectional CR-LSP
Associated bidirectional CR-LSPs provide bandwidth protection for bidirectional services. Bidirectional switching can be performed for associated bidirectional CR-LSPs if faults occur.
Context
Usage Scenario
- MPLS TE tunnels that transmit services are unidirectional. The ingress forwards services to the egress along an MPLS TE tunnel. The egress forwards services to the ingress over IP routes. As a result, the services may be congested because IP links do not reserve bandwidth for these services.
- Two MPLS TE tunnels in opposite directions are established between the ingress and egress. If a fault occurs on an MPLS TE tunnel, a traffic switchover can only be performed for the faulty tunnel, but not for the reverse tunnel. As a result, traffic is interrupted.
A forward CR-LSP and a reverse CR-LSP between two nodes are established. Each CR-LSP is bound to the ingress of its reverse CR-LSP. The two CR-LSPs then form an associated bidirectional CR-LSP. The associated bidirectional CR-LSP is primarily used to prevent traffic congestion. If a fault occurs on one end, the other end is notified of the fault so that both ends trigger traffic switchovers, which ensures that traffic transmission is uninterrupted.
The configurations in this section must be performed on tunnel interfaces of both the forward and reverse CR-LSPs. Each CR-LSP is bound to the ingress of its reverse CR-LSP.
Pre-configuration Tasks
Before configuring an associated bidirectional CR-LSP, complete either of the following tasks:
Create RSVP-TE tunnels in opposite directions for an associated bidirectional dynamic CR-LSP to be established.
Create static CR-LSPs in opposite directions for an associated bidirectional static CR-LSP to be established.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel interface-number
A tunnel interface is created, and the tunnel interface view is displayed.
- Run mpls te reverse-lsp protocol { rsvp-te ingress-lsr-id ingress-lsr-id tunnel-id tunnel-id | static lsp-name lsp-name }
A reverse CR-LSP is configured on the tunnel interface.
Either of the following associated bidirectional CR-LSPs can be established:
- Associated bidirectional static CR-LSP: Two static unidirectional CR-LSPs in opposite directions are bound to each other to form an associated bidirectional static CR-LSP. For this type of CR-LSP, you need to specify static lsp-name lsp-name.
- Associated bidirectional dynamic CR-LSP: Two RSVP-TE tunnels in opposite directions are bound to each other to form an associated bidirectional dynamic CR-LSP. For this type of CR-LSP, you need to specify rsvp-te ingress-lsr-id ingress-lsr-id tunnel-id tunnel-id .
- Run commit
The configuration is committed.
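A minimal example for an associated bidirectional dynamic CR-LSP is shown below, assuming illustrative values (local tunnel interface Tunnel1, remote ingress LSR ID 3.3.3.3, and remote tunnel ID 200). A symmetric configuration is required on the tunnel interface of the reverse CR-LSP.
system-view
interface Tunnel1
mpls te reverse-lsp protocol rsvp-te ingress-lsr-id 3.3.3.3 tunnel-id 200
commit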
Configuring CR-LSP Backup
CR-LSP backup is configured to provide end-to-end protection for a CR-LSP.
Usage Scenario
CR-LSP backup provides an end-to-end path protection for an entire CR-LSP.
A backup CR-LSP is established in either of the following modes:
Hot-standby mode: A backup CR-LSP and a primary CR-LSP are created simultaneously.
Ordinary backup mode: A backup CR-LSP is created only after a primary CR-LSP fails.
The paths of backup CR-LSPs in the preceding modes are different:
Hot standby mode: The path of a backup CR-LSP and the path of a primary CR-LSP overlap only if the backup CR-LSP is established over an explicit path.
Ordinary backup mode: The path of a backup CR-LSP and the path of a primary CR-LSP may overlap in any case.
Hot standby supports best-effort paths. If both the primary and backup CR-LSPs fail, a temporary path, called a best-effort path, is established. All traffic switches to this path. In Figure 1-2232, the path of the primary CR-LSP is PE1 -> P1 -> PE2, and the path of the backup CR-LSP is PE1 -> P2 -> PE2. If both the primary and backup CR-LSPs fail, the node triggers the setup of a best-effort path along the path PE1 -> P2 -> P1 -> PE2.
Pre-configuration Tasks
Before configuring CR-LSP backup, complete the following tasks:
Establish a primary RSVP-TE tunnel.
Enable MPLS, MPLS TE, and RSVP-TE in the MPLS and physical interface views on every node along a bypass tunnel. (See Enabling MPLS TE and RSVP-TE.)
(Optional) Configure the link bandwidth for the backup CR-LSP. (See (Optional) Configuring TE Attributes for a Link.)
(Optional) Configure an explicit path for the backup CR-LSP. (See (Optional) Configuring an Explicit Path.)
Configuring CR-LSP Backup Parameters
A backup CR-LSP is established in either hot standby or ordinary backup mode. A hot-standby CR-LSP and an ordinary backup CR-LSP cannot be established simultaneously.
Context
CR-LSP backup is disabled by default. After CR-LSP backup is configured on the ingress of a primary CR-LSP, the system automatically selects a path for a backup CR-LSP.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The view of the MPLS TE tunnel interface is displayed.
- Run mpls te backup hot-standby [ mode { revertive [ wtr interval ] | non-revertive } | overlap-path | wtr [ interval ] | dynamic-bandwidth ]
The mode of establishing a backup CR-LSP is configured.
Select the following parameters as needed to enable sub-functions:
mode revertive [ wtr interval ]: enables a device to switch traffic back to the primary CR-LSP.
mode non-revertive: disables a device from switching traffic back to the primary CR-LSP.
overlap-path: allows a hot-standby CR-LSP to overlap the primary CR-LSP if no available path is provided for the hot-standby CR-LSP.
wtr interval: sets the time before a traffic switchback is performed.
dynamic-bandwidth: enables a hot-standby CR-LSP to obtain bandwidth resources only after the hot-standby CR-LSP takes over traffic from a faulty primary CR-LSP. This function helps efficiently use network resources and reduce network costs.
The bypass and backup tunnels cannot be configured on the same tunnel interface. The mpls te bypass-tunnel and mpls te backup commands cannot be configured on the same tunnel interface. Also, the mpls te protected-interface and mpls te backup commands cannot be configured on the same tunnel interface.
- (Optional) Run mpls te path explicit-path path-name secondary
An explicit path is specified for the backup CR-LSP.
The mpls te path explicit-path path-name secondary and mpls te backup commands must be configured simultaneously.
- (Optional) Run mpls te affinity property properties [ mask mask-value ] secondary
An affinity is set for the backup CR-LSP.
- (Optional) Run mpls te hop-limit hop-limit-value secondary
A hop limit value is set for the backup CR-LSP.
The mpls te hop-limit hop-limit-value secondary and mpls te backup commands must be configured simultaneously.
- (Optional) Run mpls te backup hot-standby overlap-path
The path overlapping function is configured for the hot-standby CR-LSP.
- (Optional) Run mpls te backup hot-standby dynamic-bandwidth
The dynamic bandwidth adjustment function is enabled for the hot-standby CR-LSP.
This function enables a hot-standby CR-LSP to obtain bandwidth resources only after the hot-standby CR-LSP takes over traffic from a faulty primary CR-LSP. This function helps efficiently use network resources and reduce network costs.
- Run commit
The configuration is committed.
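A minimal example that enables hot standby with revertive switching on an existing TE tunnel interface is shown below, assuming illustrative values (tunnel interface Tunnel1, a WTR time of 15 seconds, and a previously defined explicit path named backup-path).
system-view
interface Tunnel1
mpls te backup hot-standby mode revertive wtr 15
mpls te path explicit-path backup-path secondary
commit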
(Optional) Configuring a Best-effort Path
A best-effort path is configured to take over traffic if both the primary and backup CR-LSPs fail.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The view of the MPLS TE tunnel interface is displayed.
- Run mpls te backup ordinary best-effort
A best-effort path is configured.
- (Optional) Run mpls te affinity property properties [ mask mask-value ] best-effort
The affinity property of the best-effort path is configured.
- (Optional) Run mpls te hop-limit hop-limit-value best-effort
The hop limit is set for the best-effort path.
- Run commit
The configuration is committed.
(Optional) Configuring a Traffic Switching Policy for a Hot-Standby CR-LSP
This section describes how to configure a traffic switching policy for a hot-standby CR-LSP. The traffic switching policy determines whether to switch traffic back to the primary CR-LSP and sets a switchback delay.
Context
When a primary CR-LSP is Down, traffic switches to the hot-standby CR-LSP. When the primary CR-LSP is Up, traffic switches back by default. This configuration provides the flexibility to control the traffic switchover behavior.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The view of the MPLS TE tunnel interface is displayed.
- Run mpls te backup hot-standby mode { revertive [ wtr interval ] | non-revertive }
The traffic switching policy is configured for the hot-standby CR-LSP and the switchback delay is set.
- Run commit
The configurations are committed.
(Optional) Configuring a Manual Switching Mechanism for a Primary/Hot-Standby CR-LSP
This section describes how to configure a manual switching mechanism for a primary/hot-standby CR-LSP. This mechanism allows you to control primary/hot-standby CR-LSP switchovers using commands.
Context
When a primary CR-LSP needs to be adjusted, you can switch the traffic to the hot-standby CR-LSP using the hotstandby-switch force command. After the primary CR-LSP has been adjusted, you can switch back the traffic to the primary CR-LSP using the hotstandby-switch clear command.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The view of the MPLS TE tunnel interface is displayed.
- Run hotstandby-switch { force | clear }
A manual switching mechanism for the primary/hot-standby CR-LSP is configured.
If you specify force, the traffic switches temporarily to the hot-standby CR-LSP. If you specify clear, the traffic switches back to the primary CR-LSP.
(Optional) Configuring CSPF Fast Switching
(Optional) Enabling the Coexistence of Rapid FRR Switching and MPLS TE HSB
When FRR and HSB are enabled for MPLS TE tunnels, enabling the coexistence of MPLS TE HSB and rapid FRR switching improves switching performance.
Context
Before enabling the coexistence of rapid FRR switching and MPLS TE HSB, TE FRR must be deployed on the entire network. HSB must be deployed on the ingress, BFD for TE LSP must be enabled, and the delayed down function must be enabled on the outbound interface of the P node. Otherwise, rapid switching cannot be performed when double points of failure occur.
Configuring Static BFD for CR-LSP
By configuring static BFD for CR-LSP, you can monitor a static CR-LSP or an RSVP CR-LSP.
Enabling BFD Globally
BFD must be enabled globally before configurations relevant to BFD are performed.
Setting BFD Parameters on the Ingress
This section describes how to set BFD parameters on the ingress to monitor CR-LSPs using BFD sessions.
Procedure
- Run system-view
The system view is displayed.
- Run bfd session-name bind mpls-te interface interface-type interface-number te-lsp [ backup ] [ one-arm-echo ]
BFD is configured to monitor the primary or backup LSP bound to a TE tunnel.
If the backup parameter is specified, the BFD session is bound to the backup CR-LSP.
- Run discriminator local discr-value
The local discriminator of the BFD session is set.
- Run discriminator remote discr-value
The remote discriminator of the BFD session is set.
This command cannot be run for a one-arm-echo session.
The local discriminator of the local device must be the same as the remote discriminator of the remote device, and the remote discriminator of the local device must be the same as the local discriminator of the remote device. A discriminator inconsistency causes the BFD session establishment to fail.
- (Optional) Run min-tx-interval tx-interval
The minimum interval at which BFD packets are sent is set.
This command cannot be run for a one-arm-echo session.
- Effective local interval at which BFD packets are sent = MAX { Configured local interval at which BFD packets are sent, Configured remote interval at which BFD packets are received }
- Effective local interval at which BFD packets are received = MAX { Configured remote interval at which BFD packets are sent, Configured local interval at which BFD packets are received }
- Effective local detection interval = Effective local interval at which BFD packets are received x Configured remote detection multiplier
For example:
The local interval at which BFD packets are sent is set to 200 ms, the local interval at which BFD packets are received is set to 300 ms, and the local detection multiplier is set to 4.
The remote interval at which BFD packets are sent is set to 100 ms, the remote interval at which BFD packets are received is set to 600 ms, and the remote detection multiplier is set to 5.
Then,
Effective local interval at which BFD packets are sent = MAX { 200 ms, 600 ms } = 600 ms; effective local interval at which BFD packets are received = MAX { 100 ms, 300 ms } = 300 ms; effective local detection period = 300 ms x 5 = 1500 ms
Effective remote interval at which BFD packets are sent = MAX { 100 ms, 300 ms } = 300 ms; effective remote receiving interval = MAX { 200 ms, 600 ms } = 600 ms; effective remote detection period = 600 ms x 4 = 2400 ms
- (Optional) Run min-rx-interval rx-interval
The local minimum interval at which BFD packets are received is configured.
For a one-arm-echo session, run the min-echo-rx-interval interval command to configure the minimum interval at which the local device receives BFD packets.
- (Optional) Run detect-multiplier multiplier
The local BFD detection multiplier is set.
- Run commit
The configuration is committed.
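A minimal example on the ingress is shown below, assuming illustrative values (session name lsp-bfd, tunnel interface Tunnel1, local discriminator 100, remote discriminator 200, and 100 ms intervals); the egress must mirror the discriminators.
system-view
bfd lsp-bfd bind mpls-te interface Tunnel1 te-lsp
discriminator local 100
discriminator remote 200
min-tx-interval 100
min-rx-interval 100
detect-multiplier 3
commit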
Setting BFD Parameters on the Egress
This section describes how to set BFD parameters on the egress to monitor CR-LSPs using BFD sessions.
Procedure
- Run system-view
The system view is displayed.
- The IP link, LSP, or TE tunnel can be used as the reverse tunnel to inform the ingress of a fault. If there is a reverse LSP or a TE tunnel, use the reverse LSP or the TE tunnel. If no LSP or TE tunnel is established, use an IP link as a reverse tunnel. If the configured reverse tunnel requires BFD detection, you can configure a pair of BFD sessions for it. Run the following commands as required:
Configure a BFD session to monitor reverse channels.
- For an IP link, run bfd session-name bind peer-ip ip-address [ vpn-instance vpn-name ] [ source-ip ip-address ]
- For an LDP LSP, run bfd session-name bind ldp-lsp peer-ip ip-address nexthop ip-address [ interface interface-type interface-number ]
- For a CR-LSP, run bfd session-name bind mpls-te interface tunnel interface-number te-lsp [ backup ]
- For a TE tunnel, run bfd session-name bind mpls-te interface tunnel interface-number
- Run discriminator local discr-value
The local discriminator of the BFD session is set.
- Run discriminator remote discr-value
The remote discriminator of the BFD session is set.
The local discriminator of the local device must be the same as the remote discriminator of the remote device, and the remote discriminator of the local device must be the same as the local discriminator of the remote device. A discriminator inconsistency causes the BFD session establishment to fail.
- (Optional) Run min-tx-interval tx-interval
The local minimum interval at which BFD packets are sent is configured.
If an IP link is used as a reverse tunnel, this parameter is inapplicable.
- Effective local interval at which BFD packets are sent = MAX { Configured local interval at which BFD packets are sent, Configured remote interval at which BFD packets are received }
- Effective local interval at which BFD packets are received = MAX { Configured remote interval at which BFD packets are sent, Configured local interval at which BFD packets are received }
- Effective local detection interval = Effective local interval at which BFD packets are received x Configured remote detection multiplier
For example:
The local interval at which BFD packets are sent is set to 200 ms, the local interval at which BFD packets are received is set to 300 ms, and the local detection multiplier is set to 4.
The remote interval at which BFD packets are sent is set to 100 ms, the remote interval at which BFD packets are received is set to 600 ms, and the remote detection multiplier is set to 5.
Then,
Effective local interval at which BFD packets are sent = MAX { 200 ms, 600 ms } = 600 ms; effective local interval at which BFD packets are received = MAX { 100 ms, 300 ms } = 300 ms; effective local detection period = 300 ms x 5 = 1500 ms
Effective remote interval at which BFD packets are sent = MAX { 100 ms, 300 ms } = 300 ms; effective remote receiving interval = MAX { 200 ms, 600 ms } = 600 ms; effective remote detection period = 600 ms x 4 = 2400 ms
- (Optional) Run min-rx-interval rx-interval
The local minimum interval at which BFD packets are received is set.
- (Optional) Run detect-multiplier multiplier
The BFD detection multiplier is set.
- Run commit
The configurations are committed.
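A minimal example on the egress that uses an IP link as the reverse channel is shown below, assuming illustrative values (session name rev-bfd and peer address 1.1.1.1). Note that the local and remote discriminators are the mirror image of those configured on the ingress.
system-view
bfd rev-bfd bind peer-ip 1.1.1.1
discriminator local 200
discriminator remote 100
commit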
Verifying the Configuration of Static BFD for CR-LSP
After configuring static BFD for CR-LSP, you can check the configurations, for example, whether the status of the BFD sessions is Up.
Procedure
- Run the display bfd session mpls-te interface tunnel-name te-lsp [ verbose ] command to check information about the BFD session on the ingress.
- Run the following commands to view information about the BFD session on the egress.
- Run the display bfd session all [ for-ip | for-lsp | for-te ] [ verbose ] command to check information about all BFD sessions.
- Run the display bfd session static [ for-ip | for-lsp | for-te ] [ verbose ] command to check information about static BFD sessions.
- Run the display bfd session peer-ip peer-ip [ vpn-instance vpn-name ] [ verbose ] command to check information about BFD sessions with reverse IP links.
- Run the display bfd session ldp-lsp peer-ip ip-address [ nexthop nexthop-ip [ interface interface-type interface-number ] ] [ verbose ] command to check information about the BFD sessions with reverse LDP LSPs.
- Run the display bfd session mpls-te interface tunnel-name te-lsp [ verbose ] command to check information about the BFD sessions with reverse CR-LSPs.
- Run the display bfd session mpls-te interface tunnel-name [ verbose ] command to check information about the BFD sessions with reverse TE tunnels.
- Run the following commands to view BFD session statistics.
- Run the display bfd statistics session all [ for-ip | for-lsp | for-te ] command to check statistics about all BFD sessions.
- Run the display bfd statistics session static [ for-ip | for-lsp | for-te ] command to check statistics about static BFD sessions.
- Run the display bfd statistics session peer-ip peer-ip [ vpn-instance vpn-name ] command to check statistics about the BFD sessions with reverse IP links.
- Run the display bfd statistics session ldp-lsp peer-ip peer-ip [ nexthop nexthop-ip [ interface interface-type interface-number ] ] command to check statistics about the BFD sessions with reverse LDP LSPs.
- Run the display bfd statistics session mpls-te interface interface-type interface-number te-lsp command to check statistics about BFD sessions with reverse CR-LSPs.
Configuring Dynamic BFD for CR-LSP
Dynamic BFD for CR-LSP can be configured to monitor an RSVP CR-LSP and protect traffic along a CR-LSP.
Usage Scenario
Compared with static BFD, dynamically creating BFD sessions simplifies configurations and reduces configuration errors.
Currently, dynamic BFD for CR-LSP cannot detect faults in the entire TE tunnel.
BFD for LSP can function properly even though the forward path is an LSP and the reverse path is an IP link. However, if the forward and reverse paths are not established over the same links and a fault occurs, BFD cannot identify which path is faulty. Before deploying BFD, ensure that the forward and reverse paths traverse the same links so that BFD can correctly identify the faulty path.
Enabling BFD Globally
To configure dynamic BFD for CR-LSP, enable BFD globally on the ingress node and the egress node of a tunnel.
Enabling the Capability of Dynamically Creating BFD Sessions on the Ingress
You can enable the ingress node to dynamically create BFD sessions on a TE tunnel either globally or on a specified tunnel interface.
Context
Perform either of the following operations:
Enable MPLS TE BFD globally if most TE tunnels on the ingress need to dynamically create BFD sessions.
Enable MPLS TE BFD on a tunnel interface if some TE tunnels on the ingress need to dynamically create BFD sessions.
Enabling the Capability of Passively Creating BFD Sessions on the Egress
On a unidirectional LSP, the ingress plays the active role and the egress plays the passive role. Creating a BFD session on the ingress triggers the sending of LSP ping request messages to the egress. A BFD session can be automatically established only after the passive end receives the ping packets.
Procedure
- Run system-view
The system view is displayed.
- Run bfd
The BFD view is displayed.
- (Optional) Run passive-session udp-port 3784 peer peer-ip
The destination UDP port number is set for the specified passive BFD session.
- (Optional) Run passive-session detect-multiplier multiplier-value peer peerip-value
The detection multiplier is set for the specified passive BFD session.
- Run mpls-passive
The capability of passively creating BFD sessions is enabled.
After this command is run, a BFD session can be created only after the egress receives an LSP ping request message containing the BFD TLV from the ingress.
- Run commit
The configuration is committed.
(Optional) Adjusting BFD Parameters
BFD parameters are adjusted on the ingress of a tunnel either globally or on a tunnel interface.
Context
Perform either of the following operations:
Adjust global BFD parameters if a majority of TE tunnels on the ingress use the same BFD parameters.
Adjust BFD parameters on an interface if some TE tunnels on the ingress need BFD parameters different from global BFD parameters.
- Effective local interval at which BFD packets are sent = MAX { Configured local interval at which BFD packets are sent, Configured remote interval at which BFD packets are received }
- Effective local interval at which BFD packets are received = MAX { Configured remote interval at which BFD packets are sent, Configured local interval at which BFD packets are received }
- Effective local detection interval = Effective local interval at which BFD packets are received x Configured remote detection multiplier
On the egress of the TE tunnel enabled with the capability of passively creating BFD sessions, the default values of the receiving interval, the sending interval, and the detection multiplier cannot be adjusted. The default values of these three parameters are the configured minimum values on the ingress. Therefore, the BFD detection interval on the ingress and that on the egress of a TE tunnel are as follows:
Effective detection interval on the ingress = Configured interval at which BFD packets are received on the ingress x 3
Effective detection interval on the egress = Configured local interval at which BFD packets are sent on the ingress x Configured detection multiplier on the ingress
Verifying the Configuration of Dynamic BFD for CR-LSP
After configuring dynamic BFD for CR-LSP, you can verify that a CR-LSP is Up and a BFD session is successfully established.
Procedure
- Run the display bfd session dynamic [ verbose ] command to check information about the BFD session on the ingress.
- Run the display bfd session passive-dynamic [ peer-ip peer-ip remote-discriminator discriminator ] [ verbose ] command to check information about the BFD session passively created on the egress.
- Check the BFD statistics.
- Run the display bfd statistics command to check statistics about all BFD sessions.
- Run the display bfd statistics session dynamic command to check statistics about dynamic BFD sessions.
- Run the display mpls bfd session [ protocol rsvp-te | outgoing-interface interface-type interface-number ] [ verbose ] command to check information about the MPLS BFD session.
Configuring an RSVP-TE Tunnel
MPLS TE reserves resources for RSVP-TE tunnels. These tunnels are established along specified paths that bypass congested nodes, which balances traffic on the network.
Enabling MPLS TE and RSVP-TE
MPLS TE and RSVP-TE must be enabled on each LSR in an MPLS domain before TE functions are configured.
Context
If MPLS TE is disabled in the MPLS view, MPLS TE enabled in the interface view is also disabled. As a result, all CR-LSPs configured on this interface go Down, and all configurations associated with these CR-LSPs are deleted.
If MPLS TE is disabled in the interface view, all CR-LSPs on the interface go Down.
If RSVP-TE is disabled on an LSR, RSVP-TE is also disabled on all interfaces on this LSR.
Procedure
- Run system-view
The system view is displayed.
- Run mpls lsr-id lsr-id
An LSR ID is set for a local node.
When configuring an LSR ID, note the following:
Configuring an LSR ID is the prerequisite of all MPLS configurations.
An LSR ID must be manually configured because no default LSR ID is available.
It is recommended that the IP address of a loopback interface on the LSR be used as the LSR ID.
The undo mpls command deletes all MPLS configurations, including the established LDP sessions and LSPs.
- Run mpls
The MPLS view is displayed.
- Run mpls te
MPLS TE is enabled globally.
- Run mpls rsvp-te
RSVP-TE is enabled.
- Run quit
Return to the system view.
- Run interface interface-type interface-number
The view of the interface on which the MPLS TE tunnel is established is displayed.
- Run mpls
MPLS is enabled on an interface.
- Run mpls te
MPLS TE is enabled on the interface.
- Run mpls rsvp-te
RSVP-TE is enabled on the interface.
- Run commit
The configuration is committed.
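For reference, a minimal example of this procedure might look as follows, assuming illustrative values (LSR ID 1.1.1.1 and interface GigabitEthernet0/1/0).
system-view
mpls lsr-id 1.1.1.1
mpls
mpls te
mpls rsvp-te
quit
interface GigabitEthernet0/1/0
mpls
mpls te
mpls rsvp-te
commit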
Configuring CSPF
Constrained Shortest Path First (CSPF) is configured to calculate the shortest path destined for a specified node.
Context
CSPF is configured on all nodes along a path to enable the ingress to calculate a complete path.
CSPF calculates only the shortest path to the specified tunnel destination. During path computation, if there are multiple paths with the same weight, the optimal path is selected using the tie-breaking function.
The tie-breaking function is performed in one of the following modes:
Most-fill: The device selects a link with the largest ratio of used bandwidth to maximum reservable bandwidth. This mode enables the device to effectively use bandwidth resources.
Least-fill: The device selects a link with the smallest ratio of used bandwidth to maximum reservable bandwidth. This mode enables the device to evenly use bandwidth resources on links.
Random: The device selects a link at random. This mode allows CR-LSPs to distribute evenly over links, regardless of bandwidth.
The most-fill and least-fill modes take effect only when the difference in bandwidth usage between two links exceeds 10%. For example, if the bandwidth usage of link A is 50% and that of link B is 45%, the difference is only 5%. In this case, the most-fill and least-fill modes do not take effect, and the random mode is used.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls te cspf
CSPF is enabled on the local node.
- (Optional) Run mpls te cspf preferred-igp { isis [ process-id [ level-1 | level-2 ] ] | ospf [ process-id [ area area-id ] ] }
A preferred IGP is configured. Its process and area or level can also be configured.
- (Optional) Run mpls te cspf multi-instance shortest-path [ preferred-igp { isis | ospf } [ process-id ] ]
CSPF is configured to calculate shortest paths among all IGP processes and areas.
The mpls te cspf multi-instance shortest-path command is mutually exclusive with the mpls te cspf preferred-igp command. If the mpls te cspf multi-instance shortest-path command is run, this command overrides the mpls te cspf preferred-igp command.
- Run mpls te tie-breaking { least-fill | most-fill | random }
A tie-breaking mode for calculating a path for a CR-LSP is specified.
- (Optional) Run mpls te cspf optimize-mode disable
The optimization mode is disabled when CSPF calculates the path.
CSPF provides a method for selecting a path in the MPLS domain. By default, the optimization mode is used for path calculation, and the path calculation is performed from Egress to Ingress. Compared with the common calculation method, the optimization mode has higher efficiency.
The mpls te cspf optimize-mode disable command is used to disable the CSPF optimization mode. After the configuration, the path is calculated from Ingress to Egress.
- Run quit
The system view is displayed.
- Run interface tunnel tunnel-number
The view of the MPLS TE tunnel interface is displayed.
- Run mpls te tie-breaking { least-fill | most-fill | random }
The tie-breaking function for calculating a path is configured for the current tunnel.
The tie-breaking mode can be configured in either the tunnel interface view or the MPLS view. If it is configured in both views, the configuration in the tunnel interface view takes effect.
- Run commit
The configurations are committed.
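A minimal example that enables CSPF and sets tie-breaking modes is shown below, assuming an illustrative tunnel interface Tunnel1. As noted above, the tie-breaking mode configured in the tunnel interface view overrides the one configured in the MPLS view for that tunnel.
system-view
mpls
mpls te cspf
mpls te tie-breaking least-fill
quit
interface Tunnel1
mpls te tie-breaking random
commit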
Configuring IGP TE (OSPF or IS-IS)
After IGP TE is configured on all LSRs in an MPLS domain, a TEDB is generated on each LSR.
Context
Either OSPF TE or IS-IS TE can be used:
If neither OSPF TE nor IS-IS TE is configured, LSRs generate no TE link state advertisement (LSA) or TE Link State PDUs (LSPs) and construct no TEDBs.
TE tunnel paths cannot be automatically computed across areas because IGP TE information is advertised only within an area. In an inter-area scenario, an explicit path can be configured, and the inbound and outbound interfaces of the explicit path must be specified, to prevent a failure to establish the TE tunnel.
OSPF TE
OSPF TE uses Opaque Type 10 LSAs to carry TE attributes. The OSPF Opaque capability must be enabled on each LSR. In addition, TE LSAs are generated only when at least one OSPF neighbor is in the Full state.
IS-IS TE
IS-IS TE uses sub-type-length-value (sub-TLV) fields in the IS reachability TLV (type 22) to carry TE attributes. The IS-IS wide metric attribute must be configured; its value can be wide, compatible, or wide-compatible.
(Optional) Configuring TE Attributes for a Link
TE link attributes, including the link bandwidth, administrative group, affinity, and SRLG, can be configured for you to select links for CR-LSP establishment.
Context
TE link attributes are as follows:
Link bandwidth
The link bandwidth attribute can be set to limit the CR-LSP bandwidth.
If no bandwidth is set for a link, the CR-LSP bandwidth may be higher than the maximum reservable link bandwidth. As a result, the CR-LSP cannot be established.
TE metric of the link
The IGP metric or TE metric of a link can be used for path calculation of a TE tunnel. In this manner, the path calculation of the TE tunnel is more independent of the IGP, and the path of the TE tunnel can be controlled more flexibly.
Administrative group and affinity
An affinity determines attributes for links to be used by an MPLS TE tunnel. The affinity property, together with the link administrative group attribute, is used to determine which links a tunnel uses.
An affinity can be set using either a hexadecimal number or a name.
- Hexadecimal number: A 32-bit hexadecimal number is set for each affinity and link administrative group attribute, which causes planning and computation difficulties. This is the traditional configuration mode of the NetEngine 8000 F.
- Name: This mode is newly supported by the NetEngine 8000 F. Each bit of the 128-bit administrative group and affinity attribute is named, which simplifies configuration and maintenance. This mode is recommended.
SRLG
A shared risk link group (SRLG) is a set of links that are likely to fail concurrently because they share a physical resource (for example, an optical fiber). In an SRLG, if one link fails, the other links in the SRLG may also fail.
An SRLG enhances CR-LSP reliability on an MPLS TE network with CR-LSP hot standby or TE FRR enabled. Two or more links are at the same risk if they share physical resources. For example, links on an interface and its sub-interfaces are in an SRLG. Sub-interfaces share risks with their interface. These sub-interfaces will go down if the interface goes down. If the links of a primary tunnel and a backup or bypass tunnel are in the same SRLG, the links of the backup or bypass tunnel share risks with the links of the primary tunnel. The backup or bypass tunnel will go down if the primary tunnel goes down.
Procedure
- Configure link bandwidth.
The bandwidth value is set on outbound interfaces along links of a TE tunnel that requires sufficient bandwidth.
- Set the TE metric value of a link.
- Configure an affinity and a link administrative group attribute in hexadecimal format.
The modified administrative group takes effect only on LSPs that will be established, not on LSPs that have been established.
After the modified affinity is committed, the system will recalculate a path for the TE tunnel, and the established LSPs in this TE tunnel are affected.
- Name hexadecimal bits of an affinity and a link administrative group attribute.
The modified administrative group takes effect only on LSPs that will be established, not on LSPs that have been established.
After the modified affinity is committed, the system will recalculate a path for the TE tunnel, and the established LSPs in this TE tunnel are affected.
- Configure an SRLG.
On the ingress of a hot-standby CR-LSP or a TE FRR tunnel, perform Steps 1 to 3. On the interface of each SRLG member, perform Step 5 and Step 6.
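The following is a minimal sketch of the link bandwidth portion of the procedure above, using the mpls te bandwidth max-reservable-bandwidth and mpls te bandwidth bc0 commands referenced later in this guide. The interface name and bandwidth values are hypothetical, and the BC0 bandwidth is kept below the maximum reservable bandwidth:
system-view
interface GigabitEthernet0/1/0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 80000
commit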
(Optional) Configuring an Explicit Path
An explicit path is configured on the ingress of an MPLS TE tunnel. It defines the nodes through which the MPLS TE tunnel must pass or the nodes that are excluded from the MPLS TE tunnel.
Context
An explicit path consists of a series of nodes. These nodes are arranged in sequence and form a vector path. An interface IP address on every node is used to identify the node on an explicit path. The loopback IP address of the egress node is usually used as the destination address of an explicit path.
Two adjacent nodes on an explicit path are connected in either of the following modes:
Strict: A hop is directly connected to its next hop.
Loose: Other nodes may exist between a hop and its next hop.
The strict and loose modes are used either separately or simultaneously.
TE tunnels are classified as intra-area tunnels and inter-area tunnels. In this situation, areas indicate OSPF and IS-IS areas, but not an autonomous system (AS) running the Border Gateway Protocol (BGP). OSPF areas are divided based on different area IDs while IS-IS areas are divided based on different levels.
Intra-area tunnel: is a TE tunnel in a single OSPF or IS-IS area. An intra-area tunnel can be established over a strict or loose explicit path.
Inter-area tunnel: is a TE tunnel traversing multiple OSPF or IS-IS areas. An explicit path must be used to establish an inter-area TE tunnel and an ABR or an Autonomous System Boundary Router (ASBR) must be included in the explicit path.
The explicit path in use can be updated.
Procedure
- Run system-view
The system view is displayed.
- Run explicit-path path-name
An explicit path is created and the explicit path view is displayed.
- Run next hop ip-address [ include [ [ strict | loose ] | [ incoming | outgoing ] ] * | exclude ]
The next-hop address is specified for the explicit path.
The include parameter indicates that the tunnel must pass through the specified node; the exclude parameter indicates that the tunnel must not pass through the specified node.
- (Optional) Run add hop ip-address1 [ include [ [ strict | loose ] | [ incoming | outgoing ] ] * | exclude ] { after | before } ip-address2
A node is added to the explicit path.
- (Optional) Run modify hop ip-address1 ip-address2 [ include [ [ strict | loose ] | [ incoming | outgoing ] ] * | exclude ]
The address of a node on an explicit path is changed.
- (Optional) Run delete hop ip-address
A node is excluded from an explicit path.
- (Optional) Run list hop [ ip-address ]
Information about nodes on an explicit path is displayed.
- Run commit
The configurations are committed.
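The following is a minimal sketch of the procedure above. The path name and addresses are hypothetical; 10.1.1.2 is included as a strict next hop and 3.3.3.3 (the egress LSR ID) as a loose hop:
system-view
explicit-path path1
next hop 10.1.1.2 include strict
next hop 3.3.3.3 include loose
list hop
commit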
(Optional) Disabling TE LSP Flapping Suppression
TE LSP flapping suppression prevents high CPU usage stemming from TE LSP flapping. This function can be disabled.
Configuring an MPLS TE Tunnel Interface
An MPLS TE tunnel is established and managed on a tunnel interface. Therefore, the tunnel interface must be configured on the ingress of an MPLS TE tunnel.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
A tunnel interface is created, and the tunnel interface view is displayed.
- Run either of the following commands to configure the IP address of the tunnel interface:
To assign an IP address to the tunnel interface, run ip address ip-address { mask | mask-length } [ sub ]
The primary IP address must be configured before the secondary IP address can be configured for the tunnel interface.
To configure the tunnel interface to borrow the IP address of another interface, run ip address unnumbered interface interface-type interface-number
An MPLS TE tunnel is unidirectional, so its peer address is irrelevant to traffic forwarding. A tunnel interface can use the local LSR ID as its IP address and does not need to be assigned a separate IP address.
- Run tunnel-protocol mpls te
MPLS TE is configured as a tunnel protocol.
- Run destination ip-address
The destination address of a tunnel is configured, which is usually the LSR ID of the egress.
Various types of tunnels require specific destination addresses. If a tunnel protocol is changed to MPLS TE from another protocol, a configured destination address is deleted automatically and a new destination address needs to be configured.
- Run mpls te tunnel-id tunnel-id
The tunnel ID is configured.
- (Optional) Run mpls te signalled tunnel-name signalled-tunnel-name
The tunnel name carried in RSVP signaling messages is configured.
Perform this step to fulfill the following purposes:
Facilitate TE tunnel management.
Allow a Huawei device to be connected to a non-Huawei device that uses a tunnel name that differs from the tunnel interface name.
- (Optional) Run mpls te bandwidth ct0 ct0-bw-value
The bandwidth is set for an MPLS TE tunnel.
The bandwidth used by the tunnel cannot be higher than the maximum reservable link bandwidth.
The bandwidth used by a tunnel does not need to be set if only a path needs to be configured for an MPLS TE tunnel.
- (Optional) Run mpls te path explicit-path path-name [ secondary ]
An explicit path is configured for an MPLS TE tunnel.
An explicit path does not need to be configured if only the bandwidth needs to be set for an MPLS TE tunnel.
- (Optional) Run mpls te resv-style { ff | se }
A resource reservation style is configured.
The shared explicit (SE) style is used in make-before-break scenarios; the fixed filter (FF) style is used only in a few scenarios.
- (Optional) Run mpls te cspf disable
Constraint shortest path first (CSPF) calculation is disabled when a TE tunnel is being established.
The mpls te cspf disable command is only applicable in the inter-AS VPN Option C scenario. In other scenarios, running this command is not recommended.
- Run commit
The configurations are committed.
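The following is a minimal sketch of the procedure above, assuming hypothetical values: tunnel interface 10, LoopBack0 as the loopback interface used for the LSR ID, egress LSR ID 3.3.3.3, tunnel ID 100, CT0 bandwidth 20000, and the explicit path path1 created earlier:
system-view
interface tunnel 10
ip address unnumbered interface LoopBack0
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 100
mpls te bandwidth ct0 20000
mpls te path explicit-path path1
commit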
(Optional) Configuring Soft Preemption for RSVP-TE Tunnels
The setup and holding priorities and the preemption function are configured to allow TE tunnels to be established preferentially to transmit important services, preventing random resource competition during tunnel establishment.
Context
- Hard preemption: A tunnel with a higher setup priority can preempt resources assigned to a tunnel with a lower holding priority. Consequently, some traffic is dropped on the tunnel with a lower holding priority during the hard preemption process.
- Soft preemption: After a tunnel with a higher setup priority preempts the bandwidth of a tunnel with a lower holding priority, the soft preemption function retains the tunnel with a lower holding priority for a specified period of time. If the ingress finds a better path for this tunnel after the time elapses, the ingress uses the make-before-break (MBB) mechanism to reestablish the tunnel over the new path. If the ingress fails to find a better path after the time elapses, the tunnel goes Down.
Verifying the RSVP-TE Tunnel Configuration
After configuring the RSVP-TE tunnel, you can view statistics about the RSVP-TE tunnel and its status.
Procedure
- Run the display mpls te link-administration bandwidth-allocation [ interface interface-type interface-number ] command to check the allocated link bandwidth information.
- Run the display ospf [ process-id ] mpls-te [ area area-id ] [ self-originated ] command to check OSPF TE information.
- Run either of the following commands to check the IS-IS TE status:
- display isis traffic-eng advertisements [ lsp-id | local ] [ level-1 | level-2 | level-1-2 ] [ process-id | vpn-instance vpn-instance-name ]
- display isis traffic-eng statistics [ process-id | vpn-instance vpn-instance-name ]
- Run the display explicit-path [ [ name ] path-name ] [ verbose ] command to check the configured explicit paths.
- Run the display mpls te cspf destination ip-address [ affinity { properties [ mask mask-value ] | { { include-all | include-any } { pri-in-name-string } &<1-32> | exclude { pri-ex-name-string } &<1-32> } * } | bandwidth ct0 ct0-bandwidth | explicit-path path-name | hop-limit hop-limit-number | metric-type { igp | te } | priority setup-priority | srlg-strict exclude-path-name | tie-breaking { random | most-fill | least-fill } ] * [ hot-standby [ explicit-path hsb-path-name | overlap-path | affinity { hsb-properties [ mask hsb-mask-value ] | { { include-all | include-any } { hsb-in-name-string } &<1-32> | exclude { hsb-ex-name-string } &<1-32> } * } | hop-limit hsb-hop-limit-number | srlg { preferred | strict } ] * ] command to check the path that is calculated using CSPF based on specified conditions.
- Run the display mpls te cspf tedb { all | area area-id | interface ip-address | network-lsa | node [ router-id ] | srlg [ srlg-number ] [ igp-type { isis | ospf } ] | overload-node } command to check information about TEDBs that meet specified conditions and can be used by CSPF to calculate a path.
- Run the display mpls rsvp-te command to check RSVP information.
- Run the display mpls rsvp-te psb-content [ ingress-lsr-id tunnel-id [ lsp-id ] ] command to check information about the RSVP-TE PSB.
- Run the display mpls rsvp-te rsb-content [ ingress-lsr-id tunnel-id lsp-id ] command to check information about the RSVP-TE RSB.
- Run the display mpls rsvp-te established [ interface interface-type interface-number peer-ip-address ] command to check information about the established RSVP LSPs.
- Run the display mpls rsvp-te peer [ interface interface-type interface-number | peer-address ] command to check the RSVP neighbor parameters.
- Run the display mpls rsvp-te reservation [ interface interface-type interface-number peer-ip-address ] command to check information about RSVP resource reservation.
- Run the display mpls rsvp-te request [ interface interface-type interface-number peer-ip-address ] command to check information about RSVP LSP resource reservation requests.
- Run the display mpls rsvp-te sender [ interface interface-type interface-number peer-ip-address ] command to check information about an RSVP transmit end.
- Run the display mpls rsvp-te statistics { global | interface [ interface-type interface-number ] } command to check RSVP-TE statistics.
- Run the display mpls te link-administration admission-control [ interface interface-type interface-number ] command to check tunnels established on the local node.
- Run the display affinity-mapping [ attribute affinity-name ] [ verbose ] command to check information about an affinity name template.
- Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-name ] [ { incoming-interface | interface | outgoing-interface } interface-type interface-number ] [ verbose ] command to check tunnel information.
- Run the display mpls te tunnel statistics or display mpls lsp statistics command to check tunnel statistics.
- Run the display mpls te tunnel-interface command to check information about a tunnel interface on the ingress of a tunnel.
Configuring an Automatic RSVP-TE Tunnel
Automatic RSVP-TE tunnels are generated using the PCE Initiated LSP protocol. Such tunnels do not need to be manually configured.
Usage Scenario
In an SDN solution, a controller can run the PCE Initiated LSP protocol to generate RSVP-TE tunnels, without manual tunnel configuration. Dynamic RSVP-TE signaling adjusts the paths of TE tunnels dynamically based on network changes. To implement reliability functions, such as TE FRR and CR-LSP backup, using RSVP-TE to establish MPLS TE tunnels is recommended.
Enabling MPLS TE and RSVP-TE
Enabling MPLS TE and RSVP-TE on each node and interface in an MPLS domain is a prerequisite for all TE features.
Procedure
- Run system-view
The system view is displayed.
- Run mpls lsr-id lsr-id
An LSR ID is set for a local node.
When configuring an LSR ID, note the following:
Configuring an LSR ID is a prerequisite for all MPLS configurations.
An LSR ID must be manually configured because no default LSR ID is available.
It is recommended that the IP address of a loopback interface on the LSR be used as the LSR ID.
- Run mpls
The MPLS view is displayed.
- Run mpls te
MPLS TE is enabled globally.
- Run mpls rsvp-te
RSVP-TE is enabled.
- Run quit
Return to the system view.
- Run commit
The configuration is committed.
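The following is a minimal sketch of the procedure above on one node, assuming the loopback address 1.1.1.1 is used as the LSR ID. The interface-level enabling shown after the global part is an assumption based on the task description (each node and interface must be enabled); the interface name is hypothetical:
system-view
mpls lsr-id 1.1.1.1
mpls
mpls te
mpls rsvp-te
quit
interface GigabitEthernet0/1/0
mpls
mpls te
mpls rsvp-te
commit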
(Optional) Configuring CSPF
CSPF is configured to calculate the shortest path destined for a specified node.
Context
After CSPF is configured, it can be used to calculate paths if the connection between the forwarder and the controller is interrupted. If CSPF is not configured and this connection is interrupted, no path can be created, because only the controller can calculate paths.
CSPF calculates only the shortest path to the specified tunnel destination. During path computation, if there are multiple paths with the same weight, the optimal path is selected using the tie-breaking function.
The tie-breaking function is performed in one of the following modes:
Most-fill: The device selects a link with the largest ratio of used bandwidth to maximum reservable bandwidth. This mode enables the device to effectively use bandwidth resources.
Least-fill: The device selects a link with the smallest ratio of used bandwidth to maximum reservable bandwidth. This mode enables the device to evenly use bandwidth resources on links.
Random: The device selects a link at random. This mode allows CR-LSPs to distribute evenly over links, regardless of bandwidth.
The most-fill and least-fill modes take effect only when the difference in bandwidth usage between two links exceeds 10%. For example, if the bandwidth usage of link A is 50% and that of link B is 45%, the difference is only 5%. In this case, the most-fill and least-fill modes do not take effect, and the random mode is used instead.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls te cspf
CSPF is enabled on the local node.
- (Optional) Run mpls te cspf preferred-igp { isis [ process-id [ level-1 | level-2 ] ] | ospf [ process-id [ area area-id ] ] }
A preferred IGP is configured. Its process and area or level can also be configured.
- (Optional) Run mpls te cspf multi-instance shortest-path [ preferred-igp { isis | ospf } [ process-id ] ]
CSPF is configured to calculate shortest paths among all IGP processes and areas.
The mpls te cspf multi-instance shortest-path command and the mpls te cspf preferred-igp command are mutually exclusive. If the mpls te cspf multi-instance shortest-path command is run, it overrides the mpls te cspf preferred-igp command.
- Run mpls te tie-breaking { least-fill | most-fill | random }
A tie-breaking mode for calculating a path for a CR-LSP is specified.
- (Optional) Run mpls te cspf optimize-mode disable
The optimization mode is disabled when CSPF calculates the path.
CSPF provides a method for selecting a path in the MPLS domain. By default, the optimization mode is used, and the path is calculated from the egress to the ingress. Compared with the common calculation method, the optimization mode is more efficient.
The mpls te cspf optimize-mode disable command disables the CSPF optimization mode. After this command is run, the path is calculated from the ingress to the egress.
- Run quit
The system view is displayed.
- Run interface tunnel tunnel-number
The view of the MPLS TE tunnel interface is displayed.
- Run mpls te tie-breaking { least-fill | most-fill | random }
The tie-breaking function for calculating a path is configured for the current tunnel.
The tie-breaking mode can be configured in either the tunnel interface view or the MPLS view. If the tie-breaking mode is configured in both views, the configuration in the tunnel interface view takes effect.
- Run commit
The configurations are committed.
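The following is a minimal sketch of the procedure above: CSPF is enabled, least-fill is set globally, and random is set on a hypothetical tunnel interface, where it overrides the global setting for that tunnel:
system-view
mpls
mpls te cspf
mpls te tie-breaking least-fill
quit
interface tunnel 10
mpls te tie-breaking random
commit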
Configuring IGP TE (OSPF or IS-IS)
After IGP TE is configured on all LSRs in an MPLS domain, a TEDB is generated on each LSR.
Context
Either OSPF TE or IS-IS TE can be used:
If neither OSPF TE nor IS-IS TE is configured, LSRs generate no TE link state advertisements (LSAs) or TE link state PDUs (LSPs) and construct no TEDBs.
TE tunnels cannot be automatically established across IGP areas. In an inter-area scenario, configure an explicit path and specify the inbound and outbound interfaces on the explicit path to prevent a failure to establish the TE tunnel.
OSPF TE
OSPF TE uses Opaque Type 10 LSAs to carry TE attributes. The OSPF Opaque capability must be enabled on each LSR. In addition, TE LSAs are generated only when at least one OSPF neighbor is in the Full state.
IS-IS TE
IS-IS TE uses a sub-type-length-value (sub-TLV) in the IS reachability TLV (TLV 22) to carry TE attributes. The IS-IS wide metric attribute must be configured, and its value can be wide, compatible, or wide-compatible.
Configuring the Automatic RSVP-TE Tunnel Capability on a PCC
The PCE Initiated LSP protocol needs to be configured for automatic RSVP-TE tunnels. A controller runs this protocol to deliver tunnel and path information to the ingress on which a forwarder resides. Upon receipt of the information, the ingress automatically establishes an RSVP-TE tunnel.
Context
The PCE Initiated LSP protocol is used to implement the automatic RSVP-TE tunnel function. A PCE client (PCC) (ingress) establishes a PCE link to a PCE server (controller). The controller delivers tunnel and path information to a forwarder configured on the ingress. The ingress uses the information to automatically establish a tunnel and reports LSP status information to the controller along the PCE link.
Procedure
- Run system-view
The system view is displayed.
- Run pce-client
A PCC is configured, and the PCC view is displayed.
- Run capability initiated-lsp
The initiated-LSP capability and RSVP-TE are enabled.
- Run connect-server ip-address
A candidate server is specified for the PCC.
- (Optional) Configure a PCC to delete LSPs whose establishment is triggered by a controller if the PCE fails.
- Run the quit command to return to the PCE client view.
- Run the quit command to return to the system view.
- Run the mpls command to enter the MPLS view.
- Run the mpls te pce cleanup initiated-lsp command to enable a PCC to delete LSPs whose establishment is triggered by a controller if the PCE fails.
- Run commit
The configuration is committed.
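The following is a minimal sketch of the procedure above, assuming 10.2.2.2 is the controller (PCE server) address:
system-view
pce-client
capability initiated-lsp
connect-server 10.2.2.2
quit
quit
mpls
mpls te pce cleanup initiated-lsp
commit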
Configuring Dynamic BFD For Initiated RSVP-TE LSP
Configuring Dynamic BFD for Initiated RSVP-TE Tunnel
RSVP-TE tunnels established by a controller using the PCE Initiated LSP protocol can be monitored only by dynamic BFD.
(Optional) Enabling Traffic Statistics Collection for Automatic Tunnels
A device can be enabled to collect traffic statistics on RSVP-TE tunnels that are established by the PCE Initiated LSP protocol.
Context
To view traffic information about an automatic tunnel, perform the following steps to enable a device to collect traffic statistics on the automatic tunnel.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
MPLS is enabled, and the MPLS view is displayed.
- Run quit
The system view is displayed.
- Run mpls traffic-statistics
MPLS traffic statistics collection is enabled globally, and the traffic statistics view is displayed.
- Run te auto-primary-tunnel pce-initiated-lsp
Traffic statistics collection for automatic tunnels is enabled.
- Run commit
The configuration is committed.
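The following is a minimal sketch of the procedure above:
system-view
mpls
quit
mpls traffic-statistics
te auto-primary-tunnel pce-initiated-lsp
commit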
Verifying the Automatic RSVP-TE Tunnel Configuration
After configuring an automatic RSVP-TE tunnel, you can check information about the RSVP-TE tunnel and its status statistics.
Procedure
- Run the following commands to check IS-IS TE information:
- display isis traffic-eng advertisements [ { level-1 | level-2 | level-1-2 } | { lsp-id | local } ] * [ process-id | [ vpn-instance vpn-instance-name ] ]
- display isis traffic-eng statistics [ process-id | [ vpn-instance vpn-instance-name ] ]
- Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-name ] [ { incoming-interface | interface | outgoing-interface } interface-type interface-number ] [ verbose ] command to check tunnel information.
- Run the display mpls te tunnel statistics command to view TE tunnel statistics.
- Run the display mpls te tunnel-interface [ tunnel tunnel-number ] command to check information about a tunnel interface on the ingress.
Adjusting RSVP Signaling Parameters
RSVP-TE supports various signaling parameters, which can be adjusted to meet requirements for reliability, network resources, and advanced MPLS TE functions.
Configuring the RSVP Hello Extension
The RSVP Hello extension rapidly monitors the connectivity of RSVP neighbors.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls rsvp-te hello
The RSVP Hello extension is enabled on the local node.
- Run mpls rsvp-te hello-lost times
The maximum number of Hello messages that can be discarded is set.
- Run mpls rsvp-te timer hello interval
The interval at which Hello messages are refreshed is set.
If the refresh interval is changed, the modification takes effect after the existing refresh timer expires.
- Run quit
The system view is displayed.
- Run interface interface-type interface-number
The view of an RSVP-enabled interface is displayed.
- Run mpls rsvp-te hello
The RSVP Hello extension is enabled on an interface.
The RSVP Hello extension rapidly detects the reachability of RSVP neighbors. For details, see relevant standards.
- Run commit
The configurations are committed.
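The following is a minimal sketch of the procedure above. The interface name and timer values are hypothetical:
system-view
mpls
mpls rsvp-te hello
mpls rsvp-te hello-lost 4
mpls rsvp-te timer hello 5
quit
interface GigabitEthernet0/1/0
mpls rsvp-te hello
commit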
Configuring an RSVP Timer
An RSVP timer is configured to define the interval at which Path and Resv messages are refreshed and the timeout multiplier of the RSVP path state blocks (PSBs) and reservation state blocks (RSBs).
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls rsvp-te timer refresh interval
The interval at which Path and Resv messages are refreshed is set.
If the refresh interval is modified, the modification takes effect after the existing refresh timer expires. Do not set a long refresh interval or frequently modify the refresh interval.
- Run mpls rsvp-te keep-multiplier number
The PSB and RSB timeout multiplier is set.
- Run commit
The configurations are committed.
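The following is a minimal sketch of the procedure above, with a hypothetical refresh interval and the timeout multiplier recommended later in this section:
system-view
mpls
mpls rsvp-te timer refresh 60
mpls rsvp-te keep-multiplier 5
commit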
(Optional) Configuring Reliable RSVP Message Transmission
When BFD is not configured on a network, reliable RSVP message transmission can be configured to increase the success rate of detecting link faults, which minimizes long-term traffic loss caused by intermittent link disconnections.
Configuring RSVP-TE Srefresh
Enabling summary refresh (Srefresh) on the interfaces connecting two RSVP neighboring nodes reduces network overhead and improves network performance. After Srefresh is enabled, retransmission of Srefresh messages is automatically enabled on the interfaces.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
Srefresh enabled in the MPLS view takes effect on an entire device.
- Run mpls rsvp-te srefresh
Srefresh is enabled.
- (Optional) Run mpls rsvp-te timer retransmission { increment-value incvalue | retransmit-value retvalue } *
The retransmission parameters are set.
- (Optional) Enable the summary refresh (Srefresh) forward compatibility.
When the primary and backup CR-LSPs share the same link, the nodes on both ends of the link may run different versions. If the upstream node runs a version earlier than V8 and the downstream node runs V8, Srefresh incompatibility occurs. To address this problem, run the mpls rsvp-te srefresh compatible command on the interface that connects the downstream device to the upstream node to enable Srefresh compatibility.
This command can be run only on a downstream node running V8 whose upstream node runs a version earlier than V8. This ensures that Srefresh can be properly negotiated between the two nodes.
- Run commit
The configuration is committed.
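The following is a minimal sketch of enabling Srefresh with the default retransmission parameters left unchanged:
system-view
mpls
mpls rsvp-te srefresh
commit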
Enabling RSVP-TE Reservation Confirmation
RSVP-TE reservation confirmation configured on the egress of a tunnel verifies that resources are successfully reserved.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls rsvp-te resvconfirm
RSVP-TE reservation confirmation is enabled.
After a node receives a Path message, it initiates reservation confirmation by sending a Resv message carrying an object that requests reservation confirmation.
Receiving a ResvConf message does not mean that resource reservation has succeeded along the entire path. It means only that resources have been successfully reserved on the farthest upstream node that the Resv message reaches, and these resources may later be preempted by other applications.
- Run commit
The configuration is committed.
Changing the PSB and RSB Timeout Multiplier
The PSB and RSB timeout multiplier defines the maximum number of signaling packets that can be discarded in a weak signaling environment.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls rsvp-te keep-multiplier number
The PSB and RSB timeout multiplier is set.
Set the PSB and RSB timeout multiplier to a value greater than or equal to 5. This setting prevents the PSB and RSB from aging or being deleted if they fail to be refreshed when a large number of services are transmitted.
- Run commit
The configuration is committed.
Verifying the Configuration of Adjusting RSVP Signaling Parameters
After adjusting RSVP signaling parameters, you can view the refresh parameters, RSVP reservation confirmation status, RSVP Hello extension status, and RSVP timer parameters.
Procedure
- Run the display mpls rsvp-te command to check RSVP-TE configurations.
- Run the display mpls rsvp-te psb-content [ ingress-lsr-id tunnel-id lsp-id ] command to check RSVP-TE PSB information.
- Run the display mpls rsvp-te rsb-content [ ingress-lsr-id tunnel-id lsp-id ] command to check RSVP-TE RSB information.
- Run the display mpls rsvp-te statistics { global | interface [ interface-type interface-number ] } command to check RSVP-TE statistics.
Configuring Dynamic BFD for RSVP
This section describes how to configure a dynamic BFD session to detect faults in links between RSVP neighbors.
Usage Scenario
BFD for RSVP is used with TE FRR when a Layer 2 device exists on the primary LSP between a PLR and its downstream RSVP neighbor.
The interval at which a neighbor is declared Down is three times the interval at which RSVP Hello messages are sent, which means that a fault in an RSVP neighbor can be detected only at the second level.
If a Layer 2 device exists on a link between RSVP nodes, an RSVP node cannot rapidly detect a link fault, which results in a great loss of data.
BFD rapidly detects faults in links or nodes on which RSVP adjacencies are deployed. If BFD detects a fault, it notifies the RSVP module of the fault and instructs the RSVP module to switch traffic to a bypass tunnel.
Enabling BFD Globally
Enabling BFD for RSVP
You can enable BFD for RSVP either globally or on a specified interface.
Context
Perform either of the following operations:
Enable BFD for RSVP globally if most RSVP interfaces on a node need BFD for RSVP.
Enable BFD for RSVP on an RSVP interface if some RSVP interfaces on a node need BFD for RSVP.
(Optional) Adjusting BFD Parameters
BFD parameters can be adjusted either globally or on a specific RSVP interface after BFD for RSVP is configured.
Context
Perform either of the following operations:
Adjust global BFD parameters if most RSVP interfaces on a node use the same BFD parameters.
Adjust BFD parameters on an RSVP interface if some RSVP interfaces require BFD parameters different from global BFD parameters.
Verifying the Configuration of Dynamic BFD for RSVP
After configuring dynamic BFD for RSVP, you can view the status of a BFD session for RSVP.
Procedure
- Run the display mpls rsvp-te bfd session { all | interface interface-type interface-number | peer ip-address } [ verbose ] command to check information about the BFD for RSVP session.
- Run the display mpls rsvp-te interface [ interface-type interface-number ] command to view BFD for RSVP configurations on a specific interface.
Configuring Self-Ping for RSVP-TE
Self-ping is a connectivity check method for RSVP-TE LSPs.
Context
After an RSVP-TE LSP is established, the system sets the LSP status to up, without waiting for forwarding relationships to be completely established between nodes on the forwarding path. If service traffic is imported to the LSP before all forwarding relationships are established, some early traffic may be lost.
Self-ping can address this issue by checking whether the LSP can forward traffic.
Self-ping can be configured globally or for a specified tunnel. If both are configured, the tunnel-specific configuration takes effect.
Procedure
- Configure self-ping globally.
- Configure self-ping for a specified tunnel.
- (Optional) Block self-ping for a specified tunnel.
If self-ping is enabled globally but this function should not be enabled for a tunnel, you can perform the following steps to block self-ping for the tunnel:
- (Optional) Configure whitelist session-CAR for self-ping.
When the self-ping service suffers a traffic burst, bandwidth may be preempted among self-ping sessions. To resolve this problem, you can configure whitelist session-CAR for self-ping to isolate bandwidth resources by session. If the default parameters of whitelist session-CAR for self-ping do not meet service requirements, you can adjust them as required.
Verifying the Configuration
After configuring self-ping for RSVP-TE, verify the configuration.
- Run the display mpls te tunnel-interface command to check the self-ping configuration. In the command output, the Self-Ping field indicates whether self-ping is enabled, and the Self-Ping Duration field indicates the self-ping duration.
- Run the display cpu-defend whitelist session-car self-ping statistics slot slot-id command to check whitelist session-CAR statistics about self-ping packets on a specified interface board.
To check the statistics in a coming period of time, you can run the reset cpu-defend whitelist session-car self-ping statistics slot slot-id command to clear the existing whitelist session-CAR statistics about self-ping packets first. Then, after the period elapses, run the display cpu-defend whitelist session-car self-ping statistics slot slot-id command. In this case, all the statistics are newly generated, facilitating statistics query.
Cleared whitelist session-CAR statistics cannot be restored. Exercise caution when running the reset command.
Configuring RSVP Authentication
RSVP authentication is configured to protect a node from attacks and improve network security.
Usage Scenario
An unauthorized node attempts to establish an RSVP neighbor relationship with a local node.
A remote node constructs forged RSVP messages to establish an RSVP neighbor relationship with a local node and then initiates attacks to the local node.
RSVP key authentication cannot prevent replay attacks or RSVP message mis-sequencing during network congestion. RSVP message mis-sequencing causes authentication termination between RSVP neighbors. The handshake function and the message window function are used together with RSVP key authentication to prevent the preceding problems.
CR-LSP flapping may lead to frequent re-establishment of RSVP neighbor relationships. As a result, the handshake function is repeatedly performed and RSVP authentication is prolonged. An RSVP authentication lifetime is set to resolve the preceding problems. If no CR-LSP exists, RSVP neighbors still retain their neighbor relationship until the RSVP authentication lifetime expires.
Configuring an RSVP Authentication Mode
RSVP authentication modes are configured between RSVP neighboring nodes or between the interfaces of RSVP neighboring nodes. The keys on both ends to be authenticated must be the same; otherwise, RSVP authentication fails, and RSVP neighboring nodes discard received packets.
Context
RSVP authentication in the key mode is used to prevent an unauthorized node from establishing an RSVP neighbor relationship with a local node. It can also prevent a remote node from constructing forged packets to establish an RSVP neighbor relationship with the local node.
Local interface-based authentication
Local interface-based authentication is performed between interfaces connecting a point of local repair (PLR) and a merge point (MP) in an inter-domain MPLS TE FRR scenario.
- Local interface-based authentication is recommended on a network configured with inter-domain MPLS TE FRR.
- Local interface- or neighbor interface-based authentication can be used on a network that is not configured with inter-domain MPLS TE FRR.
Neighbor node-based authentication
Neighbor node-based authentication takes effect on an entire device. It is usually performed between a PLR and an MP based on LSR IDs.
This authentication mode is recommended on a network with non-inter-domain MPLS TE FRR.
Neighbor interface-based authentication
Neighbor interface-based authentication is performed between interfaces connecting two LSRs, for example, between the interfaces connecting LSRA and LSRB shown in Figure 1-2233.
Local interface- or neighbor interface-based authentication can be used on a network that is not configured with inter-domain MPLS TE FRR.
Each pair of RSVP neighbors must use the same key; otherwise, RSVP authentication fails, and all received RSVP messages are discarded.
Table 1 Rules for RSVP authentication mode selection describes the differences between the local interface-based, neighbor node-based, and neighbor interface-based authentication modes.
RSVP Key Authentication | Local Interface-based Authentication | Neighbor Node-based Authentication | Neighbor Interface-based Authentication
---|---|---|---
Authentication mode | Local interface-based authentication | RSVP neighbor-based authentication | RSVP neighbor interface-based authentication
Priority | High | Medium | Low
Applicable environment | Any network | Non-inter-area network | Networks on which MPLS TE FRR is enabled and primary CR-LSPs are in the FRR Inuse state
Advantages | N/A | Simple configuration | N/A
(Optional) Setting RSVP Authentication Lifetime
The RSVP authentication lifetime is set to prevent RSVP authentication from being prolonged when CR-LSP flapping causes frequent reestablishment of RSVP neighbor relationships and repeatedly performed handshake.
(Optional) Configuring the Handshake Function
The handshake function helps RSVP key authentication prevent replay attacks.
Context
If both the handshake function and the authentication lifetime are configured between neighbors, the lifetime must be greater than the interval at which RSVP update messages are sent. If the lifetime is shorter than this interval, the authentication relationship may be deleted because no RSVP update message is received within the lifetime. As a result, the handshake mechanism is performed again when a new update message is received, and an RSVP-TE tunnel may be deleted or fail to be established.
(Optional) Configuring the Message Window Function
The message window function prevents RSVP message mis-sequence. RSVP message mis-sequence terminates RSVP authentication between neighboring nodes.
Configuring Whitelist Session-CAR for RSVP-TE
You can configure whitelist session-CAR for RSVP-TE to isolate bandwidth resources by session for RSVP-TE packets. This configuration prevents bandwidth preemption among RSVP-TE sessions in the case of a traffic burst.
Context
When the RSVP-TE service suffers a traffic burst, bandwidth may be preempted among RSVP-TE sessions. To resolve this problem, you can configure whitelist session-CAR for RSVP-TE to isolate bandwidth resources by session. If the default parameters of whitelist session-CAR for RSVP-TE do not meet service requirements, you can adjust them as required.
Procedure
- Run system-view
The system view is displayed.
- Run whitelist session-car rsvp-te disable
Whitelist session-CAR for RSVP-TE is disabled.
- (Optional) Run whitelist session-car rsvp-te { cir cir-value | cbs cbs-value | pir pir-value | pbs pbs-value } *
Parameters of whitelist session-CAR for RSVP-TE are configured.
- Run commit
The configuration is committed.
Verifying the Configuration
After configuring whitelist session-CAR for RSVP-TE, you can verify the configuration.
Run the display cpu-defend whitelist session-car rsvp-te statistics slot slot-id command to check whitelist session-CAR statistics about RSVP-TE packets on a specified interface board.
To check the statistics in a coming period of time, you can run the reset cpu-defend whitelist session-car rsvp-te statistics slot slot-id command to clear the existing whitelist session-CAR statistics about RSVP-TE packets first. Then, after the period elapses, run the display cpu-defend whitelist session-car rsvp-te statistics slot slot-id command. In this case, all the statistics are newly generated, facilitating statistics query.
Cleared whitelist session-CAR statistics cannot be restored. Exercise caution when running the reset command.
Configuring Micro-Isolation Protocol CAR for RSVP-TE
Context
Micro-isolation CAR for RSVP-TE is enabled by default to implement micro-isolation protection for RSVP-TE connection establishment packets. If a device is attacked, messages of one RSVP-TE session may preempt bandwidth of other sessions. Therefore, you are advised to keep this function enabled.
Procedure
- Run system-view
The system view is displayed.
- Run micro-isolation protocol-car rsvp-te { cir cir-value | cbs cbs-value | pir pir-value | pbs pbs-value } *
Micro-isolation CAR parameters are configured for RSVP-TE.
In normal cases, you are advised to use the default values of these parameters. pir-value must be greater than or equal to cir-value, and pbs-value must be greater than or equal to cbs-value.
- (Optional) Run micro-isolation protocol-car rsvp-te disable
Micro-isolation CAR is disabled for RSVP-TE.
Micro-isolation CAR for RSVP-TE is enabled by default. To disable micro-isolation for RSVP-TE packets, run the micro-isolation protocol-car rsvp-te disable command. In normal cases, you are advised to keep micro-isolation CAR enabled for RSVP-TE.
- Run commit
The configuration is committed.
Configuring an RSVP GR Helper
An RSVP GR Helper is configured to allow devices along an RSVP-TE tunnel to retain RSVP sessions during a master/backup switchover.
Usage Scenario
The NetEngine 8000 F can only function as a GR Helper to help a neighbor node complete RSVP GR. The RSVP GR Helper needs to be configured only after GR is enabled on the neighbor node that supports the RSVP GR Restarter function. If a local device is connected only to NetEngine 8000 Fs running the same version as the local device, there is no need to configure the RSVP GR Helper on the local device.
Enabling the RSVP Hello Extension
The RSVP Hello extension is configured on a GR node and its neighbor to rapidly monitor reachability between these RSVP nodes.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls rsvp-te hello
The RSVP Hello extension is enabled globally.
- Run quit
The system view is displayed.
- Run interface interface-type interface-number
The view of an RSVP-enabled interface is displayed.
- Run mpls rsvp-te hello
The RSVP Hello extension is enabled on an interface.
After the RSVP Hello extension is enabled globally on a node, enable the RSVP Hello extension on each interface of the node.
- Run commit
The configuration is committed.
Enabling the RSVP GR Support Capability
The RSVP GR support capability helps a node support its neighbors' GR capabilities.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls rsvp-te
RSVP-TE is enabled.
- Run mpls rsvp-te hello
The RSVP Hello extension is enabled on the local node.
- Run mpls rsvp-te hello support-peer-gr
The RSVP GR support function is enabled.
- Run commit
The configurations are committed.
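The following is a minimal sketch of the procedure above:
system-view
mpls
mpls rsvp-te
mpls rsvp-te hello
mpls rsvp-te hello support-peer-gr
commit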
(Optional) Configuring a Hello Session Between RSVP GR Nodes
If TE FRR is deployed, a Hello session must be established between a PLR and an MP. A Hello session must be manually established if it cannot be automatically established between RSVP neighboring nodes.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls rsvp-te
RSVP-TE is enabled.
- Run mpls rsvp-te hello
The RSVP Hello extension is enabled on the local node.
- Run mpls rsvp-te hello nodeid-session ip-address
A Hello session is established between two RSVP neighboring nodes.
The ip-address parameter specifies the LSR ID of an RSVP neighboring node.
- Run commit
The configurations are committed.
Configuring the Entropy Label for Tunnels
The entropy label can be used to improve load balancing in traffic forwarding.
Usage Scenario
As user networks grow and network services expand, load-balancing techniques are used to increase bandwidth between nodes. A large volume of traffic may result in load imbalance on transit nodes. To address this problem, the entropy label capability can be configured to improve load balancing performance.
Pre-configuration Tasks
Before configuring the entropy label for tunnels, complete the task of Enabling MPLS TE and RSVP-TE.
Configuring an LSR to Deeply Parse IP Packets
This section describes how to enable an LSR to deeply parse IP packets.
Context
After the entropy label function is enabled on the LSR, the LSR uses IP header information to generate an entropy label and adds the label to the packets. The entropy label is used as a key value by a transit node to load-balance traffic. If the length of a data frame carried in a packet exceeds the parsing capability, the LSR fails to parse the IP header or generate an entropy label. Perform the following operations on the LSR:
Enabling the Entropy Label Capability on the Egress of an LSP
The entropy label capability can be configured on the egress of an LSP to load-balance traffic.
Configuring the Entropy Label for Global Tunnels
The entropy label can be configured for global tunnels to improve load balancing performance.
Context
If severe load imbalance occurs, the entropy label can be configured for global tunnels to help transit nodes properly load-balance traffic. The entropy label capability is enabled on the egress for tunnels. The global entropy label configuration on the ingress confirms the tunnel entropy label requirement, and the ingress sends the requirement to the forwarding plane for processing.
(Optional) Configuring an Entropy Label Capability for a Tunnel in the Tunnel Interface View
The entropy label capability can be configured in the tunnel interface view to improve load balancing performance.
Context
If severe load imbalance occurs, the entropy label can be configured in the tunnel interface view to help transit nodes properly load-balance traffic. The entropy label capability is enabled on the egress for tunnels. An entropy label is set on the ingress to confirm the tunnel entropy label requirement, and the ingress sends the requirement to the forwarding plane for processing. If no entropy label is configured in the tunnel interface view, the entropy label capability is determined by the global entropy label capability.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls te
MPLS TE is globally enabled.
- Run quit
The system view is displayed.
- Run interface tunnel tunnel-number
The tunnel interface view is displayed.
- Run tunnel-protocol mpls te
MPLS TE is configured as a tunnel protocol.
- Run mpls te entropy-label
An entropy label is configured for a tunnel in the tunnel interface view.
- Run commit
The configuration is committed.
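The following is a minimal sketch of the procedure above on the ingress, assuming a hypothetical tunnel interface number:
system-view
mpls
mpls te
quit
interface tunnel 10
tunnel-protocol mpls te
mpls te entropy-label
commit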
Configuring the IP-Prefix Tunnel Function
The IP-prefix tunnel function can be configured and used to establish P2P RSVP-TE tunnels in a batch, which simplifies the configuration.
Usage Scenario
When you want to create a large number of P2P RSVP-TE tunnels or create P2P RSVP-TE tunnels to form a full-mesh network, creating them one by one is laborious and complex. To simplify MPLS RSVP-TE tunnel configuration, configure the IP-prefix tunnel function so that P2P RSVP-TE tunnels can be established in a batch.
A full-mesh network is a network on which a P2P RSVP-TE tunnel is established between every two nodes.
Configuring an IP Prefix List
An IP prefix list can be configured to define destination IP addresses used in the ip-prefix tunnel function.
Procedure
- Run system-view
The system view is displayed.
- Run ip ip-prefix ip-prefix-name [ index index-number ] { permit | deny } ip-address mask-length [ greater-equal greater-equal-value ] [ less-equal less-equal-value ]
An IPv4 prefix list is configured.
The ip-prefix-name parameter specifies the destination IP addresses that can or cannot be used to establish P2P TE tunnels.
- Run commit
The configuration is committed.
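The following is a minimal sketch of the procedure above. The prefix list name and the destination address (an egress LSR ID) are hypothetical:
system-view
ip ip-prefix te-dest index 10 permit 3.3.3.3 32
commit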
(Optional) Configuring a P2P TE Tunnel Template
A P2P TE tunnel template can be configured, and MPLS TE tunnel attributes can be set in the template.
Usage Scenario
Before you create P2P TE tunnels in a batch, create a P2P TE tunnel template and configure parameters, such as bandwidth and a path limit, in the template. The mpls te auto-primary-tunnel command can then be run to reference this template. The device automatically uses the parameters configured in the P2P TE tunnel template to create P2P TE tunnels in a batch.
Procedure
- Run system-view
The system view is displayed.
- Run mpls te p2p-template template-name
A P2P TE tunnel template is created, and the P2P TE tunnel template view is displayed.
- Select one or more of the operations listed in Table 1-934 as required.
Table 1-934 Operations
Run the bandwidth ct0 bw-value command to configure the CT0 bandwidth for MPLS TE tunnels.
To provide bandwidth protection for traffic transmitted along a P2P TE tunnel, run the bandwidth command to set the CT0 bandwidth for the tunnel. Nodes on the P2P TE tunnel can then reserve bandwidth for services, which implements bandwidth protection.
Run the record-route [ label ] command to enable route and label recording for MPLS TE tunnels.
This step enables nodes along a P2P TE tunnel to use RSVP messages to record detailed P2P TE tunnel information, including the IP address of each hop. The label parameter in the record-route command enables RSVP messages to record label values.
Run the resv-style { se | ff } command to specify the resource reservation style for MPLS TE tunnels.
Run the path metric-type { igp | te } command to specify the link metric type for MPLS TE tunnels.
Run the affinity property properties [ mask mask-value ] command to configure an affinity constraint for MPLS TE tunnels.
An affinity is a 32-bit vector value used to describe an MPLS link. An affinity and an administrative group attribute together determine the links through which an MPLS TE tunnel passes. The affinity mask specifies which bits the device checks: for the bits selected by the mask, at least one administrative group bit corresponding to an affinity bit of 1 must be 1, and the administrative group bits corresponding to affinity bits of 0 must be 0.
You can use an affinity to control the nodes through which a P2P TE tunnel passes.
Run the affinity primary { include-all | include-any | exclude } bit-name &<1-32> command to configure an affinity for an MPLS TE tunnel.
Before this command is run, run the path-constraint affinity-mapping command in the system view to create an affinity name template. In addition, run the attribute affinity-name bit-sequence bit-number command to configure the mappings between affinity bit values and names in the template view.
Run the hop-limit hop-limit-value command to set the maximum number of hops on an MPLS TE tunnel.
Run this command to set the maximum number of hops that each CR-LSP in an MPLS TE tunnel supports.
Run the tie-breaking { least-fill | most-fill | random } command to configure a rule for selecting a route among multiple routes to the same destination.
Run the priority setup-priority [ hold-priority ] command to set the setup and holding priority values for MPLS TE tunnels.
The setup priority of a tunnel must be no higher than its holding priority. To be specific, a setup priority value must be greater than or equal to a holding priority value.
If resources are insufficient, setting the setup and holding priority values helps a device release LSPs with lower priorities and use the released resources to establish LSPs with higher priorities.
Run the reoptimization [ frequency interval ] command to enable the periodic re-optimization for MPLS TE tunnels.
Periodic re-optimization allows an MPLS TE tunnel to be automatically reestablished over a path with a lower cost. After a path with a lower cost to the same destination has been calculated for a specific reason, such as a cost change, a TE tunnel will be automatically reestablished, optimizing resources on a network.
Run the bfd enable command to enable BFD for CR-LSP.
To rapidly detect LSP faults and improve network reliability, configuring BFD for CR-LSP is recommended.
Run the bfd { min-tx-interval tx-interval | min-rx-interval rx-interval | detect-multiplier multiplier } * command to configure BFD for CR-LSP parameters.
BFD parameters can be set to control BFD detection sensitivity.
Run the fast-reroute [ bandwidth ] command to enable TE FRR.
TE FRR is recommended for primary MPLS TE tunnels, which improves network reliability.
Run the bypass-attributes { bandwidth bandwidth | priority setup-priority [ hold-priority ] } * command to configure bypass tunnel attributes.
A bypass tunnel is established using the configured bypass tunnel attributes. The bypass tunnel bandwidth cannot exceed the primary tunnel bandwidth. The setup priority of a bypass tunnel must be lower than or equal to the holding priority. Both of them must be lower than or equal to those of the primary tunnel.
Run the lsp-tp outbound command to enable traffic policing for MPLS TE tunnels.
Physical links over which a TE tunnel is established may also transmit traffic of other TE tunnels, non-CR-LSP traffic, or even IP traffic, in addition to the TE tunnel traffic. To limit TE traffic within a configured bandwidth range, enable traffic policing for a specific MPLS TE tunnel.
- Run commit
The configuration is committed.
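The following is a minimal sketch of a P2P TE tunnel template that sets the CT0 bandwidth, enables TE FRR, and enables BFD for CR-LSP. The template name and bandwidth value are hypothetical:
system-view
mpls te p2p-template tpl1
bandwidth ct0 10000
fast-reroute
bfd enable
commit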
Using the Automatic Primary Tunnel Function to Establish P2P TE Tunnels in a Batch
The automatic primary tunnel function can be used to establish P2P TE tunnels in a batch.
Context
The automatic primary tunnel function uses a specified IP prefix list in which destination IP addresses are defined so that tunnels to the destination IP addresses can be established in a batch. The automatic primary tunnel function can also use a specified tunnel template that defines public attributes before creating tunnels in a batch.
Procedure
- Run system-view
The system view is displayed.
- Run mpls te auto-primary-tunnel ip-prefix ip-prefix [ p2p-template template-name ]
The automatic primary tunnel function is configured.
- (Optional) Set the hold time for tunnels.
- Run commit
The configuration is committed.
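The following is a minimal sketch of the procedure above, referencing the hypothetical IP prefix list and tunnel template names used in the earlier sketches:
system-view
mpls te auto-primary-tunnel ip-prefix te-dest p2p-template tpl1
commit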
Follow-up Procedure
If errors occur in services transmitted on a TE tunnel and the services cannot be restored, run the reset mpls te auto-primary-tunnel command in the user view to reestablish the TE tunnel to restore the services.
After this command is run, all LSPs in the specified tunnel are torn down and reestablished. If some LSPs are transmitting traffic, the operation causes a traffic interruption. Exercise caution when using this command.
Verifying the IP-Prefix Tunnel Function Configuration
After configuring the IP-prefix tunnel function, verify the tunnel information, including the tunnel status, a P2P TE tunnel template used to establish tunnels, and IP prefix list.
Operations
Operation | Command
---|---
Check the configuration of a P2P TE tunnel template. | display mpls te p2p-template
Check whether P2P TE tunnels are successfully established using the automatic primary tunnel function. | display mpls te tunnel
Check detailed information about the automatic primary tunnel function. | display mpls te tunnel-interface [ auto-primary-tunnel [ tunnel-name ] ]
Configuring Dynamic Bandwidth Reservation
This section describes how to configure dynamic bandwidth reservation on an MPLS TE interface. This configuration enables the MPLS TE interface to dynamically reserve bandwidth for an MPLS TE tunnel to account for the fact that physical bandwidth is variable.
Usage Scenario
The reservable bandwidth values configured on the interfaces along an MPLS TE tunnel are used by the MPLS TE module to check whether a link meets all tunnel bandwidth requirements. If a fixed bandwidth value is configured on an interface and the physical bandwidth of the interface changes, the MPLS TE module cannot correctly evaluate link bandwidth resources when the actual reservable bandwidth differs from the configured bandwidth value. For example, the actual physical bandwidth of a trunk interface on an MPLS TE tunnel is 1 Gbit/s. The maximum reservable bandwidth is set to 800 Mbit/s, and the BC0 bandwidth is set to 600 Mbit/s for the interface. If a member of the trunk interface fails, the trunk interface has its physical bandwidth reduced to 500 Mbit/s, which does not meet the requirements for the maximum reservable bandwidth and BC0 bandwidth. However, the MPLS TE module still attempts to reserve the bandwidth as configured. As a result, bandwidth reservation fails.
To address this issue, you can configure the maximum reservable dynamic bandwidth and BC dynamic bandwidth. The former is the proportion of the maximum reservable bandwidth to the actual physical bandwidth, and the latter is the proportion of the BC bandwidth to the maximum reservable bandwidth. Based on the two proportions, the MPLS TE module can quickly detect physical bandwidth changes along links and preempt the bandwidth of any MPLS TE tunnel that requires more than the available interface bandwidth. If soft preemption is supported by the preempted tunnel, traffic on the tunnel can be smoothly switched to another link with sufficient bandwidth. The smooth traffic switchover is also performed when an interface fails, which minimizes traffic loss.
Pre-configuration Tasks
Before configuring dynamic bandwidth reservation, enable MPLS TE on the interface.
Procedure
- Run system-view
The system view is displayed.
- Run interface interface-type interface-number
The view of an MPLS TE-enabled interface is displayed.
- Run mpls te bandwidth max-reservable-bandwidth dynamic max-dynamic-bw-value
The maximum reservable dynamic bandwidth is configured for the link.
If this command is run in the same interface view as the mpls te bandwidth max-reservable-bandwidth command, the latest configuration overrides the previous one.
- (Optional) Run mpls te bandwidth max-reservable-bandwidth dynamic baseline remain-bandwidth
The device is configured to use the remaining bandwidth of the interface when calculating the maximum reservable dynamic bandwidth for TE.
In scenarios such as channelized sub-interface and bandwidth lease, the remaining bandwidth of an interface changes, but the physical bandwidth does not. In this case, the actual forwarding capability of the interface decreases. If the maximum reservable dynamic bandwidth of the TE tunnel is still calculated based on the physical bandwidth, the calculated TE bandwidth is greater than the actual bandwidth, and the actual forwarding capability of the interface does not meet the bandwidth requirement of the tunnel.
- Run mpls te bandwidth dynamic bc0 bc0-bw-percentage
The BC0 dynamic bandwidth is configured for the link.
If this command is run in the same interface view as the mpls te bandwidth bc0 command, the latest configuration overrides the previous one.
- Run commit
The configuration is committed.
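For example, the following minimal sketch applies the preceding steps to an interface. The interface name (GigabitEthernet0/1/0), the values 80 and 60 (assumed to be percentages, as described above), and the device prompts are illustrative only and must be adapted to the actual deployment.
<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet0/1/0
[~HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth dynamic 80
[*HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth dynamic bc0 60
[*HUAWEI-GigabitEthernet0/1/0] commit
In this sketch, the maximum reservable dynamic bandwidth is 80% of the actual physical bandwidth, and the BC0 dynamic bandwidth is 60% of the maximum reservable bandwidth.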
Adjusting Parameters for Establishing an MPLS TE Tunnel
Multiple attributes are used to establish MPLS TE tunnels flexibly.
Pre-configuration Tasks
Before adjusting parameters for establishing an MPLS TE tunnel, configure an RSVP-TE tunnel.
Configuring an MPLS TE Explicit Path
An explicit path is configured on the ingress of an MPLS TE tunnel to define the nodes through which the MPLS TE tunnel passes and the nodes that are excluded from the MPLS TE tunnel.
Context
An explicit path consists of a series of nodes. These nodes are arranged in sequence and form a vector path. An IP address for an explicit path is an interface IP address on every node. The loopback IP address of the egress node is used as the destination address of an explicit path.
Two adjacent nodes on an explicit path are connected in either of the following modes:
Strict: A hop is directly connected to its next hop.
Loose: Other nodes may exist between a hop and its next hop.
The strict and loose modes can be used simultaneously.
TE tunnels are classified into the following types:
Intra-area tunnel: A TE tunnel is in a single OSPF or IS-IS area, but not in an autonomous system (AS) running the Border Gateway Protocol (BGP).
Inter-area tunnel: A TE tunnel traverses multiple OSPF or IS-IS areas, but not BGP ASs.
A loose explicit path is used to establish an inter-area TE tunnel, on which a loose next hop can only be an area border router (ABR) or an autonomous system boundary router (ASBR).
Procedure
- Run system-view
The system view is displayed.
- Run explicit-path path-name
An explicit path is created and the explicit path view is displayed.
- Run next hop ip-address [ include [ [ strict | loose ] | [ incoming | outgoing ] ] * | exclude ]
The next-hop address is specified for the explicit path.
The include parameter indicates that the tunnel passes through the specified node; the exclude parameter indicates that the tunnel does not pass through the specified node.
- (Optional) Run add hop ip-address1 [ include [ [ strict | loose ] | [ incoming | outgoing ] ] * | exclude ] { after | before } ip-address2
A node is added to the explicit path.
- (Optional) Run modify hop ip-address1 ip-address2 [ include [ [ strict | loose ] | [ incoming | outgoing ] ] * | exclude ]
The address of a node on an explicit path is changed.
- (Optional) Run delete hop ip-address
A node is excluded from an explicit path.
- (Optional) Run list hop [ ip-address ]
Information about nodes on an explicit path is displayed.
- Run commit
The configurations are committed.
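The following minimal sketch creates an explicit path that must traverse 10.1.1.2 as a strict hop and 10.2.1.2 as a loose hop and must not traverse 10.3.1.2. The path name path1 and the IP addresses are examples only, and the prompts are illustrative.
<HUAWEI> system-view
[~HUAWEI] explicit-path path1
[*HUAWEI-explicit-path-path1] next hop 10.1.1.2 include strict
[*HUAWEI-explicit-path-path1] next hop 10.2.1.2 include loose
[*HUAWEI-explicit-path-path1] next hop 10.3.1.2 exclude
[*HUAWEI-explicit-path-path1] commit
The explicit path can then be referenced on the tunnel interface using the mpls te path explicit-path command.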
Setting Priority Values for an MPLS TE Tunnel
The priority values are set on the ingress of an MPLS TE tunnel. Preemption is performed based on the setup and holding priorities during the establishment of an MPLS TE tunnel.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The view of the MPLS TE tunnel interface is displayed.
- Run mpls te priority setup-priority [ hold-priority ]
The priority values are set for the MPLS TE tunnel.
Both the setup and holding priority values range from 0 to 7. The smaller the value, the higher the priority.
The setup priority value must be greater than or equal to the holding priority value; that is, the setup priority must not be higher than the holding priority.
- Run commit
The configuration is committed.
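For example, the following sketch sets both the setup and holding priorities of a tunnel to 1. The tunnel interface name Tunnel1 and the priority values are examples only, and the prompts are illustrative.
<HUAWEI> system-view
[~HUAWEI] interface Tunnel1
[~HUAWEI-Tunnel1] mpls te priority 1 1
[*HUAWEI-Tunnel1] commit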
Setting the Hop Limit for a CR-LSP
The hop limit set on an ingress is the maximum number of hops on a path along which a CR-LSP is to be established. The hop limit is a constraint used during path selection.
Associating CR-LSP Establishment with the Overload Setting
CR-LSP establishment can be associated with the overload setting. This association ensures that CR-LSPs are established over paths excluding overloaded nodes.
Context
When a node is transmitting a large number of services and its system resources are exhausted, the node marks itself overloaded.
Alternatively, when a node is transmitting a large number of services and its CPU is overburdened, an administrator can run the set-overload command to mark the node overloaded.
If there are overloaded nodes on an MPLS TE network, associate CR-LSP establishment with the IS-IS overload setting to ensure that CR-LSPs are established over paths excluding overloaded nodes. This configuration prevents overloaded nodes from being further burdened and improves CR-LSP reliability.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls te path-selection overload
CR-LSP establishment is associated with the IS-IS overload setting. This association allows CSPF to calculate paths excluding overloaded IS-IS nodes.
Before the association is configured, the mpls te cspf command must be run to enable CSPF and the mpls te record-route command must be run to enable the route and label record.
Traffic travels through an existing CR-LSP before a new CR-LSP is established. After the new CR-LSP is established, traffic switches to the new CR-LSP and the original CR-LSP is deleted. This traffic switchover is performed based on the make-before-break mechanism. Traffic is not dropped during the switchover.
The mpls te path-selection overload command has the following influences on CR-LSP establishment:
CSPF recalculates paths excluding overloaded nodes for established CR-LSPs.
CSPF calculates paths excluding overloaded nodes for new CR-LSPs.
This command does not take effect on bypass tunnels or P2MP TE tunnels.
If the ingress or egress is marked overloaded, the mpls te path-selection overload command does not take effect. This means that the established CR-LSPs associated with the ingress or egress will not be reestablished and new CR-LSPs associated with the ingress or egress will also not be established.
- Run commit
The configuration is committed.
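The following minimal sketch associates CR-LSP establishment with the IS-IS overload setting, assuming that CSPF (mpls te cspf) and the route and label record (mpls te record-route) have already been enabled as described above. The device prompts are illustrative.
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te path-selection overload
[*HUAWEI-mpls] commit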
Configuring Route and Label Record
An ingress can be configured to allow routes and labels to be recorded along a path over which an RSVP-TE CR-LSP will be established.
Setting Switching and Deletion Delays
The switching and deletion delays are set to ensure that a CR-LSP is torn down only after a new CR-LSP has been established, which prevents traffic interruption.
Context
MPLS TE uses a make-before-break mechanism. If attributes of an MPLS TE tunnel, such as bandwidth or path change, a new CR-LSP with new attributes is established. Such a CR-LSP is called a modified CR-LSP. The new CR-LSP must be established before the original CR-LSP, also called the primary CR-LSP, is torn down. This prevents data loss and additional bandwidth consumption during traffic switching.
If a forwarding entry associated with the new CR-LSP does not take effect after the original CR-LSP has been torn down, a temporary traffic interruption occurs.
The switching and deletion delays can be set on the ingress of the CR-LSP to prevent the preceding problem.
Importing Traffic to an MPLS TE Tunnel
Before importing traffic to an MPLS TE tunnel, familiarize yourself with the usage scenario and complete the pre-configuration tasks.
Usage Scenario
Methods to Import Traffic to an MPLS TE Tunnel
Use static routes: This is the simplest method for importing traffic to an MPLS TE tunnel. You only need to configure a static route with a TE tunnel interface as the outbound interface. It applies to scenarios where public-network routes are used to import traffic to a TE or LDP over TE tunnel.
Use the auto route mechanism: A TE tunnel is used as a logical link in IGP route calculation, and the tunnel interface is used as the outbound interface of a route. The auto route mechanism can be implemented in either IGP shortcut or forwarding adjacency mode (see Configuring the IGP Shortcut and Configuring Forwarding Adjacency).
Use policy-based routing (PBR): PBR allows a device to select routes based on user-defined policies. TE PBR, the same as IP unicast PBR, is implemented by defining a set of matching rules and behaviors. The rules and behaviors are defined using the apply clause with a TE tunnel interface used as the outbound interface. Packets that do not match the PBR rules are forwarded using IP; packets that match the PBR rules are forwarded over the specified tunnels.
Use a tunnel policy: By default, VPN traffic is forwarded through LDP LSPs. If the default LDP LSPs cannot meet VPN traffic requirements, a tunnel policy is used to direct VPN traffic to a TE tunnel. The tunnel policy can be either a tunnel type prioritizing policy or a tunnel binding policy; select one as needed. This method applies to VPN scenarios.
The preceding methods to import traffic to MPLS TE tunnels apply only to P2P tunnels.
Configuring the IGP Shortcut
An IGP shortcut is configured on the ingress of a CR-LSP. The IGP shortcut prevents a route of a CR-LSP from being advertised to neighbors or used by the neighbors.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The view of the MPLS TE tunnel interface is displayed.
- Run mpls te igp shortcut [ isis | ospf ] or mpls te igp shortcut isis hold-time interval
The IGP shortcut is configured.
hold-time interval specifies the period after which IS-IS responds to the Down status of the TE tunnel.
If a TE tunnel goes Down and this parameter is not specified, IS-IS immediately recalculates routes. If this parameter is specified, IS-IS responds to the Down status of the TE tunnel only after the specified interval elapses and then determines whether to recalculate routes based on the TE tunnel status:
- If the TE tunnel has gone Up again, IS-IS does not recalculate routes.
- If the TE tunnel is still Down, IS-IS recalculates routes.
- Run mpls te igp metric { absolute | relative } value
The IGP metric of the TE tunnel is configured.
Either of the following parameters is set when configuring the metric value used by a TE tunnel during IGP shortcut path calculation:
If absolute is configured, the TE tunnel metric value is equal to the configured metric value.
If relative is configured, the TE tunnel metric value is equal to the sum of the IGP route metric value and relative TE tunnel metric value.
- For IS-IS, run isis enable [ process-id ]
IS-IS is enabled on the tunnel interface.
- For OSPF, run the following commands in sequence.
- Run ospf enable process-id area area-id
OSPF is enabled on the tunnel interface.
You can also run the network address wildcard-mask command in the OSPF area view to enable OSPF on the network segment where the tunnel interface resides.
Run quit
The system view is displayed.
Run ospf [ process-id ]
The OSPF view is displayed.
Run enable traffic-adjustment
IGP Shortcut is enabled.
- Run commit
The configuration is committed.
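For example, the following sketch configures an IGP shortcut for an IS-IS-enabled tunnel interface. The tunnel interface name Tunnel1, the IS-IS process ID 1, and the metric value 10 are examples only, and the prompts are illustrative.
<HUAWEI> system-view
[~HUAWEI] interface Tunnel1
[~HUAWEI-Tunnel1] mpls te igp shortcut isis
[*HUAWEI-Tunnel1] mpls te igp metric absolute 10
[*HUAWEI-Tunnel1] isis enable 1
[*HUAWEI-Tunnel1] commit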
Follow-up Procedure
- For IS-IS, run the following commands in sequence.
Run system-view
The system view is displayed.
Run isis [ process-id ]
An IS-IS process is created, and the IS-IS process view is displayed.
Run avoid-microloop te-tunnel
The IS-IS TE tunnel anti-microloop function is enabled.
(Optional) Run avoid-microloop te-tunnel rib-update-delay rib-update-delay
The delay in delivering the IS-IS routes whose outbound interface is a TE tunnel interface is set.
Run commit
The configuration is committed.
- For OSPF, run the following commands in sequence.
Run system-view
The system view is displayed.
Run ospf [ process-id ]
The OSPF view is displayed.
Run avoid-microloop te-tunnel
The OSPF TE tunnel anti-microloop function is enabled.
(Optional) Run avoid-microloop te-tunnel rib-update-delay rib-update-delay
The delay in delivering the OSPF routes whose outbound interface is a TE tunnel interface is set.
Run commit
The configuration is committed.
Configuring Forwarding Adjacency
The forwarding adjacency is configured on the ingress of a CR-LSP. The forwarding adjacency allows a route of a CR-LSP to be advertised to neighbors so that these neighbors can use this CR-LSP to transmit traffic.
Context
Because a routing protocol performs bidirectional detection on a link, the forwarding adjacency must be enabled on both ends of a tunnel. The forwarding adjacency allows a node to advertise a CR-LSP route to other nodes; therefore, another tunnel for transmitting data packets in the reverse direction must also be configured.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The view of an MPLS TE tunnel interface is displayed.
- Run mpls te igp advertise [ hold-time interval | include-ipv6-isis ] *
The forwarding adjacency is configured.
If IPv6 IS-IS is used, the include-ipv6-isis parameter must be configured.
- Run mpls te igp metric { absolute | relative } value
The IGP metric value of the MPLS TE tunnel is set.
A proper IGP metric value helps correctly advertise and use a CR-LSP route. The metric value of a CR-LSP must be less than the metric value of an unwanted IGP route.
If relative is configured and IS-IS is used as an IGP, this step cannot modify the IS-IS metric value. To change the IS-IS metric value, configure absolute in this step.
- You can select either of the following modes to enable the forwarding adjacency.
For IS-IS, run isis enable [ process-id ]
IS-IS is enabled on the tunnel interface.
- For OSPF, run the following commands in sequence.
- Run the ospf enable [ process-id ] area { area-id | area-id-ipv4 } command to enable OSPF on the tunnel interface.
Run the quit command to return to the system view.
Run the ospf [ process-id ] command to enter the OSPF view.
Run the enable traffic-adjustment advertise command to enable the forwarding adjacency.
- Run commit
The configuration is committed.
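For example, the following sketch configures the forwarding adjacency on a tunnel interface on which IS-IS is used. The tunnel interface name Tunnel1, the IS-IS process ID 1, and the metric value 10 are examples only, and the prompts are illustrative. Remember that an equivalent configuration must also be performed on the tunnel in the reverse direction.
<HUAWEI> system-view
[~HUAWEI] interface Tunnel1
[~HUAWEI-Tunnel1] mpls te igp advertise
[*HUAWEI-Tunnel1] mpls te igp metric absolute 10
[*HUAWEI-Tunnel1] isis enable 1
[*HUAWEI-Tunnel1] commit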
(Optional) Configuring CBTS
A service class can be set for packets that an MPLS TE tunnel allows to pass through.
Context
When services recurse to multiple TE tunnels, the mpls te service-class command is run on the TE tunnel interface to set a service class so that a TE tunnel transmits services of a specified service class.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The MPLS TE tunnel interface view is displayed.
- Run mpls te service-class { service-class & <1-8> | default }
A service class is set for packets that an MPLS TE tunnel allows to pass through.
This command is used only on the ingress of an MPLS TE tunnel.
If the mpls te service-class command is run repeatedly on a tunnel interface, the latest configuration overrides the previous one.
- Run commit
The configuration is committed.
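For example, the following sketch allows the tunnel to carry only packets of two service classes. The tunnel interface name Tunnel1 is an example, the af1 and ef values are assumed to be valid service-class keywords on the device, and the prompts are illustrative.
<HUAWEI> system-view
[~HUAWEI] interface Tunnel1
[~HUAWEI-Tunnel1] mpls te service-class af1 ef
[*HUAWEI-Tunnel1] commit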
Configuring Static BFD for TE
This section describes how to configure static BFD for TE to detect faults in a TE tunnel.
Enabling BFD Globally
BFD must be enabled globally before configurations relevant to BFD are performed.
Setting BFD Parameters on the Ingress
After setting BFD parameters on the ingress, you can use BFD sessions to monitor a TE tunnel.
Procedure
- Run system-view
The system view is displayed.
- Run bfd session-name bind mpls-te interface interface-type interface-number [ one-arm-echo ]
A BFD session is configured to monitor a TE tunnel.
If the specified tunnel is Down, the BFD session cannot be configured.
- Run discriminator local discr-value
The local discriminator of the BFD session is set.
- Run discriminator remote discr-value
The remote discriminator of the BFD session is set.
The local discriminator of the local device and the remote discriminator of the remote device are the same. The remote discriminator of the local device and the local discriminator of the remote device are the same. A discriminator inconsistency causes the BFD session to fail to be established.
Remote discriminators do not need to be set for one-arm sessions.
- (Optional) Run min-tx-interval tx-interval
The local minimum interval at which BFD packets are sent is set.
This command cannot be run for a one-arm-echo session.
- Effective local interval at which BFD packets are sent = MAX { Configured local interval at which BFD packets are sent, Configured remote interval at which BFD packets are received }
- Effective local interval at which BFD packets are received = MAX { Configured remote interval at which BFD packets are sent, Configured local interval at which BFD packets are received }
- Effective local detection interval = Effective local interval at which BFD packets are received x Configured remote detection multiplier
For example:
The local interval at which BFD packets are sent is set to 200 ms, the local interval at which BFD packets are received is set to 300 ms, and the local detection multiplier is set to 4.
The remote interval at which BFD packets are sent is set to 100 ms, the remote interval at which BFD packets are received is set to 600 ms, and the remote detection multiplier is set to 5.
Then,
Effective local interval at which BFD packets are sent = MAX { 200 ms, 600 ms } = 600 ms; effective local interval at which BFD packets are received = MAX { 100 ms, 300 ms } = 300 ms; effective local detection period = 300 ms x 5 = 1500 ms
Effective remote interval at which BFD packets are sent = MAX { 100 ms, 300 ms } = 300 ms; effective remote receiving interval = MAX { 200 ms, 600 ms } = 600 ms; effective remote detection period = 600 ms x 4 = 2400 ms
- (Optional) Run min-rx-interval rx-interval
The local minimum interval at which BFD packets are received is set.
For a one-arm-echo session, run the min-echo-rx-interval interval command to configure the minimum interval at which the local device receives BFD packets.
- (Optional) Run detect-multiplier multiplier
The BFD detection multiplier is set.
- Run commit
The configurations are committed.
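The following minimal sketch configures a static BFD session on the ingress to monitor the TE tunnel Tunnel1, assuming that BFD has been enabled globally. The session name te2egress, the discriminators, the intervals, and the prompts are examples only; the discriminators and intervals must match the peer configuration.
<HUAWEI> system-view
[~HUAWEI] bfd
[*HUAWEI-bfd] quit
[*HUAWEI] bfd te2egress bind mpls-te interface Tunnel1
[*HUAWEI-bfd-session-te2egress] discriminator local 100
[*HUAWEI-bfd-session-te2egress] discriminator remote 200
[*HUAWEI-bfd-session-te2egress] min-tx-interval 100
[*HUAWEI-bfd-session-te2egress] min-rx-interval 100
[*HUAWEI-bfd-session-te2egress] detect-multiplier 4
[*HUAWEI-bfd-session-te2egress] commit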
Setting BFD Parameters on the Egress
This section describes how to set BFD parameters on the egress to monitor CR-LSPs using BFD sessions.
Procedure
- Run system-view
The system view is displayed.
- The IP link, LSP, or TE tunnel can be used as the reverse tunnel to inform the ingress of a fault. If there is a reverse LSP or a TE tunnel, use the reverse LSP or the TE tunnel. If no LSP or TE tunnel is established, use an IP link as a reverse tunnel. If the configured reverse tunnel requires BFD detection, you can configure a pair of BFD sessions for it. Run the following commands as required:
Configure a BFD session to monitor reverse channels.
- For an IP link, run bfd session-name bind peer-ip ip-address [ vpn-instance vpn-name ] [ source-ip ip-address ]
For an LDP LSP, run bfd session-name bind ldp-lsp peer-ip ip-address nexthop ip-address [ interface interface-type interface-number ]
For a CR-LSP, run bfd session-name bind mpls-te interface tunnel interface-number te-lsp [ backup ]
For a TE tunnel, run bfd session-name bind mpls-te interface tunnel interface-number
- Run discriminator local discr-value
The local discriminator of the BFD session is set.
- Run discriminator remote discr-value
The remote discriminator of the BFD session is set.
The local discriminator of the local device and the remote discriminator of the remote device are the same. The remote discriminator of the local device and the local discriminator of the remote device are the same. A discriminator inconsistency causes the BFD session to fail to be established.
- (Optional) Run min-tx-interval tx-interval
The local minimum interval at which BFD packets are sent is configured.
If an IP link is used as a reverse tunnel, this parameter is inapplicable.
- Effective local interval at which BFD packets are sent = MAX { Configured local interval at which BFD packets are sent, Configured remote interval at which BFD packets are received }
- Effective local interval at which BFD packets are received = MAX { Configured remote interval at which BFD packets are sent, Configured local interval at which BFD packets are received }
- Effective local detection interval = Effective local interval at which BFD packets are received x Configured remote detection multiplier
For example:
The local interval at which BFD packets are sent is set to 200 ms, the local interval at which BFD packets are received is set to 300 ms, and the local detection multiplier is set to 4.
The remote interval at which BFD packets are sent is set to 100 ms, the remote interval at which BFD packets are received is set to 600 ms, and the remote detection multiplier is set to 5.
Then,
Effective local interval at which BFD packets are sent = MAX { 200 ms, 600 ms } = 600 ms; effective local interval at which BFD packets are received = MAX { 100 ms, 300 ms } = 300 ms; effective local detection period = 300 ms x 5 = 1500 ms
Effective remote interval at which BFD packets are sent = MAX { 100 ms, 300 ms } = 300 ms; effective remote receiving interval = MAX { 200 ms, 600 ms } = 600 ms; effective remote detection period = 600 ms x 4 = 2400 ms
- (Optional) Run min-rx-interval rx-interval
The local minimum interval at which BFD packets are received is set.
- (Optional) Run detect-multiplier multiplier
The BFD detection multiplier is set.
- Run commit
The configurations are committed.
Verifying the Configuration of Static BFD for TE
After configuring static BFD for TE, you can view configurations, such as the status of the BFD sessions.
Procedure
- Run the display bfd session mpls-te interface tunnel-name [ verbose ] command to check information about BFD sessions on the ingress.
- Run the following commands to check information about BFD sessions on the egress.
- Run the display bfd session all [ for-ip | for-lsp | for-te ] [ verbose ] command to check information about all BFD sessions.
- Run the display bfd session static [ for-ip | for-lsp | for-te ] [ verbose ] command to check information about static BFD sessions.
- Run the display bfd session peer-ip peer-ip [ vpn-instance vpn-name ] [ verbose ] command to check information about BFD sessions with reverse IP links.
- Run the display bfd session ldp-lsp peer-ip peer-ip [ nexthop nexthop-ip [ interface interface-type interface-number ] ] [ verbose ] command to check information about BFD sessions with reverse LDP LSPs.
- Run the display bfd session mpls-te interface tunnel-name te-lsp [ verbose ] command to check information about BFD sessions with reverse CR-LSPs.
- Run the display bfd session mpls-te interface tunnel-name [ verbose ] command to check information about BFD sessions with reverse TE tunnels.
- Run the following commands to check BFD statistics.
- Run the display bfd statistics session all [ for-ip | for-lsp | for-te ] command to check statistics about all BFD sessions.
- Run the display bfd statistics session static [ for-ip | for-lsp | for-te ] command to check statistics about static BFD sessions.
- Run the display bfd statistics session peer-ip peer-ip [ vpn-instance vpn-name ] command to check statistics about BFD sessions with reverse IP links.
- Run the display bfd statistics session ldp-lsp peer-ip peer-ip [ nexthop nexthop-ip [ interface interface-type interface-number ] ] command to check statistics about BFD sessions with reverse LDP LSPs.
- Run the display bfd statistics session mpls-te interface interface-type interface-number te-lsp command to check statistics about BFD sessions with reverse CR-LSPs.
Configuring MPLS TE Manual FRR
MPLS TE manual FRR is a local protection mechanism that protects traffic on a link or a node on a CR-LSP.
Usage Scenario
FRR provides rapid local protection for MPLS TE networks requiring high reliability. If a local failure occurs, FRR rapidly switches traffic to a bypass tunnel, minimizing the impact on traffic.
A backbone network has a large capacity and its reliability requirements are high. If a link or node failure occurs on the backbone network, a mechanism is required to provide automatic protection and rapidly remove the fault. The Resource Reservation Protocol (RSVP) usually establishes MPLS TE LSPs in Downstream on Demand (DoD) mode. If a failure occurs, Constraint Shortest Path First (CSPF) can re-calculate a reachable path only after the ingress is notified of the failure. The failure may trigger reestablishment of multiple LSPs and the reestablishment fails if bandwidth is insufficient. Either the CSPF failure or bandwidth insufficiency delays the recovery of the MPLS TE network.
Configuring TE FRR on MPLS TE-enabled interfaces allows traffic to automatically switch to a protection link if a link or node on a primary tunnel fails. After the primary tunnel recovers or is reestablished, traffic switches back to the primary tunnel. This process meets the reliability requirements of the MPLS TE network.
FRR requires reserved bandwidth for a bypass tunnel that needs to be pre-established. If available bandwidth is insufficient, FRR protects only important nodes or links along a tunnel.
RSVP-TE tunnels using bandwidth reserved in Shared Explicit (SE) style support FRR, but static TE tunnels do not.
Pre-configuration Tasks
Before configuring MPLS TE manual FRR, complete the following tasks:
Establish a primary RSVP-TE tunnel.
Enable MPLS TE and RSVP-TE in the MPLS and physical interface views on every node along a bypass tunnel. (See Enabling MPLS TE and RSVP-TE.)
(Optional) Configure TE attributes for the links of the bypass tunnel. (See (Optional) Configuring TE Attributes for a Link.)
Enable CSPF on a Point of Local Repair (PLR).
(Optional) Configure an explicit path for the bypass tunnel.
Enabling TE FRR
TE FRR must be enabled on the ingress of a primary LSP before TE FRR is manually configured.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The view of the primary tunnel interface is displayed.
- Run mpls te fast-reroute [ bandwidth ]
TE FRR is enabled.
After TE FRR is enabled using the mpls te fast-reroute command, run the mpls te bypass-attributes command to set bypass LSP attributes.
- (Optional) Run mpls te frr-switch degrade
The MPLS TE tunnel is enabled to mask the FRR function.
After TE FRR takes effect, traffic is switched to the bypass LSP when the primary LSP fails. If the bypass LSP is not the optimal path, traffic congestion easily occurs. To prevent traffic congestion, you can configure LDP to protect TE tunnels. To have the LDP protection function take effect, you need to run the mpls te frr-switch degrade command to enable the MPLS TE tunnel to mask the FRR function. After the command is run:
If the primary LSP is in the FRR-in-use state (that is, traffic has been switched to the bypass LSP), traffic cannot be switched to the primary LSP.
If HSB is configured for the tunnel and an HSB LSP is available, traffic is switched to the HSB LSP.
If no HSB LSP is available for the tunnel, the tunnel is unavailable, and traffic is switched to another tunnel like an LDP tunnel.
If no tunnels are available, traffic is interrupted.
- Run commit
The configuration is committed.
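For example, the following sketch enables TE FRR with bandwidth protection on the tunnel interface of the primary LSP. The tunnel interface name Tunnel1 is an example, and the prompts are illustrative.
<HUAWEI> system-view
[~HUAWEI] interface Tunnel1
[~HUAWEI-Tunnel1] mpls te fast-reroute bandwidth
[*HUAWEI-Tunnel1] commit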
Configuring a Bypass Tunnel
A path and attributes must be configured for a bypass tunnel after TE manual FRR is enabled on a PLR.
Context
Bypass tunnels are established on selected links or nodes that are not on the protected primary tunnel. If a link or node on the protected primary tunnel is used for a bypass tunnel and fails, the bypass tunnel also fails to protect the primary tunnel.
- TE FRR does not take effect if multiple nodes or links fail simultaneously. After FRR switching switches data from the primary tunnel to a bypass tunnel, the bypass tunnel must remain Up while forwarding data. If the bypass tunnel goes Down, the protected traffic is interrupted, and FRR fails. Even if the bypass tunnel goes Up again, traffic cannot flow through it; traffic is forwarded over the primary tunnel only after the primary tunnel recovers or is reestablished.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The view of the bypass tunnel interface is displayed.
- Run tunnel-protocol mpls te
MPLS TE is configured.
- Run destination ip-address
The LSR ID of an MP is configured as the destination address of the bypass tunnel.
- Run mpls te tunnel-id tunnel-id
The bypass tunnel ID is configured.
- (Optional) Run mpls te path explicit-path path-name [ secondary ]
An explicit path is configured for the bypass tunnel.
Physical links of a bypass tunnel cannot overlap protected physical links of the primary tunnel.
- (Optional) Run mpls te bandwidth ct0 bandwidth
Set the bandwidth for the bypass tunnel.
- Run mpls te bypass-tunnel
A bypass tunnel is configured.
- Run mpls te protected-interface interface-type interface-number
The interface to be protected by the bypass tunnel is specified.
- Run commit
The configuration is committed.
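The following minimal sketch configures a bypass tunnel on the PLR according to the preceding steps. The tunnel interface name Tunnel2, the MP LSR ID 3.3.3.3, the tunnel ID 200, the explicit path name to-mp, the CT0 bandwidth value 20000 (assumed to be in kbit/s), and the protected interface GigabitEthernet0/1/0 are all examples, and the prompts are illustrative.
<HUAWEI> system-view
[~HUAWEI] interface Tunnel2
[*HUAWEI-Tunnel2] tunnel-protocol mpls te
[*HUAWEI-Tunnel2] destination 3.3.3.3
[*HUAWEI-Tunnel2] mpls te tunnel-id 200
[*HUAWEI-Tunnel2] mpls te path explicit-path to-mp
[*HUAWEI-Tunnel2] mpls te bandwidth ct0 20000
[*HUAWEI-Tunnel2] mpls te bypass-tunnel
[*HUAWEI-Tunnel2] mpls te protected-interface GigabitEthernet0/1/0
[*HUAWEI-Tunnel2] commit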
Follow-up Procedure
Routes and labels are automatically recorded after a bypass tunnel is configured.
When a tunnel is specified to protect a physical interface, the corresponding LSP becomes the bypass LSP. A bypass tunnel is established over a configured explicit path on the PLR.
If a primary tunnel fails, traffic switches to a bypass tunnel. If the bypass tunnel goes Down, the protected traffic is interrupted, and FRR fails. Even though the bypass tunnel goes Up, traffic cannot be forwarded. Traffic will be forwarded only after the primary tunnel has been restored or re-established.
The mpls te fast-reroute command and the mpls te bypass-tunnel command cannot be configured on the same tunnel interface.
After FRR switches traffic from a primary tunnel to a bypass tunnel, the bypass tunnel must be kept Up, and its path must remain unchanged when transmitting traffic. If the bypass tunnel goes Down, the protected traffic is interrupted, and FRR fails.
(Optional) Setting the FRR Switching Delay Time
After the FRR switching delay time is set, FRR entry delivery is delayed, preventing traffic from being switched twice when both HSB and FRR are enabled.
(Optional) Enabling the Coexistence of Rapid FRR Switching and MPLS TE HSB
When FRR and HSB are enabled for MPLS TE tunnels, enabling the coexistence of MPLS TE HSB and rapid FRR switching improves switching performance.
Context
Before enabling the coexistence of rapid FRR switching and MPLS TE HSB, TE FRR must be deployed on the entire network, HSB must be deployed on the ingress, BFD for TE LSP must be enabled, and the delayed down function must be enabled on the outbound interface of the P node. Otherwise, rapid switching cannot be performed in the case of double points of failure.
Verifying the MPLS TE Manual FRR Configuration
After configuring MPLS TE manual FRR, you can view detailed information about the bypass tunnel.
Procedure
- Run the display mpls lsp command to check information about the primary tunnel.
- Run the display mpls te tunnel-interface command to check information about the tunnel interface on the ingress of a primary or bypass tunnel.
- Run the display mpls te tunnel path command to check information about paths of a primary or bypass tunnel.
Configuring MPLS TE Auto FRR
MPLS TE Auto FRR is a local protection mechanism that protects traffic on a link or a node on a CR-LSP.
Usage Scenario
On a network that requires high reliability, FRR is configured to improve network reliability. If the network topology is complex and a great number of links must be configured, the configuration procedure is complex.
Auto FRR automatically establishes an eligible bypass tunnel, which simplifies configurations.
MPLS TE Auto FRR, similar to MPLS TE manual FRR, can be performed in the RSVP GR process. For details about MPLS TE manual FRR, see Configuring MPLS TE Manual FRR.
Only a primary CR-LSP supports MPLS TE Auto FRR.
SRLG
In MPLS TE Auto FRR, if the shared risk link group (SRLG) attribute is configured, the primary and bypass tunnels must be in different SRLGs. If they are in the same SRLG, the bypass tunnel cannot be established.
- Bandwidth protection takes precedence over non-bandwidth protection.
- Node protection takes precedence over link protection.
- Manual protection takes precedence over auto protection.
Pre-configuration Tasks
Before configuring MPLS TE Auto FRR, complete the following tasks:
Set up a primary RSVP-TE tunnel.
Enable MPLS, MPLS TE, and RSVP-TE in the system and interface views on every node along a bypass tunnel. (See Enabling MPLS TE and RSVP-TE.)
(Optional) Configure the physical bandwidth for a bypass tunnel if the primary tunnel bandwidth needs to be protected. (See (Optional) Configuring TE Attributes.)
Enable CSPF on the ingress and transit nodes along a primary tunnel.
Enabling TE Auto FRR
MPLS TE Auto FRR must be enabled on the ingress or a transit node of a primary tunnel before MPLS TE Auto FRR is configured.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls te auto-frr [ self-adapting ]
MPLS TE Auto FRR is enabled globally.
To enable an automatic bypass tunnel to dynamically select node or link protection based on network conditions, configure self-adapting.
- Run quit
The system view is displayed again.
- Run interface interface-type interface-number
The view of the outbound interface of the primary tunnel is displayed.
- (Optional) Run mpls te auto-frr { block | default | link | node | self-adapting }
TE Auto FRR is enabled on the interface.
The mpls te auto-frr default command is run for all MPLS TE-enabled interfaces after MPLS TE Auto FRR is enabled globally. To disable TE Auto FRR on interfaces, run the mpls te auto-frr block command on these interfaces. After the mpls te auto-frr block command is run on an interface, the interface is incapable of TE Auto FRR, regardless of whether TE Auto FRR is enabled or re-enabled globally.
To enable an automatic bypass tunnel bound to the current tunnel to dynamically select node or link protection based on network conditions, configure self-adapting.
If the mpls te auto-frr default command is configured in the interface view, the Auto FRR capability on the interface is consistent with the global Auto FRR capability.
After node protection is enabled, if an automatic bypass tunnel cannot be established because no available links exist, the penultimate hop (not other hops) on the primary tunnel attempts to establish an automatic bypass tunnel to implement link protection.
If the mpls te auto-frr node command without self-adapting configured is run, and the requirement for node protection is not met, the penultimate hop (but not other hops) on the primary tunnel attempts to set up an automatic bypass tunnel for link protection.
- Run commit
The configuration is committed.
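For example, the following sketch enables Auto FRR globally and requests node protection on the outbound interface of the primary tunnel. The interface name GigabitEthernet0/1/0 is an example, and the prompts are illustrative.
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te auto-frr
[*HUAWEI-mpls] quit
[*HUAWEI] interface GigabitEthernet0/1/0
[*HUAWEI-GigabitEthernet0/1/0] mpls te auto-frr node
[*HUAWEI-GigabitEthernet0/1/0] commit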
Enabling MPLS TE FRR and Configuring Attributes for an Automatic Bypass LSP
After MPLS TE FRR is enabled on the ingress of a primary LSP, a bypass LSP is established automatically.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The tunnel interface view of the primary LSP is displayed.
- Run mpls te fast-reroute [ bandwidth ]
TE FRR is enabled.
If TE FRR bandwidth protection is needed, configure the bandwidth parameter in this command.
- (Optional) Run mpls te frr-switch degrade
The MPLS TE tunnel is enabled to mask the FRR function.
After TE FRR takes effect, traffic is switched to the bypass LSP when the primary LSP fails. If the bypass LSP is not the optimal path, traffic congestion easily occurs. To prevent traffic congestion, you can configure LDP to protect TE tunnels. To have the LDP protection function take effect, you need to run the mpls te frr-switch degrade command to enable the MPLS TE tunnel to mask the FRR function. After the command is run:
If the primary LSP is in the FRR-in-use state (that is, traffic has been switched to the bypass LSP), traffic cannot be switched to the primary LSP.
If HSB is configured for the tunnel and an HSB LSP is available, traffic is switched to the HSB LSP.
If no HSB LSP is available for the tunnel, the tunnel is unavailable, and traffic is switched to another tunnel like an LDP tunnel.
If no tunnels are available, traffic is interrupted.
- (Optional) Run mpls te bypass-attributes [ bandwidth bandwidth | priority setup-priority [ hold-priority ] ]
Attributes are set for the automatic bypass LSP.
The bandwidth attribute can only be set for the bypass LSP after the mpls te fast-reroute bandwidth command is run for the primary LSP.
The bypass LSP bandwidth cannot exceed the primary LSP bandwidth.
If no attributes are configured for an automatic bypass LSP, by default, the automatic bypass LSP uses the same bandwidth as that of the primary LSP.
The setup priority of a bypass LSP must be lower than or equal to the holding priority. These priorities cannot be higher than the corresponding priorities of the primary LSP.
If TE FRR is disabled, the bypass LSP attributes are automatically deleted.
- Run quit
Return to the system view.
- Run interface interface-type interface-number
The interface view of the link through which the primary LSP passes is displayed.
- (Optional) Run mpls te auto-frr attributes { bandwidth bandwidth | priority setup-priority [ hold-priority ] | hop-limit hop-limit-value }
Attributes are configured for the bypass LSP.
- Run quit
Return to the system view.
- (Optional) Configure affinities for the automatic bypass tunnel.
Affinities determine link attributes of an automatic bypass LSP. Affinities and a link administrative group attribute are used together to determine over which links the automatic bypass LSP can be established.
Perform either of the following configurations:
Set a hexadecimal number.
Run interface interface-type interface-number
The interface view of the link through which the bypass LSP passes is displayed.
Run mpls te link administrative group value
An administrative group attribute is specified.
(Optional) Run mpls te auto-frr attributes affinity property properties [ mask mask-value ] or mpls te auto-frr attributes affinity { include-all | include-any | exclude } bit-name &<1-32>
An affinity for a bypass LSP is configured.
Run quit
Return to the system view.
Run interface tunnel tunnel-number
The view of the primary tunnel interface is displayed.
Run mpls te bypass-attributes affinity property properties [ mask mask-value]
An affinity of the bypass LSP is configured.
Set an affinity name.
Naming an affinity makes the affinity easy to understand and maintain. Setting an affinity name is recommended.
Run path-constraint affinity-mapping
An affinity name mapping template is configured, and the template view is displayed.
Repeat this step on each node used to calculate the path over which an automatic bypass LSP is established. The affinity name configured on each node must match the mappings between affinity bits and names.
Run attribute affinity-name bit-sequence bit-number
A mapping between an affinity bit and name is configured.
There are 128 affinity bits in total. This step configures one affinity bit. You can repeat this step to configure some or all affinity bits.
Run quit
Return to the system view.
Run interface interface-type interface-number
The interface view of the link through which the bypass LSP passes is displayed.
Run mpls te link administrative group name bit-name &<1-32>
An administrative group attribute is specified.
Run quit
Return to the system view.
Run interface tunnel tunnel-number
The tunnel interface view of the primary LSP is displayed.
Run mpls te bypass-attributes affinity { include-all | include-any | exclude } bit-name &<1-32>
An affinity of the bypass LSP is configured.
If an automatic bypass LSP that satisfies the specified affinity cannot be established, a node will bind a manual bypass LSP satisfying the specified affinity to the primary LSP.
- Run commit
The configuration is committed.
(Optional) Configuring Auto Bypass Tunnel Re-Optimization
Auto bypass tunnel re-optimization allows paths to be recalculated at certain intervals for an auto bypass tunnel. If an optimal path is recalculated, a new auto bypass tunnel will be set up over this optimal path. In this manner, network resources are optimized.
Context
Network changes often cause the changes in optimal paths. Auto bypass tunnel re-optimization allows the system to re-optimize an auto bypass tunnel if an optimal path to the same destination is found due to some reasons, such as the changes in the cost. In this manner, network resources are optimized.
This configuration task is invalid for LSPs in the FRR-in-use state.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls te auto-frr reoptimization [ frequency interval ]
Auto bypass tunnel re-optimization is enabled.
- (Optional) Run return
Return to the user view.
- (Optional) Run mpls te reoptimization [ auto-tunnel name tunnel-interface | tunnel tunnel-number ]
Manual re-optimization is enabled.
After you configure the automatic re-optimization in the MPLS view, you can return to the user view and run the mpls te reoptimization command to immediately re-optimize the tunnels on which the automatic re-optimization is enabled. After you perform the manual re-optimization, the timer of the automatic re-optimization is reset and counts again.
- Run commit
The configurations are committed.
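For example, the following sketch enables automatic re-optimization of auto bypass tunnels at an interval of 3600 (assumed to be in seconds) and then triggers an immediate manual re-optimization from the user view. The interval value and the prompts are illustrative.
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te auto-frr reoptimization frequency 3600
[*HUAWEI-mpls] commit
[~HUAWEI-mpls] return
<HUAWEI> mpls te reoptimization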
Verifying the MPLS TE Auto FRR Configuration
After configuring MPLS TE Auto FRR, you can view detailed information about the bypass tunnel.
Procedure
- Run the display mpls te tunnel verbose command to check the binding of a primary tunnel and an automatic bypass tunnel.
- Run the display mpls te tunnel-interface command to check detailed information about an automatic bypass tunnel.
- Run the display mpls te tunnel path command to check information about paths of a primary or bypass tunnel.
Configuring MPLS Detour FRR
MPLS detour FRR automatically creates a different backup LSP for the primary LSP on each eligible node to protect downstream links or nodes.
Context
TE FRR provides local link or node protection for TE tunnels and works in either facility backup or one-to-one backup mode. Table 1-937 compares the two modes.
Table 1-937 Comparison between facility backup and one-to-one backup
Facility backup
Advantages: Extensible, resource efficient, and easy to implement.
Disadvantages: Bypass tunnels must be manually planned and configured, which is time-consuming and laborious on a complex network.
One-to-one backup
Advantages: Easy to configure, eliminates manual network planning, and provides flexibility on a complex network.
Disadvantages: Each node has to maintain the status of detour LSPs, which consumes additional bandwidth resources.
TE FRR in one-to-one backup mode is also called MPLS detour FRR. Each eligible node automatically creates a detour LSP.
This section describes how to configure MPLS detour FRR. For information about how to configure TE FRR in facility backup mode, see Configuring MPLS TE Manual FRR and Configuring MPLS TE Auto FRR.
The facility backup and one-to-one backup modes are mutually exclusive on the same TE tunnel interface. If both modes are configured, the latest configured mode overrides the previous one.
After MPLS detour FRR is configured, nodes on a TE tunnel are automatically enabled to record routes and labels. Before you disable the route and label record functions, disable MPLS detour FRR.
Pre-configuration Tasks
Before configuring MPLS detour FRR, configure an RSVP-TE tunnel.
CSPF must be enabled on each node along both the primary and backup RSVP-TE tunnels.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel interface-number
The TE tunnel interface view is displayed.
- Run mpls te detour
MPLS detour FRR is enabled.
If you run the mpls te detour and mpls te fast-reroute commands on the same tunnel interface, the latest configuration overrides the previous one.
- Run commit
The configuration is committed.
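For example, the following sketch enables MPLS detour FRR on a TE tunnel interface. The tunnel interface name Tunnel1 is an example, and the prompts are illustrative.
<HUAWEI> system-view
[~HUAWEI] interface Tunnel1
[~HUAWEI-Tunnel1] mpls te detour
[*HUAWEI-Tunnel1] commit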
Verifying the Configuration
After configuring MPLS detour FRR, run the following commands to check the configurations.
Run the display mpls te tunnel command to view TE tunnel information.
Run the display mpls te tunnel path command to view TE tunnel path information on a local node.
Run the display mpls rsvp-te psb-content command to view RSVP-TE path state block (PSB) information.
Disabling MPLS Detour FRR
If MPLS detour FRR is disabled on a transit node or egress node, the ingress node excludes the node when calculating a detour LSP and does not occupy forwarding resources of the node.
Context
After MPLS detour FRR is enabled on a tunnel, the ingress node calculates a detour LSP to protect the tunnel if the tunnel fails. Some transit nodes or the egress node may not support MPLS detour FRR, but they can still function as protection nodes along a detour LSP.
To disable MPLS detour FRR, run the mpls rsvp-te detour disable command in the MPLS view. After the mpls rsvp-te detour disable command is run, detour LSPs that are not in the FRR-in-use state are deleted.
Configuring a Tunnel Protection Group
This section describes how to configure a tunnel protection group. A protection tunnel can be bound to a working tunnel to form a tunnel protection group. If the working tunnel fails, traffic switches to the protection tunnel. The tunnel protection group helps improve tunnel reliability.
Usage Scenario
A tunnel protection group provides end-to-end protection for traffic transmitted along a TE tunnel. If a working tunnel fails, bidirectional automatic protection switching switches traffic to the protection tunnel.
In an MPLS OAM for associated or co-routed LSP scenario where tunnel APS is configured, if the primary and backup tunnels use the same path and the path fails, both the tunnels are affected, and services may be interrupted.
A protected tunnel is called a working tunnel. A tunnel that protects the working tunnel is called a protection tunnel. The working and protection tunnels form a tunnel protection group. A tunnel protection group works in 1:1 mode. In 1:1 mode, one protection tunnel protects only one working tunnel.
Working and protection tunnels
Tunnel-specific attributes in a tunnel protection group are independent from each other. For example, a protection tunnel with bandwidth 50 Mbit/s can protect a working tunnel with 100 Mbit/s bandwidth.
TE FRR can be enabled to protect the working tunnel.
A protection tunnel cannot be protected by other tunnels or have TE FRR enabled.
Protection switching mechanism
The NetEngine 8000 F performs protection switching based on the following rules.
Table 1-938 Switching rules (switching requests are listed in descending order of priority)
Clear (highest priority): Clears all switching requests initiated manually, including forcible and manual switch requests. A signal failure will trigger traffic switching.
Lockout of protection: Prevents traffic from switching to a protection tunnel even if a working tunnel fails.
Signal Fail for Protection: Switches traffic from a protection tunnel to a working tunnel if the protection tunnel fails.
Forcible switch: Forcibly switches traffic from a working tunnel to a protection tunnel, regardless of whether the protection tunnel functions properly (unless a higher-priority switch request takes effect).
Signal Fail for Working: Switches traffic from a working tunnel to a protection tunnel if the working tunnel fails.
Manual switch: Switches traffic from a working tunnel to a protection tunnel only when the protection tunnel functions properly, or switches traffic from the protection tunnel to the working tunnel only when the working tunnel functions properly.
Wait to restore: Switches traffic from a protection tunnel to a working tunnel after the working tunnel recovers, which happens after the wait-to-restore (WTR) timer elapses.
No request (lowest priority): There is no switching request.
Pre-configuration Tasks
Before configuring an MPLS TE tunnel protection group, create an MPLS TE working tunnel and a protection tunnel.
A tunnel protection group uses a configured protection tunnel to protect a working tunnel, which improves tunnel reliability. Configuring working and protection tunnels over separate links is recommended.
The working and protection tunnels must be bidirectional. The following types of bidirectional tunnels are supported:
Static bidirectional associated LSPs
Dynamic bidirectional associated LSPs
Static bidirectional co-routed LSPs
Creating a Tunnel Protection Group
A configured protection tunnel can be bound to a working tunnel to form a tunnel protection group. If the working tunnel fails, traffic switches to the protection tunnel, which improves tunnel reliability.
Context
A tunnel protection group can be configured on the ingress to protect a working tunnel. The switchback delay time and a switchback mode can also be configured. If the revertive mode is used, the wait to restore (WTR) time can be set.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel interface-number
The tunnel interface view is displayed.
- Run mpls te protection tunnel tunnel-id [ [ holdoff holdoff-time ] | [ mode { non-revertive | revertive [ wtr wtr-time ] } ] ] *
The working tunnel is added to a protection group.
The following parameters can be configured in this step:
tunnel-id specifies the ID of a protection tunnel.
holdoff-time specifies the period between the time when a signal failure occurs and the time when the protection switching algorithm is initiated upon notification of the signal fault. holdoff-time specifies a multiplier of 100 milliseconds.
Hold-off time = 100 milliseconds x holdoff-time
In non-revertive mode, traffic does not switch back to a working tunnel even after the working tunnel recovers.
In revertive mode, traffic switches back to a working tunnel after the working tunnel recovers.
The WTR time is the time that elapses before the traffic switchback is performed. The wtr-time parameter specifies a multiplier of 30 seconds.
WTR time = 30 seconds x wtr-time
- Run commit
The configuration is committed.
- Configure a detection mechanism.
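The following minimal sketch binds protection tunnel 200 to the working tunnel Tunnel1 in revertive mode, with a hold-off time of 500 ms (holdoff 5) and a WTR time of 300 seconds (wtr 10). The tunnel interface name, protection tunnel ID, and timer values are examples, and the prompts are illustrative.
<HUAWEI> system-view
[~HUAWEI] interface Tunnel1
[~HUAWEI-Tunnel1] mpls te protection tunnel 200 holdoff 5 mode revertive wtr 10
[*HUAWEI-Tunnel1] commit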
Follow-up Procedure
You can also perform the preceding steps to modify a protection group.
An MPLS TE tunnel protection group must be detected by MPLS OAM or MPLS-TP OAM to rapidly trigger protection switching if a fault occurs.
- If both the working and protection tunnels are static bidirectional associated LSPs, configure MPLS OAM for bidirectional associated LSPs.
- If both the working and protection tunnels are static bidirectional co-routed LSPs, configure MPLS OAM for bidirectional co-routed LSPs.
- If OAM is deleted before APS is deleted, APS incorrectly considers that OAM has detected a link fault, affecting protection switching. To resolve this issue, re-configure OAM.
After an MPLS TE tunnel protection group is created, if MPLS-TP OAM is used to detect faults and both the working and protection tunnels are static bidirectional co-routed CR-LSPs, configure MPLS-TP OAM for bidirectional co-routed LSPs.
(Optional) Configuring the Protection Switching Trigger Mechanism
This section describes how to configure the protection switching trigger mechanism for a tunnel protection group to forcibly switch traffic to the working or protection tunnel. Alternatively, you can perform a traffic switchover manually.
Context
Read switching rules before configuring the protection switching trigger mechanism.
Perform the following steps on the ingress of the tunnel protection group as needed:
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel interface-number
The tunnel interface view is displayed.
- Select one of the following protection switching trigger methods as required:
To forcibly switch traffic from the working tunnel to the protection tunnel, run mpls te protect-switch force
To prevent traffic on the working tunnel from switching to the protection tunnel, run mpls te protect-switch lock
To switch traffic to the protection tunnel, run mpls te protect-switch manual
- To cancel the configuration of the protection switching trigger mechanism, run mpls te protect-switch clear
The preceding commands can take effect immediately after being run, without the commit command executed.
Verifying the Tunnel Protection Group Configuration
After configuring a tunnel protection group, run display commands to view information about the tunnel protection group and the binding between the working and protection tunnels.
Procedure
- Run the display mpls te protection tunnel { all | tunnel-id | interface tunnel-interface-name } [ verbose ] command to check information about a tunnel protection group.
- Run the display mpls te protection binding protect-tunnel { tunnel-id | interface tunnel-interface-name } command to check the binding between the working and protection tunnels.
Configuring an MPLS TE Associated Tunnel Group
If the bandwidth of a service tunnel is insufficient, you can create an associated tunnel group and specify both an original tunnel and its split tunnels to carry services.
Context
The bandwidth of an MPLS TE tunnel is limited, but the bandwidth of services carried by the tunnel cannot be limited. As a result, the tunnel bandwidth may become insufficient in some service scenarios, for example, when routes or VPN services recurse to the tunnel. To address this issue, you can create an associated tunnel group, specify the current tunnel as the original tunnel of the group, and specify split tunnels for the original tunnel. The split tunnels can carry services together with the original tunnel, relieving bandwidth pressure.
Configuring Bandwidth Information Flooding for MPLS TE
If the link bandwidth changes slightly, the threshold for flooding bandwidth information is set on the ingress or a transit node of a CR-LSP, which reduces flooding attempts and saves network resources.
Usage Scenario
To synchronize data between TEDBs in an IGP area, OSPF TE or IS-IS TE is configured to update TEDB information and flood bandwidth information if the remaining bandwidth changes on an MPLS interface.
- Configure flooding commands to enable immediate bandwidth information flooding on a device.
- Configure a flooding interval to enable periodic bandwidth information flooding on a device.
- Configure a flooding threshold to prevent frequent flooding.
- When the percentage of the bandwidth reserved for the MPLS TE tunnel on a link to the remaining link bandwidth in the TEDB is greater than or equal to the configured threshold (flooding threshold), OSPF TE and IS-IS TE flood link bandwidth information to all devices in this area and update TEDB information.
- When the percentage of the bandwidth released by the MPLS TE tunnel to the remaining link bandwidth in the TEDB is greater than or equal to the configured threshold, OSPF TE and IS-IS TE flood link bandwidth information to all devices in this area and update TEDB information.
Configuring the Limit Rate of MPLS TE Traffic
This section describes how to configure the limit rate of MPLS TE traffic to limit TE tunnel traffic within the bandwidth range that is actually configured.
Usage Scenario
For a physical link of a TE tunnel, besides traffic on the TE tunnel, the physical link may carry MPLS traffic of other TE tunnels, MPLS traffic of other non-CR-LSPs, or even IP traffic simultaneously. To limit the TE tunnel traffic within a bandwidth range that is actually configured, set a limit rate for TE tunnel traffic.
After the rate limit is configured, TE traffic is limited to the bandwidth range that is actually configured; TE traffic exceeding the configured bandwidth is dropped.
Before you configure rate limiting for MPLS TE traffic, run the mpls te bandwidth command on the corresponding tunnel interface. If this command is not run, rate limiting is not performed for MPLS TE traffic.
Pre-configuration Tasks
Before configuring rate limiting for MPLS TE traffic, complete the following tasks:
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The tunnel interface view is displayed.
- Run mpls te bandwidth ctType ctValue
The bandwidth constraint of the MPLS TE tunnel is configured.
- Run mpls te lsp-tp outbound
Traffic policing for the MPLS TE tunnel is enabled.
- Run commit
The configuration is committed.
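For example, the following sketch limits the traffic of a TE tunnel to its configured CT0 bandwidth. The tunnel interface name Tunnel1 and the bandwidth value 50000 (assumed to be in kbit/s) are examples, and the prompts are illustrative.
<HUAWEI> system-view
[~HUAWEI] interface Tunnel1
[~HUAWEI-Tunnel1] mpls te bandwidth ct0 50000
[*HUAWEI-Tunnel1] mpls te lsp-tp outbound
[*HUAWEI-Tunnel1] commit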
Configuring Tunnel Re-optimization
Tunnel re-optimization is configured to allow a device to dynamically adjust paths for tunnels.
Usage Scenario
Topology and link attributes of an IP/MPLS network are changeable. As a result, a path over which an MPLS TE tunnel has been established may not be optimal. Tunnel re-optimization can be configured to allow the MPLS TE tunnel to be reestablished over an optimal path.
Periodic re-optimization: The system attempts to reestablish tunnels over better paths (if there are) at a specified interval configured using the mpls te auto reoptimization command. This implementation prevents manual intervention and reduces configuration workload.
Manual re-optimization: Manual re-optimization can be configured for the TE tunnels if you want to immediately re-optimize the TE tunnels.
Tunnel re-optimization is performed based on tunnel path constraints. During path re-optimization, path constraints, such as explicit path constraints and bandwidth constraints, are also considered.
Tunnel re-optimization cannot be used on tunnels for which a system selects paths in most-fill tie-breaking mode.
Procedure
- (Optional) Enable IGP metric-based re-optimization for an MPLS TE tunnel.
Perform this step if you want to re-optimize an MPLS TE tunnel based only on the IGP metric. The following constraints are ignored during re-optimization:
- Bandwidth usage: A link is selected based on the percentage of the used reservable bandwidth to the maximum reservable bandwidth.
- Hop count: A link is selected based on the number of hops on the path.
- Configure automatic tunnel re-optimization.
- Configure manual tunnel re-optimization.
In the user view, run mpls te reoptimization [ auto-tunnel name tunnel-interface | tunnel tunnel-number ]
Manual re-optimization is configured.
Manual re-optimization can be enabled on a specific or all tunnels on a node.
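For example, assuming a tunnel numbered 10 (the tunnel number and device name are illustrative), manual re-optimization can be triggered in the user view as follows:
<HUAWEI> mpls te reoptimization tunnel 10
To re-optimize all tunnels on the node, run the command without parameters:
<HUAWEI> mpls te reoptimization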
Configuring Isolated LSP Computation
To improve label switched path (LSP) reliability on a network that has the constraint-based routed label switched path (CR-LSP) hot standby feature, you can configure the isolated LSP computation feature so that the device uses both the disjoint algorithm and the constrained shortest path first (CSPF) algorithm to compute isolated primary and hot-standby LSPs.
Context
Most IP radio access networks (IP RANs) that use Multiprotocol Label Switching (MPLS) TE have high reliability requirements for LSPs. However, the CSPF algorithm computes paths based only on the principle of minimizing the link cost and cannot automatically compute completely separate primary and backup LSP paths.
Specifying explicit paths can meet this reliability requirement; this method, however, does not adapt to topology changes. Each time a node is added to or deleted from the IP RAN, carriers must modify the explicit paths, which is time-consuming and laborious.
To resolve these problems, you can configure isolated LSP computation. After this feature is enabled, the disjoint and CSPF algorithms work together to compute primary and hot-standby LSPs at the same time and cut off crossover paths of the two LSPs. Then, the device gets the isolated primary and hot-standby LSPs.
Isolated LSP computation is a best-effort technique. If the disjoint and CSPF algorithms cannot get isolated primary and hot-standby LSPs or two isolated LSPs do not exist, the device uses the primary and hot-standby LSPs computed by CSPF.
After you enable the disjoint algorithm, the shared risk link group (SRLG), if configured, becomes ineffective.
Pre-configuration Tasks
Before configuring isolated LSP computation, complete the following tasks:
Configure CR-LSP backup and establish a hot-standby CR-LSP.
CSPF must be enabled on the ingress of the RSVP-TE tunnel.
Isolated LSP computation works with the CR-LSP hot standby feature and requires the hot-standby LSP to have the same reserved bandwidth as the primary LSP.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel interface-number
The TE tunnel interface view is displayed.
- Run mpls te cspf disjoint
The disjoint algorithm is enabled.
- Run commit
The configuration is committed.
Checking the Configurations
Run the display mpls te cspf destination ip-address computation-mode disjoint command to check the computed primary and hot-standby LSPs after the disjoint algorithm is enabled.
Run the display mpls te tunnel path tunnel-name command to check information about the actual primary and hot-standby LSPs.
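A minimal configuration sketch follows, assuming an existing hot-standby-enabled tunnel interface Tunnel10 (the device name and tunnel number are illustrative):
<HUAWEI> system-view
[~HUAWEI] interface Tunnel10
[~HUAWEI-Tunnel10] mpls te cspf disjoint
[*HUAWEI-Tunnel10] commit
After the configuration is committed, the display commands listed above can be used to compare the computed and actual primary and hot-standby LSPs.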
Configuring Automatic Tunnel Bandwidth Adjustment
This section describes how to configure the automatic adjustment of the tunnel bandwidth.
Usage Scenario
Automatic bandwidth adjustment is enabled to adjust the bandwidth of the tunnel automatically.
The system periodically collects traffic rates of outbound interfaces on the tunnel and calculates the average bandwidth of the tunnel within a specified period of time. A new LSP is then established using the maximum sampled average bandwidth as its bandwidth constraint. After the new LSP is established, the old LSP is torn down using the make-before-break feature, and traffic is switched to the new LSP.
The sampling interval is configured in the MPLS view and takes effect on all MPLS TE tunnels. The rate of the outbound interface on an MPLS TE tunnel is recorded at each sampling interval. The actual average bandwidth assigned to the MPLS TE tunnel in a sampling interval can be obtained.
After automatic bandwidth adjustment is enabled, the mpls te timer auto-bandwidth command configures periodic sampling, which obtains the average bandwidth of the MPLS TE tunnel during each sampling interval. The system then recalculates the average bandwidth based on the samples and uses it to establish a new MPLS TE tunnel. Traffic switches to the new MPLS TE tunnel, and the original MPLS TE tunnel is torn down. If the new MPLS TE tunnel fails to be established, traffic continues to be transmitted along the original MPLS TE tunnel, and the bandwidth is adjusted again after the next sampling interval expires.
Pre-configuration Tasks
Before configuring the bandwidth automatic adjustment, configure an RSVP-TE tunnel.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls te timer auto-bandwidth [ interval ]
The sampling interval is specified. The actual sampling interval is the larger of the values configured using the mpls te timer auto-bandwidth and set flow-stat interval commands.
- Run quit
Return to the system view.
- Run interface tunnel interface-number
The tunnel interface view of the MPLS TE tunnel is displayed.
- Run statistic enable
MPLS TE tunnel statistics can be collected.
- To configure automatic bandwidth adjustment, run one of the following commands.
Run mpls te auto-bandwidth adjustment { [ threshold percent [ [ or ] absolute-bw absolute-bw ] ] | frequency interval | [ max-bw max-bandwidth | min-bw min-bandwidth ] * | [ overflow-limit overflow-limit-value ] | [ underflow-limit underflow-limit-value ] } *
The automatic bandwidth adjustment is enabled. The frequency and allowable bandwidth range for adjustment are configured.
The following policies can be configured to control automatic bandwidth adjustment:
frequency interval sets the interval for bandwidth adjustment. After the mpls te auto-bandwidth command is run, a device must sample bandwidth values at least twice within the configured adjustment interval. If the device obtains fewer than two samples within the interval, automatic bandwidth adjustment is not performed, and the existing samples are counted toward the next bandwidth adjustment interval.
You can use the threshold percent [ [ or ] absolute-bw absolute-bw ] parameter to determine whether or not to adjust bandwidth.
The system compares the average bandwidth within a sampling period with the actual bandwidth, and the bandwidth will be automatically adjusted if the ratio of bandwidth change to the actual bandwidth is greater than the threshold value. If the absolute threshold is set, bandwidth can be automatically adjusted only after the bandwidth change also exceeds the absolute threshold.
Therefore, if the network traffic changes frequently but frequent bandwidth adjustment is not needed, you can set a greater threshold value.
If the or parameter is configured, the absolute bandwidth threshold and the percentage threshold are combined using a logical OR: automatic bandwidth adjustment is triggered once either threshold is reached. If or is not configured, a logical AND applies by default, and automatic bandwidth adjustment is triggered only after both the absolute bandwidth threshold and the percentage threshold are reached.
- Automatic bandwidth adjustment is performed if conditions are met in either of the following situations depending on whether overflow-limit overflow-limit-value and underflow-limit underflow-limit-value parameters are configured:
- The two parameters are not configured, and the configured interval time expires. The average bandwidth exceeds the upper bandwidth adjustment threshold or falls below the lower bandwidth adjustment threshold.
- The two parameters are configured. The average bandwidth exceeds the upper bandwidth adjustment threshold more times than overflow-limit-value or falls below the lower bandwidth adjustment threshold more times than underflow-limit-value.
Run mpls te auto-bandwidth collect-bw { [ frequency interval ] | [ max-bw max-bandwidth | min-bw min-bandwidth ] * } *
The frequency and allowable bandwidth range for collection are configured.
- Run commit
The configuration is committed.
Checking the Configurations
After completing the configuration, run the display mpls te tunnel-interface command on the ingress of the tunnel to view the following tunnel configuration information.
- Automatically adjusted bandwidth (Auto BW)
- Automatically adjusted frequency (Auto BW Freq)
- Minimum bandwidth that can be adjusted (Min BW)
- Maximum bandwidth that can be adjusted (Max BW)
- Current bandwidth that is collected (Current Collected BW)
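The following sketch shows one possible combination of the preceding steps. The sampling interval, adjustment frequency, threshold, and bandwidth limits are example values only, and Tunnel10 is assumed to be an existing RSVP-TE tunnel interface:
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te timer auto-bandwidth 300
[*HUAWEI-mpls] quit
[*HUAWEI] interface Tunnel10
[*HUAWEI-Tunnel10] statistic enable
[*HUAWEI-Tunnel10] mpls te auto-bandwidth adjustment threshold 10 frequency 1800 max-bw 100000 min-bw 10000
[*HUAWEI-Tunnel10] commit
In this sketch, the tunnel bandwidth is re-evaluated every 1800 seconds and adjusted only when the bandwidth change exceeds 10% of the current bandwidth, within the range of 10000 kbit/s to 100000 kbit/s.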
Disabling Automatic Bandwidth Configuration for Physical Interfaces
You can disable automatic bandwidth configuration for physical interfaces during the network planning.
Usage Scenario
Automatic bandwidth configuration is enabled by default on a physical interface. If this function is disabled and an NMS delivers a TE bandwidth of 0 kbit/s to a physical interface, the device uses the delivered bandwidth rather than the actual interface bandwidth.
Pre-configuration Tasks
To disable automatic bandwidth configuration on a specific physical interface, enable MPLS TE on an existing interface.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls te
MPLS TE is enabled globally.
- Run mpls te physical-interface bandwidth-auto-config disable
Automatic bandwidth configuration on a physical interface is disabled.
- Run commit
The configuration is committed.
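A minimal sketch of this procedure is shown below (the device name is illustrative):
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te
[~HUAWEI-mpls] mpls te physical-interface bandwidth-auto-config disable
[*HUAWEI-mpls] commit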
Disabling the Rerouting Function
The rerouting function enables MPLS TE to re-select another path for LSP reestablishment in the case that the original path fails. This section describes how to disable it.
Usage Scenario
If an LSP path fails, the LSP goes Down. The rerouting function allows MPLS TE to re-select another available path for LSP establishment. However, this may cause traffic congestion on the re-selected path. In this situation, run the mpls te reroute disable command to disable the rerouting function. The path over which the Up LSP is established then functions as a constraint path. Even if the path fails, MPLS TE still uses it for LSP reestablishment. However, manual intervention (for example, manually modifying the LSP configuration or triggering LSP rerouting) will delete the constraint path. After a new LSP goes up, the new LSP path serves as the new constraint path.
Locking the Tunnel Configuration
Tunnel configurations can be locked by the controller when the configurations are sent to a NetEngine 8000 F.
Usage Scenario
In service delivery, a controller delivers tunnel configurations to a NetEngine 8000 F. The NetEngine 8000 F uses the obtained configurations to create a tunnel or modify an existing tunnel. To prevent users from modifying such tunnel configurations, the controller delivers the mpls te lock command together with the tunnel configurations to the NetEngine 8000 F to lock them. Before you modify the configuration of a tunnel, run the undo mpls te lock command on the tunnel interface to unlock the tunnel configuration.
Pre-configuration Tasks
Before locking the tunnel configuration, complete the following tasks:
Configure the controller to deliver tunnel configurations to a NetEngine 8000 F.
Assign management rights to users.
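For example, to modify a locked tunnel, first unlock it on the tunnel interface. The device name and tunnel number below are illustrative:
<HUAWEI> system-view
[~HUAWEI] interface Tunnel10
[~HUAWEI-Tunnel10] undo mpls te lock
[*HUAWEI-Tunnel10] commit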
Configuring P2MP TE Tunnels
A P2MP TE tunnel provides sufficient network bandwidth and high reliability for multicast service transmission on an IP/MPLS backbone network.
Usage Scenario
Traditionally, either of the following solutions is used to carry multicast services:
- IP multicast technology: deployed on a live P2P network with upgraded software. This solution reduces upgrade and maintenance costs. IP multicast, similar to IP unicast, does not support QoS or TE capabilities and has low reliability.
- Dedicated multicast network: deployed using Asynchronous Transfer Mode (ATM) or synchronous optical network (SONET)/synchronous digital hierarchy (SDH) technologies. This solution provides high reliability and transmission rates, but has high construction costs and requires separate maintenance.
P2MP TE addresses these limitations. It provides high reliability as well as QoS and TE capabilities. P2MP TE can be implemented on an IP/MPLS backbone network through simple configuration to provide multicast services, which reduces upgrade and maintenance costs and facilitates network convergence.
Item | Manual P2MP TE Tunnel | Automatic P2MP TE Tunnel
---|---|---
Trigger method | Manually triggered by users. | Automatically triggered by services.
Usage scenario | Multicast services, excluding NG MVPN or multicast VPLS, are transmitted. | NG MVPN or multicast VPLS services are transmitted.
Traffic import method | PIM or static IGMP is used to direct services to LSPs. | Services are automatically directed to LSPs.
Pre-configuration Tasks
Before configuring P2MP TE tunnels, complete the following tasks:
Configure OSPF or IS-IS to implement IP connectivity between nodes on the P2MP TE tunnel.
- Enable MPLS TE and RSVP-TE.
- Configure CSPF.
- Configure OSPF or IS-IS TE.
- (Optional) Configure MPLS TE attributes for links.
Enabling P2MP TE Globally
This section describes how to enable P2MP TE globally on each node before a P2MP TE tunnel is established.
(Optional) Disabling P2MP TE on an Interface
You can disable P2MP TE on a specific interface during the network planning.
Context
After P2MP TE is globally enabled, P2MP TE is automatically enabled on each MPLS TE-enabled interface on the local node. If network planning requires that P2MP TE be disabled on a specific interface, or if an interface does not need P2MP TE because it does not support P2MP forwarding, disable P2MP TE on that interface.
Procedure
- Run system-view
The system view is displayed.
- Run interface interface-type interface-number
The interface view is displayed.
- Run mpls te p2mp-te disable
P2MP TE is disabled on the interface.
After the mpls te p2mp-te disable command is run, P2MP TE LSPs established on the interface are torn down, and newly configured P2MP TE LSPs on the interface fail to be established.
- Run commit
The configuration is committed.
(Optional) Setting Leaf Switching and Deletion Delays
To prevent two copies of traffic on a P2MP TE tunnel's egress, a leaf CR-LSP switchover hold-off time and a deletion hold-off time can be set for MBB.
Context
Before the primary sub-LSP is deleted, both the primary sub-LSP and the modified sub-LSP carry traffic. If the egress cannot restrict itself to receiving traffic from only one sub-LSP, two copies of traffic exist. To prevent this, perform the following steps to set the leaf CR-LSP switchover hold-off time and deletion hold-off time.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
MPLS is enabled globally and the MPLS view is displayed.
- Run mpls te
MPLS TE is enabled globally.
A leaf CR-LSP switchover hold-off time and deletion hold-off time can be set only after MPLS TE is enabled globally.
- Run mpls te p2mp-te leaf switch-delay switch-time delete-delay delete-time
The leaf MBB switchover delay and deletion delay are set.
After the mpls te p2mp-te leaf switch-delay switch-time delete-delay delete-time command is run, traffic may be interrupted in the following scenarios:
- The modified sub-LSP is ready on the ingress and the primary sub-LSP has been deleted, but the modified sub-LSP is not ready on the egress. In this case, when the ingress switches traffic to the modified sub-LSP, traffic is temporarily interrupted.
- The primary sub-LSP has been deleted, but a modified sub-LSP failure message cannot be immediately sent to the ingress. In this case, traffic is temporarily interrupted.
- Run commit
The configuration is committed.
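A minimal sketch follows. The delay values are examples only; choose values based on your network and the value ranges given in the command reference:
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te
[~HUAWEI-mpls] mpls te p2mp-te leaf switch-delay 10 delete-delay 20
[*HUAWEI-mpls] commit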
Configuring Leaf Lists
Configuring a leaf list on an ingress specifies all leaf nodes on a P2MP TE tunnel. A leaf list helps you configure and manage these leaf nodes uniformly. The steps in this section must be performed if P2MP TE tunnels are manually established. The steps in this section are optional if the establishment of P2MP TE tunnels is automatically triggered by a service.
Context
For a P2MP TE tunnel, the path that originates from the ingress and is destined for each leaf node can be calculated either by constrained shortest path first (CSPF) or by planning an explicit path for a specific leaf node or each leaf node. After each leaf node is configured on an ingress, the ingress sends signaling packets to each leaf node and then establishes a P2MP TE tunnel. The NetEngine 8000 F uses leaf lists to configure and manage leaf nodes. All leaf nodes and their explicit paths are integrated into a table, which helps you configure and manage the leaf nodes uniformly.
An MPLS network that transmits multicast services dynamically selects leaf nodes on an automatic P2MP TE tunnel and uses CSPF to calculate a path destined for each leaf node. To control the leaf nodes of an automatic P2MP TE tunnel, configure a leaf list.
Explicit path planning requires you to configure an explicit path for a specific leaf node or each leaf node, and use the explicit path in the leaf list view.
- Remerge event: occurs when two sub-LSPs have different inbound interfaces but the same outbound interface on a transit node. Figure 1-2235 shows that a remerge event occurs on the outbound interface shared by two sub-LSPs.
- Crossover event: occurs when two sub-LSPs have different inbound and outbound interfaces on a transit node. Figure 1-2235 shows that a crossover event occurs on the transit node shared by the two sub-LSPs.
Procedure
- Run system-view
The system view is displayed.
- (Optional) Configure an explicit path to each leaf node.
For a P2MP TE tunnel, an explicit path can be configured for a specific leaf node or each leaf node.
- Run mpls te leaf-list leaf-list-name
A leaf list is established for a P2MP RSVP-TE tunnel.
- Run destination leaf-address
A leaf node is established in the leaf list.
The leaf-address parameter specifies the MPLS LSR ID of each leaf node.
- (Optional) Run path explicit-path path-name
An explicit path is specified for the leaf node.
The explicit-path path-name parameter specifies the name of the explicit path established in Step 2.
Repeat Step 3 and Step 4 on a P2MP TE tunnel to configure all leaf nodes.
- Run commit
The configuration is committed.
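The following sketch configures a leaf list with one leaf node and an optional explicit path. The leaf-list name, path name, addresses, and view prompts are illustrative; the next hop command assumes the standard explicit path configuration described in (Optional) Configuring an Explicit Path:
<HUAWEI> system-view
[~HUAWEI] explicit-path path1
[*HUAWEI-explicit-path-path1] next hop 10.1.1.2
[*HUAWEI-explicit-path-path1] quit
[*HUAWEI] mpls te leaf-list leaflist1
[*HUAWEI-leaf-list-leaflist1] destination 3.3.3.3
[*HUAWEI-leaf-list-leaflist1-3.3.3.3] path explicit-path path1
[*HUAWEI-leaf-list-leaflist1-3.3.3.3] commit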
Configuring a P2MP TE Tunnel Interface
Configuring a tunnel interface on an ingress helps you to manage and maintain the P2MP TE tunnel.
Context
A P2MP TE tunnel is established by binding multiple sub-LSPs to a P2MP TE tunnel interface. A network administrator configures a tunnel interface to manage and maintain the tunnel. After a tunnel interface is configured on an ingress, the ingress sends signaling packets to all leaf nodes to establish a tunnel.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
A tunnel interface is created, and the tunnel interface view is displayed.
- Run either of the following commands to assign an IP address to the tunnel interface:
Run ip address ip-address { mask | mask-length } [ sub ]
An IP address is assigned to the tunnel interface.
Run ip address unnumbered interface interface-type interface-number
The tunnel interface is allowed to borrow the IP address of a specified interface.
A tunnel interface must obtain an IP address before it can forward traffic. An MPLS TE tunnel is unidirectional and does not need a peer address. Therefore, there is no need to specifically configure an IP address for the tunnel interface. A TE tunnel interface usually uses the ingress LSR ID as its IP address.
- Run tunnel-protocol mpls te
MPLS TE is configured as a tunnel protocol.
- Run mpls te p2mp-mode
A P2MP TE tunnel is established.
- Run mpls te tunnel-id tunnel-id
A tunnel ID is set.
- Run mpls te leaf-list leaf-list-name
A leaf list is specified for the P2MP TE tunnel.
- (Optional) Perform the following operations to set other tunnel attributes as needed.
Table 1-940 Operations
Operation
Description
Run mpls te bandwidth ct0 ct0-bw-value
The bandwidth is configured for the tunnel.
- A P2MP TE tunnel is established to provide sufficient network bandwidth for multicast services. Therefore, set the bandwidth value.
The bandwidth used by the tunnel cannot be higher than the maximum reservable link bandwidth.
Ignore this step if the TE tunnel is used only for changing the data transmission path.
Run mpls te record-route [ label ]
The route and label recording function for a manual P2MP TE tunnel is enabled.
This step enables nodes along a P2MP TE tunnel to use RSVP messages to record detailed P2MP TE tunnel information, including the IP address of each hop. The label parameter in the command enables RSVP messages to record label values.
Run mpls te resv-style { se | ff }
A resource reservation style is specified.
-
Run mpls te path metric-type { igp | te }
A link metric type used to select links is specified.
-
Run mpls te affinity property properties [ mask mask-value ]
An affinity and its mask are specified.
An affinity is a 32-bit vector value used to describe an MPLS link. An affinity and an administrative group attribute define the nodes through which an MPLS TE tunnel passes. Affinity masks determine the link properties that a device must check. If some bits in the mask are 1, at least one bit in an administrative group is 1, and the corresponding bit in the affinity must be 1. If the bits in the affinity are 0s, the corresponding bits in the administrative group cannot be 1.
You can use an affinity to control the nodes through which a manual P2MP TE tunnel passes.
Run mpls te hop-limit hop-limit-value
A hop limit is set for a manual P2MP TE tunnel.
The mpls te hop-limit command sets the maximum number of hops that each sub-LSP in a P2MP TE tunnel supports.
Run mpls te tie-breaking { least-fill | most-fill | random }
A rule for selecting a route among multiple routes to the same destination is specified.
-
Run mpls te priority setup-priority [ hold-priority ]
The priorities of the tunnel are set.
The setup priority of a tunnel must be no higher than its holding priority. A setup priority value must be greater than or equal to a holding priority value.
If resources are insufficient, setting the setup and holding priority values allows LSPs with lower priorities to release resources for establishing LSPs with higher priorities.
Run mpls te reoptimization [ frequency interval ]
Periodic re-optimization is enabled for a manual P2MP TE Tunnel.
Periodic re-optimization allows a P2MP TE tunnel to be automatically reestablished over a better path. After a better path to the same destination has been calculated for a certain reason, such as a cost change, a TE tunnel will be automatically reestablished, optimizing resources on a network.
Run mpls te lsp-tp outbound
Traffic policing is enabled for a manual P2MP TE tunnel.
Physical links over which a P2MP TE tunnel is established transmit traffic of other TE tunnels, traffic of non-CR LSP traffic, or even IP traffic, in addition to TE tunnel traffic. To limit TE traffic within a configured bandwidth range, run the mpls te lsp-tp outbound command.
- Run commit
The configuration is committed.
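A minimal sketch of a manual P2MP TE tunnel interface follows. The tunnel number, tunnel ID, bandwidth, and leaf-list name are examples, and leaflist1 is assumed to have been created as described in Configuring Leaf Lists:
<HUAWEI> system-view
[~HUAWEI] interface Tunnel20
[*HUAWEI-Tunnel20] ip address unnumbered interface LoopBack0
[*HUAWEI-Tunnel20] tunnel-protocol mpls te
[*HUAWEI-Tunnel20] mpls te p2mp-mode
[*HUAWEI-Tunnel20] mpls te tunnel-id 200
[*HUAWEI-Tunnel20] mpls te leaf-list leaflist1
[*HUAWEI-Tunnel20] mpls te bandwidth ct0 10000
[*HUAWEI-Tunnel20] commit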
(Optional) Configuring a P2MP Tunnel Template
Attributes can be set in a P2MP tunnel template that is used to automatically establish P2MP TE tunnels.
Context
Attributes of an automatic P2MP TE tunnel can only be defined in a P2MP tunnel template, but cannot be configured on a tunnel interface because the automatic P2MP TE tunnel has no tunnel interface. When NG MVPN or multicast VPLS is deployed on a network, nodes that transmit multicast traffic can reference the template and use attributes defined in the template to automatically establish P2MP TE tunnels.
Procedure
- Run system-view
The system view is displayed.
- Run mpls te p2mp-template template-name
A P2MP tunnel template is created, and the P2MP tunnel template view is displayed.
- Select one or more of the following operations as needed.
Table 1-941 Operations
Operation
Description
Run leaf-list leaf-list-name
A leaf list is configured.
When multicast services arrive at a node, the node automatically selects leaf nodes and establishes a sub-LSP destined for each leaf node.
This step enables a node that multicast services access to select leaf nodes in a specified leaf list.
NOTE: Before running the leaf-list command, the task described in Configuring Leaf Lists must be complete. The leaf-list-name value in this step must specify an existing leaf list.
Run bandwidth ct0 bw-value
The CT0 bandwidth is set for the automatic P2MP TE tunnel.
Before bandwidth protection is provided for traffic transmitted along an automatic P2MP TE tunnel, run the bandwidth command to set the required bandwidth value for the tunnel. Nodes on the P2MP TE tunnel can then reserve bandwidth for services, which implements bandwidth protection.
Run record-route [ label ]
The route and label record function for an automatic P2MP TE tunnel is enabled.
This step enables nodes along an automatic P2MP TE tunnel to use RSVP messages to record detailed P2MP TE tunnel information, including the IP address of each hop. The label parameter in the record-route command enables RSVP messages to record label values.
Run resv-style { se | ff }
A resource reservation style is specified.
-
Run path metric-type { igp | te }
A link metric type used to select links is specified.
-
Run affinity property properties [ mask mask-value ] or affinity primary { include-all | include-any | exclude } bit-name &<1-32>
An affinity and its mask are specified.
An affinity is a 32-bit vector value used to describe an MPLS link. An affinity and an administrative group attribute define the nodes through which an MPLS TE tunnel passes. Affinity masks determine the link properties that a device must check. If some bits in the mask are 1, at least one bit in an administrative group is 1, and the corresponding bit in the affinity must be 1. If the bits in the affinity are 0s, the corresponding bits in the administrative group cannot be 1.
You can use an affinity to control the nodes through which an automatic P2MP TE tunnel passes.
Run hop-limit hop-limit-value
A hop limit is set for an automatic P2MP TE tunnel.
The hop-limit command sets the maximum number of hops that each sub-LSP in an automatic P2MP TE tunnel supports.
Run tie-breaking { least-fill | most-fill | random }
A rule for selecting a route among multiple routes to the same destination is specified.
-
Run priority setup-priority [ hold-priority ]
The setup and holding priorities are set.
The setup priority of a tunnel must be no higher than its holding priority. A setup priority value must be greater than or equal to a holding priority value.
If resources are insufficient, setting the setup and holding priority values allows LSPs with lower priorities to release resources for establishing LSPs with higher priorities.
Run reoptimization [ frequency interval ]
Periodic re-optimization is enabled for an automatic P2MP TE tunnel.
Periodic re-optimization allows a P2MP TE tunnel to be automatically reestablished over a better path. After a better path to the same destination has been calculated for a certain reason, such as a cost change, a TE tunnel will be automatically reestablished, optimizing resources on a network.
Run lsp-tp outbound
Traffic policing is enabled for an automatic P2MP TE tunnel.
Physical links over which a P2MP TE tunnel is established transmit traffic of other TE tunnels, traffic of non-CR LSP traffic, or even IP traffic, in addition to TE tunnel traffic. To limit TE traffic within a configured bandwidth range, run the lsp-tp outbound command.
- (Optional) Run cspf disable
CSPF is disabled in the P2MP template.
In an inter-AS scenario, if a loose explicit path is configured in a P2MP template, you need to run this command only when NG MVPN triggers the establishment of a dynamic P2MP tunnel. You are advised not to run this command in other scenarios.
- Run commit
The configuration is committed.
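A minimal template sketch follows. The template name, bandwidth, and priority values are illustrative, and only a subset of the attributes in Table 1-941 is shown:
<HUAWEI> system-view
[~HUAWEI] mpls te p2mp-template template1
[*HUAWEI-p2mp-template-template1] bandwidth ct0 10000
[*HUAWEI-p2mp-template-template1] record-route label
[*HUAWEI-p2mp-template-template1] priority 4 4
[*HUAWEI-p2mp-template-template1] commit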
(Optional) Configuring a P2MP TE Tunnel to Support Soft Preemption
Priorities and preemption are used to allow TE tunnels to be established preferentially to transmit important services, preventing resource competition during tunnel establishment.
Context
- Hard preemption: A CR-LSP with a higher setup priority can directly preempt resources assigned to a CR-LSP with a lower holding priority. Some traffic is dropped on the CR-LSP with a lower holding priority during the hard preemption process.
- Soft preemption: After a CR-LSP with a higher setup priority preempts bandwidth of a CR-LSP with a lower holding priority, the soft preemption function retains the CR-LSP with a lower holding priority for a specified period of time. If the ingress finds a better path for this CR-LSP after the time elapses, the ingress uses the make-before-break mechanism to reestablish the CR-LSP over the new path. If the ingress fails to find a better path after the time elapses, the CR-LSP goes down.
(Optional) Configuring the Reliability Enhancement Function for a P2MP Tunnel
Configure a function to improve service reliability as needed.
Context
To improve reliability of traffic transmitted along a P2MP tunnel, configure the following reliability enhancement functions as needed:
Rapid MPLS P2MP switching
With this function, if a device detects a fault in the active link, the device rapidly switches services to the standby link over which an MPLS P2MP tunnel is established, which improves service reliability.
Multicast load balancing on a trunk interface
Without this function, a device randomly selects a trunk member interface to forward multicast traffic. If this member interface fails, multicast traffic is interrupted. With this function, multicast traffic along a P2MP tunnel is balanced among all trunk member interfaces. This function helps improve service reliability and increase available bandwidth for multicast traffic.
MPLS P2MP load balancing
To enable P2MP load balancing globally, run the mpls p2mp force-loadbalance enable command. In a multicast scenario where load balancing is configured in the Eth-Trunk interface view, if a leaf node connected to the Eth-Trunk interface joins or leaves the multicast group, packet loss occurs on the other leaf nodes connected to non-Eth-Trunk interfaces due to the model change. After the mpls p2mp force-loadbalance enable command is run, load balancing is forcibly enabled in the system view, thereby preventing packet loss.
WTR time for traffic to be switched from the MPLS P2MP FRR path to the primary path.
If the primary MPLS P2MP path fails, traffic on the forwarding plane is rapidly switched to the backup path. If the primary path recovers before MPLS P2MP convergence is complete on the downstream node, traffic is switched back to the primary path within the default WTR time. If only some entries are generated for the primary path within the period, some packets are dropped when traffic switches back to the primary path. To ensure that all entries are generated for the primary path during the switchback and prevent packet loss, you can flexibly set the WTR time for traffic to be switched from the MPLS P2MP FRR path to the primary path.
Procedure
- Configure rapid MPLS P2MP switching.
- Run system-view
The system view is displayed.
- Run mpls p2mp fast-switch enable
Rapid MPLS P2MP switching is enabled.
- Run commit
The configuration is committed.
- Configure multicast load balancing on a trunk interface.
- Run system-view
The system view is displayed.
- Run interface eth-trunk trunk-id
The Eth-Trunk interface view is displayed.
- Run multicast p2mp load-balance enable
Multicast traffic load balancing among trunk member interfaces is enabled on the trunk interface that functions as an outbound interface of a P2MP tunnel.
- Run multicast p2mp root load-balance enable
Multicast traffic load balancing among trunk member interfaces is enabled on the trunk interface that resides on the P2MP tunnel's root node or ABR (ASBR) in the segmented NG MVPN scenario.
- (Optional) Run multicast p2mp root load-balance spmsi disable
Multicast traffic load balancing among trunk member interfaces is disabled on the root node of an S-PMSI tunnel.
- Run commit
The configuration is committed.
- Configure MPLS P2MP load balancing.
- Run system-view
The system view is displayed.
- Run mpls p2mp force-loadbalance enable
MPLS P2MP load balancing is enabled globally.
- (Optional) Run multicast p2mp load-balance number load-balance_number
The number of trunk member interfaces that balance multicast traffic on a P2MP tunnel is set.
- Run commit
The configuration is committed.
- Set the WTR time for traffic to be switched from the MPLS P2MP FRR path to the primary path.
- Run system-view
The system view is displayed.
- Run mpls p2mp frr-wtr time-value
The WTR time is set for traffic to be switched from the MPLS P2MP FRR path to the primary path.
- Run commit
The configuration is committed.
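The following sketch enables rapid MPLS P2MP switching, trunk-based multicast load balancing, and a P2MP FRR WTR time in one session. The Eth-Trunk number and WTR value are examples only:
<HUAWEI> system-view
[~HUAWEI] mpls p2mp fast-switch enable
[*HUAWEI] interface Eth-Trunk 10
[*HUAWEI-Eth-Trunk10] multicast p2mp load-balance enable
[*HUAWEI-Eth-Trunk10] quit
[*HUAWEI] mpls p2mp frr-wtr 60
[*HUAWEI] commit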
Verifying the P2MP TE Tunnel Configuration
After configuring a P2MP TE tunnel, you can view information about the tunnel when it is in the Up state.
Procedure
- Run the display mpls te p2mp tunnel-interface command to check information about the P2MP TE tunnel interface on the ingress and all sub-LSPs.
- Run the display mpls te p2mp-template command to check P2MP TE tunnel template configurations and information about P2MP TE tunnels established using this template.
- Run the display mpls te leaf-list command to check information about the leaf list configured on the ingress.
- Run the display mpls te p2mp tunnel path command to check the path attributes of the P2MP TE tunnel.
- Run the display mpls multicast-lsp protocol p2mp-te command to check the sub-LSP status and MPLS forwarding entries, including the incoming label, outgoing label, inbound interface, and outbound interface of each sub-LSP.
- Run the display mpls multicast-lsp statistics protocol p2mp-te command to check statistics about sub-LSPs that pass through a local node.
- Run the display mpls rsvp-te p2mp lsp command to check information about RSVP signaling of the P2MP LSP.
- Run the display mpls rsvp-te p2mp session command to check statistics about the RSVP signaling packets sent and received over the P2MP TE tunnel.
- Run the display mpls rsvp-te p2mp statistics command to check information about RSVP signaling packets.
Follow-up Procedure
If errors occur in tunnel services, perform the following to quickly restore the services if no other workarounds are available.
- Run the reset mpls te auto-tunnel p2mp name tunnel-name command in the user view to reestablish the P2MP TE tunnel.
- Run the reset mpls rsvp-te p2mp sub-lsp tunnel-id lsp-id ingress-lsr-id sub-group-id sub-group-origin-id s2l-destination command in the user view to restart the sub-LSP of the P2MP TE tunnel.
Configuring BFD for P2MP TE
BFD for P2MP TE rapidly monitors P2MP TE tunnels, which helps speed up responses to faults and improve network reliability.
Usage Scenario
If P2MP TE tunnels are established to transmit NG MVPN and multicast VPLS services, BFD for P2MP TE can be configured to rapidly detect faults in P2MP TE tunnels, which improves network reliability. To configure BFD for P2MP TE, run the bfd enable command in the P2MP tunnel template view so that BFD sessions for P2MP TE can be automatically established while P2MP TE tunnels are being established.
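A minimal sketch follows, assuming that P2MP TE tunnels are established using the P2MP tunnel template template1 (the template name is illustrative) and that BFD must first be enabled globally with the bfd command, which is an assumption not stated in this section:
<HUAWEI> system-view
[~HUAWEI] bfd
[*HUAWEI-bfd] quit
[*HUAWEI] mpls te p2mp-template template1
[*HUAWEI-p2mp-template-template1] bfd enable
[*HUAWEI-p2mp-template-template1] commit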
Configuring P2MP TE FRR
P2MP TE fast reroute (FRR) provides local link protection for a P2MP TE tunnel. It establishes a bypass tunnel to protect sub-LSPs. If a link fails on the P2MP TE tunnel, traffic switches to the bypass tunnel within 50 milliseconds, which increases tunnel reliability.
Usage Scenario
P2MP TE FRR establishes a bypass tunnel to provide local link protection for the P2MP TE tunnel called the primary tunnel. The bypass tunnel is a P2P TE tunnel. The principles and concepts of P2MP TE FRR are similar to those of P2P TE FRR.
P2P and P2MP TE tunnels can share a bypass tunnel. Therefore, when planning bandwidth for the bypass tunnel, ensure that the bypass tunnel bandwidth is equal to the total bandwidth of the bound P2P and P2MP tunnels.
If traffic is switched to a P2MP FRR tunnel, the forwarding performance deteriorates temporarily, and the impact is removed after traffic switches back to the primary tunnel.
In NG MVPN scenarios, when P2MP TE FRR protection is used, a delay in going up needs to be configured for the interface of the primary P2MP tunnel. The delay is related to the number of VPN multicast groups carried over the tunnel. For 1000 multicast groups, set the delay to 30s, and increase the delay if more multicast groups are configured.
Configuring Manual FRR for a Manually Configured P2MP TE Tunnel
Manual FRR can be configured on the tunnel interface of a manually configured P2MP TE tunnel.
Context
- Enable the P2MP TE FRR function on the tunnel interface of the primary tunnel (P2MP TE tunnel).
- Configure a bypass tunnel on the point of local repair (PLR) node and bind the bypass tunnel to the primary tunnel.
Manual P2MP TE FRR only applies to manual P2MP TE tunnels.
Configuring FRR for Automatic P2MP TE Tunnels
Configuring P2MP TE Auto FRR
Auto fast reroute (FRR) is a local protection mechanism in MPLS TE. Auto FRR deployment is easier than manual FRR deployment.
Usage Scenario
FRR protection is configured for networks requiring high reliability. If P2MP TE manual FRR is used (configured by following the steps in Configuring P2MP TE FRR), a lot of configurations are needed on a network with complex topology and a great number of links to be protected. In this situation, P2MP TE Auto FRR can be configured.
Unlike P2MP TE manual FRR, P2MP TE Auto FRR automatically creates a bypass tunnel that meets traffic requirements, which simplifies configurations.
If both P2MP TE Auto FRR and an SRLG attribute are configured for a bypass tunnel, the primary and bypass tunnels must be in different SRLGs. If the two tunnels are in the same SRLG, the bypass tunnel may fail to be established.
A bypass tunnel with bandwidth protection takes precedence over one without bandwidth protection.
Pre-configuration Tasks
Before configuring P2MP TE Auto FRR, complete the following tasks:
Configure a primary P2MP TE tunnel.
Enable MPLS, MPLS TE, and RSVP-TE globally and in the physical interface view on each node along a bypass tunnel to be established. For configuration details, see Enabling MPLS TE and RSVP-TE.
(Optional) To protect bandwidth of the primary tunnel, set the physical link bandwidth for the bypass tunnel to be established. For configuration details, see (Optional) Configuring Link TE Attributes.
Enable CSPF on each node along the bypass tunnel to be established.
Enabling P2MP TE Auto FRR
Enabling P2MP TE Auto FRR on the ingress or a transit node of the primary tunnel is a prerequisite for configuring P2MP TE Auto FRR.
Context
- Configure Auto FRR for the entire device (and its interfaces) when Auto FRR needs to be enabled on most interfaces.
- Configure Auto FRR only on specified interfaces when it needs to be enabled on only a few interfaces.
Enabling the TE FRR and Configuring the AutoBypass Tunnel Attributes
After MPLS TE FRR is enabled on the ingress of a primary LSP, a bypass LSP is established automatically.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The tunnel interface view of the primary LSP is displayed.
- Run mpls te fast-reroute [ bandwidth ]
TE FRR is enabled.
If TE FRR bandwidth protection is needed, configure the bandwidth parameter in this command.
- (Optional) Run mpls te frr-switch degrade
The MPLS TE tunnel is enabled to mask the FRR function.
After TE FRR takes effect, traffic is switched to the bypass LSP when the primary LSP fails. If the bypass LSP is not the optimal path, traffic congestion easily occurs. To prevent traffic congestion, you can configure LDP to protect TE tunnels. For the LDP protection function to take effect, run the mpls te frr-switch degrade command to enable the MPLS TE tunnel to mask the FRR function. After the command is run:
- If the primary LSP is in the FRR-in-use state (that is, traffic has been switched to the bypass LSP), traffic cannot be switched to the primary LSP.
- If HSB is configured for the tunnel and an HSB LSP is available, traffic is switched to the HSB LSP.
- If no HSB LSP is available for the tunnel, the tunnel becomes unavailable, and traffic is switched to another tunnel, such as an LDP tunnel.
- If no tunnels are available, traffic is interrupted.
- (Optional) Run mpls te bypass-attributes [ bandwidth bandwidth | priority setup-priority [ hold-priority ] ]
Attributes are set for the automatic bypass LSP.
The bandwidth attribute can only be set for the bypass LSP after the mpls te fast-reroute bandwidth command is run for the primary LSP.
The bypass LSP bandwidth cannot exceed the primary LSP bandwidth.
If no attributes are configured for an automatic bypass LSP, by default, the automatic bypass LSP uses the same bandwidth as that of the primary LSP.
The setup priority of a bypass LSP must be lower than or equal to the holding priority. These priorities cannot be higher than the corresponding priorities of the primary LSP.
If TE FRR is disabled, the bypass LSP attributes are automatically deleted.
- Run quit
Return to the system view.
- Run interface interface-type interface-number
The interface view of the link through which the primary LSP passes is displayed.
- (Optional) Run mpls te auto-frr attributes { bandwidth bandwidth | priority setup-priority [ hold-priority ] | hop-limit hop-limit-value }
Attributes are configured for the bypass LSP.
- Run quit
Return to the system view.
- (Optional) Configure affinities for the automatic bypass tunnel.
Affinities determine link attributes of an automatic bypass LSP. Affinities and a link administrative group attribute are used together to determine over which links the automatic bypass LSP can be established.
Perform either of the following configurations:
Set a hexadecimal number.
Run interface interface-type interface-number
The interface view of the link through which the bypass LSP passes is displayed.
Run mpls te link administrative group value
An administrative group attribute is specified.
(Optional) Run mpls te auto-frr attributes affinity property properties [ mask mask-value ] or mpls te auto-frr attributes affinity { include-all | include-any | exclude } bit-name &<1-32>
An affinity for a bypass LSP is configured.
Run quit
Return to the system view.
Run interface tunnel tunnel-number
The view of the primary tunnel interface is displayed.
Run mpls te bypass-attributes affinity property properties [ mask mask-value ]
An affinity of the bypass LSP is configured.
Set an affinity name.
Naming an affinity makes the affinity easy to understand and maintain. Setting an affinity name is recommended.
Run path-constraint affinity-mapping
An affinity name mapping template is configured, and the template view is displayed.
Repeat this step on each node used to calculate the path over which an automatic bypass LSP is established. The affinity name configured on each node must match the mappings between affinity bits and names.
Run attribute affinity-name bit-sequence bit-number
A mapping between an affinity bit and name is configured.
There are 128 affinity bits in total. This step configures one affinity bit. You can repeat this step to configure some or all affinity bits.
Run quit
Return to the system view.
Run interface interface-type interface-number
The interface view of the link through which the bypass LSP passes is displayed.
Run mpls te link administrative group name bit-name &<1-32>
An administrative group attribute is specified.
Run quit
Return to the system view.
Run interface tunnel tunnel-number
The tunnel interface view of the primary LSP is displayed.
Run mpls te bypass-attributes affinity { include-all | include-any | exclude } bit-name &<1-32>
An affinity of the bypass LSP is configured.
If an automatic bypass LSP that satisfies the specified affinity cannot be established, a node will bind a manual bypass LSP satisfying the specified affinity to the primary LSP.
- Run commit
The configuration is committed.
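A minimal sketch of the tunnel-interface part of this procedure follows. The tunnel number, CT0 bandwidth, and bypass bandwidth are examples; the bypass bandwidth does not exceed the primary LSP bandwidth, as required above:
<HUAWEI> system-view
[~HUAWEI] interface Tunnel20
[~HUAWEI-Tunnel20] mpls te bandwidth ct0 10000
[*HUAWEI-Tunnel20] mpls te fast-reroute bandwidth
[*HUAWEI-Tunnel20] mpls te bypass-attributes bandwidth 5000
[*HUAWEI-Tunnel20] commit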
(Optional) Configuring Auto Bypass Tunnel Re-Optimization
Auto bypass tunnel re-optimization allows paths to be recalculated at certain intervals for an auto bypass tunnel. If an optimal path is recalculated, a new auto bypass tunnel will be set up over this optimal path. In this manner, network resources are optimized.
Context
Network changes often cause the changes in optimal paths. Auto bypass tunnel re-optimization allows the system to re-optimize an auto bypass tunnel if an optimal path to the same destination is found due to some reasons, such as the changes in the cost. In this manner, network resources are optimized.
This configuration task is invalid for LSPs in the FRR-in-use state.
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls te auto-frr reoptimization [ frequency interval ]
Auto bypass tunnel re-optimization is enabled.
- (Optional) Run return
Return to the user view.
- (Optional) Run mpls te reoptimization [ auto-tunnel name tunnel-interface | tunnel tunnel-number ]
Manual re-optimization is enabled.
After you configure the automatic re-optimization in the MPLS view, you can return to the user view and run the mpls te reoptimization command to immediately re-optimize the tunnels on which the automatic re-optimization is enabled. After you perform the manual re-optimization, the timer of the automatic re-optimization is reset and counts again.
- Run commit
The configurations are committed.
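For example, to re-optimize auto bypass tunnels every hour (the interval value is an example only):
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te auto-frr reoptimization frequency 3600
[*HUAWEI-mpls] commit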
Configuring DS-TE
This feature combines traditional TE tunnels with the DiffServ model to provide QoS guarantee based on service types.
Usage Scenario
A single MPLS TE tunnel may need to carry multiple types of services in the following scenarios:
- A single TE tunnel transmits various types of services in a non-VPN scenario.
- A single TE tunnel transmits various types of services in a VPN instance.
- A single TE tunnel transmits various types of services in multiple VPN instances.
- A single TE tunnel transmits various types of VPN and non-VPN services.
Traditional MPLS TE tunnels (non-standard DS-TE tunnels) cannot provide differentiated quality of service (QoS) based on service types. For example, when a TE tunnel carries both voice and video flows, video flows may have more duplicate frames than voice flows and therefore require a higher drop precedence than voice flows. The same drop precedence, however, is used for voice and video flows on MPLS TE tunnels.
To prevent services over a tunnel from interfering with each other, a tunnel can be established for each type of service in a VPN instance or for each type of non-VPN service. This solution, however, wastes resources because a large number of tunnels must be established when many VPN instances carry various services.
In the preceding MPLS TE tunnel scenarios, the DS-TE tunnel solution is optimal. An edge node in a DS-TE domain classifies services and adds service type information in the EXP field in packets. A transit node merely checks the EXP field to select a proper PHB to forward packets.
A DS-TE tunnel classifies services and reserves resources for each type of services, which improves network resource use efficiency. A DS-TE tunnel carries a maximum of eight types of services.
- The IETF DS-TE tunnel configuration requires the ingress and egress hardware to support HQoS. The non-IETF DS-TE tunnel has no such restriction.
- If the same type of service in multiple VPN instances is carried using the same CT of a DS-TE tunnel, the bandwidth of each type of service in each VPN instance can be set on an access CE to prevent services of the same type but different VPN instances from competing for resources.
- To prevent non-VPN services and VPN services from competing for resources, you can configure DS-TE to carry VPN services only or configure the bandwidth for non-VPN services in DS-TE.
Pre-configuration Tasks
Before configuring DS-TE, complete the following tasks:
- Configure unicast static routes or an IGP to ensure reachability between LSRs at the network layer.
- Set an LSR ID on each LSR.
- Enable MPLS globally and on interfaces on all LSRs.
- Enable MPLS TE and RSVP-TE on all LSRs and their interfaces.
- Enable behavior aggregate (BA) traffic classification on each LSR interface along an LSP.
Configuring a DS-TE Mode
You can configure the DS-TE mode of a device as either IETF mode or non-IETF mode.
Context
Perform the following steps on each LSR in a DS-TE domain:
Procedure
- Run system-view
The system view is displayed.
- Run mpls
The MPLS view is displayed.
- Run mpls te ds-te mode { ietf | non-ietf }
A DS-TE mode is specified.
- Run commit
The configuration is committed.
Follow-up Procedure
The IETF mode and non-IETF mode can be switched between each other on the NetEngine 8000 F. Table 1-942 describes switching between DS-TE modes. The arrow symbol (—>) indicates "switched to."
Item | Non-IETF—>IETF | IETF—>Non-IETF
---|---|---
Changes in bandwidth constraint models | N/A | RDM—>N/A; MAM—>N/A
Bandwidth change | BC0 bandwidth values remain. | BC0 bandwidth values remain.
TE-class mapping table | If the TE-class mapping table is not configured, the default TE-class mapping table is used. Otherwise, the configured one is used. (NOTE: For information about the default TE-class mapping table, see Table 1-943.) | No TE-class mapping table is used.
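A minimal sketch of setting the DS-TE mode on an LSR follows; repeat it on every LSR in the DS-TE domain (the device name is illustrative):
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te ds-te mode ietf
[*HUAWEI-mpls] commit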
Configuring a DS-TE Bandwidth Constraints Model
If CT bandwidth preemption is allowed, the Russian dolls model (RDM) is recommended to efficiently use bandwidth resources. If CT bandwidth preemption is not allowed, the maximum allocation model (MAM) is recommended.
Configuring Link Bandwidth
You can configure link bandwidth to limit the bandwidth for a DS-TE tunnel.
Procedure
- Run system-view
The system view is displayed.
- Run interface interface-type interface-number
The view of the link outbound interface is displayed.
- Run mpls te bandwidth max-reservable-bandwidth max-bw-value
The maximum reservable link bandwidth is set.
- Run mpls te bandwidth { bc0 bc0-bw-value | bc1 bc1-bw-value | bc2 bc2-bw-value | bc3 bc3-bw-value | bc4 bc4-bw-value | bc5 bc5-bw-value | bc6 bc6-bw-value | bc7 bc7-bw-value }
The bandwidth values of bandwidth constraints (BCs) are set for a link.
- Run commit
The configuration is committed.
Follow-up Procedure
Each bandwidth constraints model defines a specific relationship between the maximum reservable link bandwidth and the BC bandwidth values:
- In the RDM: max-reservable-bandwidth ≥ bc0-bw-value ≥ bc1-bw-value ≥ bc2-bw-value ≥ bc3-bw-value ≥ bc4-bw-value ≥ bc5-bw-value ≥ bc6-bw-value ≥ bc7-bw-value
- In the MAM: max-reservable-bandwidth ≥ bc0-bw-value + bc1-bw-value + bc2-bw-value + bc3-bw-value + bc4-bw-value + bc5-bw-value + bc6-bw-value + bc7-bw-value
The Bandwidth Constraint (BC) bandwidth refers to the bandwidth constraints on a link, whereas the CT bandwidth refers to the bandwidth constraints of various types of service traffic on a DS-TE tunnel. The BCi bandwidth (0 <= i <= 7) of a link must be greater than or equal to the sum of the CTi bandwidth values of all DS-TE tunnels passing through the link. For example, if three LSPs carrying CT1 traffic pass through a link and have bandwidth values x, y, and z, respectively, the BC1 bandwidth of the link interface must be greater than or equal to the sum of x, y, and z.
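For example, under the RDM, the following sketch satisfies the constraint max-reservable-bandwidth ≥ bc0 ≥ bc1 (the interface name and kbit/s values are illustrative):
<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet0/1/0
[~HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth bc1 50000
[*HUAWEI-GigabitEthernet0/1/0] commit
With these values, the total CT1 bandwidth of all DS-TE tunnels over this link must not exceed 50000 kbit/s.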
Configuring a Tunnel Interface
Before creating a DS-TE tunnel, create a tunnel interface and configure tunnel attributes in the view of the tunnel interface.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel interface-number
A tunnel interface is created, and the tunnel interface view is displayed.
- (Optional) Run description text
The tunnel description is configured.
- (Optional) Run either of the following commands to assign an IP address to the tunnel interface:
To configure an IP address, run ip address ip-address { mask | mask-length } [ sub ]
The secondary IP address of the tunnel interface can be configured only after the primary IP address is configured.
- To configure the tunnel interface to borrow the IP address of another interface, run ip address unnumbered interface interface-type interface-number
To forward traffic, the tunnel interface must have an IP address. An MPLS TE tunnel, however, is unidirectional, and no peer address exists. Therefore, it is unnecessary to assign a separate IP address to the tunnel interface. A tunnel interface usually borrows the address of the loopback interface that functions as the LSR ID of the local node.
- Run tunnel-protocol mpls te
MPLS TE is configured as a tunneling protocol.
- Run destination ip-address
The destination address of the tunnel is specified, which is usually the LSR ID of the egress.
- Run mpls te tunnel-id tunnel-id
A tunnel ID is set.
- Run mpls te signal-protocol { cr-static | rsvp-te }
A signaling protocol is configured for a tunnel.
- (Optional) Run mpls te priority setup-priority [ hold-priority ]
The tunnel priorities are set.
A smaller value indicates a higher priority.
The holding priority must be higher than or equal to the setup priority. If no holding priority is set, it takes the same value as the setup priority. If the combination of the bandwidth and priorities is not listed in the TE-class mapping table, LSPs cannot be established.
- Run commit
The configuration is committed.
Each time you change an MPLS TE parameter, run the commit command to commit the configuration.
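A minimal sketch of a DS-TE tunnel interface follows. The tunnel number, destination address (the LSR ID of the egress), and tunnel ID are illustrative:
<HUAWEI> system-view
[~HUAWEI] interface Tunnel30
[*HUAWEI-Tunnel30] ip address unnumbered interface LoopBack0
[*HUAWEI-Tunnel30] tunnel-protocol mpls te
[*HUAWEI-Tunnel30] destination 4.4.4.4
[*HUAWEI-Tunnel30] mpls te tunnel-id 300
[*HUAWEI-Tunnel30] mpls te signal-protocol rsvp-te
[*HUAWEI-Tunnel30] commit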
Configuring an RSVP CR-LSP and Specifying Bandwidth Values
When configuring an RSVP CR-LSP and specifying its bandwidth values, ensure that the sum of CT bandwidth values does not exceed the sum of BC bandwidth values.
Procedure
- Configure IGP TE.
For detailed configurations, see Configuring IGP TE (OSPF or IS-IS).
- Configure CSPF.
For configuration details, see Configuring CSPF.
- Configure bandwidth values for an MPLS TE tunnel.
Perform the following steps on the ingress of a tunnel:
For the same node, the sum of CTi bandwidth values must not exceed the BCi bandwidth value (0 <= i <= 7). CTi can use only the bandwidth resources of BCi.
If the bandwidth required by the MPLS TE tunnel is higher than 28,630 kbit/s, the available bandwidth assigned to the tunnel may not be precise, but the tunnel can be established successfully.
- (Optional) Configure an explicit path.
To limit the path over which an MPLS TE tunnel is established, configure an explicit path on the ingress of the tunnel. For configuration details, see (Optional) Configuring an Explicit Path.
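As an illustration of the bandwidth step, the following sketch sets a CT1 bandwidth on the ingress tunnel interface. The tunnel number and bandwidth value are examples, and the <CT1, priority> combination used by the tunnel must exist in the TE-class mapping table:
<HUAWEI> system-view
[~HUAWEI] interface Tunnel30
[~HUAWEI-Tunnel30] mpls te bandwidth ct1 10000
[*HUAWEI-Tunnel30] commit
On every link the tunnel traverses, the BC1 bandwidth must be at least 10000 kbit/s plus the CT1 bandwidth of any other DS-TE tunnels over that link.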
(Optional) Configuring a TE-Class Mapping Table
Configuring the same TE-class mapping table on the whole DS-TE domain is recommended. Otherwise, LSPs may be incorrectly established.
Context
Skip this section if the non-IETF DS-TE mode is used.
In IETF DS-TE mode, plan a TE-class mapping table. Configuring the same TE-class mapping table on the whole DS-TE domain is recommended. Otherwise, LSPs may be incorrectly established.
Perform the following steps on each LSR in a DS-TE domain:
Procedure
- Run system-view
The system view is displayed.
- Run te-class-mapping
A TE-class mapping table is created, and the TE-class mapping view is displayed.
- Run one or more of the following commands to configure TE-classes:
- To configure TE-class 0, run te-class0 class-type { ct0 | ct1 | ct2 | ct3 | ct4 | ct5 | ct6 | ct7 } priority priority [ description description-info ]
- To configure TE-class 1, run te-class1 class-type { ct0 | ct1 | ct2 | ct3 | ct4 | ct5 | ct6 | ct7 } priority priority [ description description-info ]
- To configure TE-class 2, run te-class2 class-type { ct0 | ct1 | ct2 | ct3 | ct4 | ct5 | ct6 | ct7 } priority priority [ description description-info ]
- To configure TE-class 3, run te-class3 class-type { ct0 | ct1 | ct2 | ct3 | ct4 | ct5 | ct6 | ct7 } priority priority [ description description-info ]
- To configure TE-class 4, run te-class4 class-type { ct0 | ct1 | ct2 | ct3 | ct4 | ct5 | ct6 | ct7 } priority priority [ description description-info ]
- To configure TE-class 5, run te-class5 class-type { ct0 | ct1 | ct2 | ct3 | ct4 | ct5 | ct6 | ct7 } priority priority [ description description-info ]
- To configure TE-class 6, run te-class6 class-type { ct0 | ct1 | ct2 | ct3 | ct4 | ct5 | ct6 | ct7 } priority priority [ description description-info ]
- To configure TE-class 7, run te-class7 class-type { ct0 | ct1 | ct2 | ct3 | ct4 | ct5 | ct6 | ct7 } priority priority [ description description-info ]
Note the following information when you configure a TE-class mapping table:
- The TE-class mapping table is unique on each device.
- The TE-class mapping table takes effect globally. It takes effect on all DS-TE tunnels passing through the local LSR.
A TE-class refers to a combination of a CT and a priority, in the format of <CT, priority>. The priority is the priority of a CR-LSP in the TE-class mapping table, not the EXP value in the MPLS header. The priority value is an integer ranging from 0 to 7. The smaller the value, the higher the priority is.
When you create a CR-LSP, you can set the setup and holding priorities for it (see Configuring a Tunnel Interface) and CT bandwidth values (see Configuring an RSVP CR-LSP and Specifying Bandwidth Values).
A CR-LSP can be established only when both <CT, setup-priority> and <CT, holding-priority> exist in the TE-class mapping table. For example, if the TE-class mapping table of a node contains only TE-Class[0] = <CT0, 6> and TE-Class[1] = <CT0, 7>, only the following three types of CR-LSPs can be set up successfully:
- Class-Type = CT0, setup-priority = 6, holding-priority = 6
- Class-Type = CT0, setup-priority = 7, holding-priority = 6
- Class-Type = CT0, setup-priority = 7, holding-priority = 7
The combination of setup-priority = 6 and hold-priority = 7 does not exist because the setup priority cannot be higher than the holding priority on a CR-LSP.
- In the MAM model, bandwidth preemption occurs only within the same CT; a CT does not preempt the bandwidth of a different CT.
- In the RDM model, CT bandwidth preemption is limited by the priorities of CR-LSPs and the matching BCs. Assume that the priorities of CR-LSPs are set to m and n and the CT values are set to i and j. If 0 <= m < n <= 7 and 0 <= i < j <= 7, the following situations occur:
- CTi with priority m can preempt the bandwidth of CTi with priority n or of CTj with priority n.
- Total CTi bandwidth <= BCi bandwidth
In IETF DS-TE mode, if no TE-class mapping table is configured, a default TE-class mapping table is used. Table 1-943 describes the default TE-class mapping table.
After the TE-class mapping is configured, to change TE-class descriptions, run the { te-class0 | te-class1 | te-class2 | te-class3 | te-class4 | te-class5 | te-class6 | te-class7 } description description-info command.
- Run commit
The configuration is committed.
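For instance, the two-entry table referenced in the example above (TE-Class[0] = <CT0, 6> and TE-Class[1] = <CT0, 7>) could be configured as follows; the device name LSRA is an assumption for illustration.
[~LSRA] te-class-mapping
[*LSRA-te-class-mapping] te-class0 class-type ct0 priority 6
[*LSRA-te-class-mapping] te-class1 class-type ct0 priority 7
[*LSRA-te-class-mapping] commit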
(Optional) Configuring CBTS
A service class can be set for the packets that an MPLS TE tunnel allows to pass through.
Context
When services recurse to multiple TE tunnels, the mpls te service-class command is run on the TE tunnel interface to set a service class so that a TE tunnel transmits services of a specified service class.
Procedure
- Run system-view
The system view is displayed.
- Run interface tunnel tunnel-number
The MPLS TE tunnel interface view is displayed.
- Run mpls te service-class { service-class & <1-8> | default }
A service class is set for packets that an MPLS TE tunnel allows to pass through.
This command is used only on the ingress of an MPLS TE tunnel.
If the mpls te service-class command is run repeatedly on a tunnel interface, the latest configuration overrides the previous one.
- Run commit
The configuration is committed.
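A minimal CBTS sketch follows. It is run on the ingress only; the tunnel number and the af1/ef keywords are assumptions, so use the service-class values supported by your software version.
# Let MPLS TE tunnel 10 carry only AF1 and EF traffic (illustrative values).
[~LSRA] interface Tunnel 10
[~LSRA-Tunnel10] mpls te service-class af1 ef
[*LSRA-Tunnel10] commit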
Verifying the DS-TE Configuration
After configuring DS-TE, you can verify DS-TE information and CT information of a tunnel.
Procedure
- Run the display mpls te ds-te { summary | te-class-mapping [ default | config | verbose ] } command to check DS-TE information.
- Run the display mpls te te-class-tunnel { all | { ct0 | ct1 | ct2 | ct3 | ct4 | ct5 | ct6 | ct7 } priority priority } command to check information about the TE tunnel associated with TE-classes.
- Run the display interface tunnel interface-number command to check CT traffic information on a specified tunnel interface.
- Run the display ospf [ process-id ] mpls-te [ area area-id ] [ self-originated ] command to check OSPF TE information.
- Run either of the following commands to check the IS-IS TE status:
- display isis traffic-eng advertisements [ lsp-id | local ] [ level-1 | level-2 | level-1-2 ] [ process-id | vpn-instance vpn-instance-name ]
- display isis traffic-eng statistics [ process-id | vpn-instance vpn-instance-name ]
Maintaining MPLS TE
This section describes how to delete MPLS TE information and debug MPLS TE.
Checking Connectivity of a TE Tunnel
The connectivity of a TE tunnel between the ingress and egress is checked.
Context
After configuring an MPLS TE tunnel, you can run the ping lsp command on the ingress of the TE tunnel to verify that the ping from the ingress to the egress is successful. If the ping fails, run the tracert lsp command to locate the fault.
Procedure
- Run the ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m interval | -r reply-mode | -s packet-size | -t time-out | -v ] * te tunnel tunnel-number [ hot-standby ] [ compatible-mode ] command to check the connectivity of a TE tunnel from the ingress to the egress.
If hot-standby is configured, a hot-standby CR-LSP is checked.
- Run the tracert lsp [ -a source-ip | -exp exp-value | -h ttl-value | -r reply-mode | -t time-out | -s size ] * te tunnel tunnel-number [ hot-standby ] [ compatible-mode ] [ detail ] command to check the nodes through which data packets pass along a TE tunnel from the ingress to the egress.
If hot-standby is configured, a hot-standby CR-LSP is checked.
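For example, the following usage sketch checks TE tunnel 10 and then traces its hot-standby CR-LSP; the tunnel number is an assumption for illustration.
# Check the primary CR-LSP of TE tunnel 10, then trace its hot-standby CR-LSP.
[~LSRA] ping lsp te tunnel 10
[~LSRA] tracert lsp te tunnel 10 hot-standby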
Checking a TE Tunnel Using NQA
After configuring MPLS TE, you can use Network Quality Analysis (NQA) to check the connectivity and jitters of a TE tunnel.
Checking Tunnel Error Information
If an RSVP-TE tunnel interface is Down, run display commands to view information about faults.
Context
The following types of fault information can be displayed:
- CSPF computation failures
- Errors that occurred when RSVP signaling was triggered
- Errors carried in received RSVP PathErr messages
Deleting RSVP-TE Statistics
Resetting the RSVP Process
Resetting the RSVP process triggers a node to re-establish all RSVP CR-LSPs; it can also be used to verify the RSVP process.
Deleting an Automatic Bypass Tunnel and Re-establishing a New One
If MPLS TE Auto FRR is enabled, a command is used to instruct a node to tear down an automatic bypass tunnel and reestablish a new one.
Loopback Detection for a Specified Static Bidirectional Co-Routed CR-LSP
Loopback detection for a specified static bidirectional co-routed CR-LSP locates faults if a few packets are dropped or bit errors occur on links along the CR-LSP.
Context
On a network with a static bidirectional co-routed CR-LSP used to transmit services, if a few packets are dropped or bit errors occur on links, no alarms indicating link or LSP failures are generated, which poses difficulties in locating the faults. To locate the faults, loopback detection can be enabled for the static bidirectional co-routed CR-LSP.
Procedure
- (Optional) In the MPLS view, run lsp-loopback autoclear period period-value
The timeout period is set, after which loopback detection for a static bidirectional co-routed LSP is automatically disabled.
- In the specified static bidirectional LSP transit view, run lsp-loopback start.
Loopback detection is enabled for the specified static bidirectional co-routed CR-LSP.
Loopback detection enables a transit node on the CR-LSP to loop traffic back to the ingress. A professional monitoring device connected to the ingress monitors the data packets that the ingress sends and receives and checks whether a fault occurs on the link between the ingress and the transit node. Figure 1-2237 illustrates the network on which loopback is enabled to monitor a static bidirectional co-routed CR-LSP. During loopback detection, a loop occurs, which adversely affects service transmission. After loopback detection is complete, immediately run the lsp-loopback stop command to disable loopback detection. If you do not manually disable loopback detection, it is automatically disabled after the specified timeout period elapses.
- Perform one of the following operations to check the loopback status on a transit node:
- Run the display mpls te bidirectional command.
- View the MPLS_LSPM_1.3.6.1.4.1.2011.5.25.121.2.1.75 hwMplsLspLoopBack alarm that is generated after loopback detection is started.
- View the MPLS_LSPM_1.3.6.1.4.1.2011.5.25.121.2.1.76 hwMplsLspLoopBackClear alarm that is generated after loopback detection is stopped.
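A minimal sketch that combines these steps on a transit node follows. The node name LSRB, the LSP name lsp1, and the timeout value 30 are illustrative assumptions; check the unit and range of period-value in the command reference for your version.
# Set the auto-clear timeout in the MPLS view (value assumed for illustration).
[~LSRB] mpls
[~LSRB-mpls] lsp-loopback autoclear period 30
[*LSRB-mpls] quit
# Start loopback detection in the static bidirectional LSP transit view.
[*LSRB] bidirectional static-cr-lsp transit lsp1
[*LSRB-bi-static-transit-lsp1] lsp-loopback start
[*LSRB-bi-static-transit-lsp1] commit
# Check the loopback status, then stop detection as soon as the check is complete.
[~LSRB-bi-static-transit-lsp1] display mpls te bidirectional
[~LSRB-bi-static-transit-lsp1] lsp-loopback stop
[*LSRB-bi-static-transit-lsp1] commit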
Enabling the Packet Loss-Free MPLS ECMP Switchback
The packet loss-free MPLS ECMP switchback can be enabled.
Configuration Examples for MPLS TE
This section provides MPLS TE configuration examples.
Example for Establishing a Static MPLS TE Tunnel
This section provides an example for configuring a static MPLS TE tunnel, which involves enabling MPLS TE, configuring the MPLS TE bandwidth, and setting up a static CR-LSP.
Networking Requirements
On the carrier network shown in Figure 1-2238, some devices have low routing and processing performance. The carrier wants to use an MPLS TE tunnel to transmit services. To meet this requirement, two static TE tunnels between LSRA and LSRC can be established to transmit traffic in opposite directions. A static TE tunnel is established manually, without using a dynamic signaling protocol or IGP routes, which consumes few device resources and imposes low requirements on device performance.
Configuration Roadmap
The configuration roadmap is as follows:
Configure an IP address for each interface and a loopback address to be used as an MPLS LSR ID on each node.
Configure the LSR ID and globally enable MPLS and MPLS TE on each node and interface.
Create a tunnel interface on the ingress and specify the IP address of the tunnel, tunnel protocol, destination address, tunnel ID, and the signaling protocol used to establish the tunnel.
- Configure a static CR-LSP associated with the tunnel and specify the following parameters on each type of node:
- Ingress: outgoing label and next-hop address
- Transit node: inbound interface name, next-hop address, and outgoing label
- Egress: incoming label and inbound interface name
The outgoing label of each node is the incoming label of the next node.
When running the static-cr-lsp ingress { tunnel-interface tunnel interface-number | tunnel-name } destination destination-address { nexthop next-hop-address | outgoing-interface interface-type interface-number } * out-label out-label command to configure the ingress of a CR-LSP, note that tunnel-name must be the same as the tunnel name created using the interface tunnel interface-number command. tunnel-name is a case-sensitive character string with no spaces. For example, the name of the tunnel created by using the interface Tunnel 20 command is Tunnel20. In this case, the parameter of the static CR-LSP on the ingress is Tunnel20. This restriction does not apply to transit nodes or egresses.
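A brief illustration of this naming rule follows; the addresses, labels, and tunnel number are assumed values.
# The interface Tunnel 20 command creates an interface whose derived tunnel name is "Tunnel20".
[~LSRA] interface Tunnel 20
[*LSRA-Tunnel20] quit
# When the tunnel-name form is used on the ingress, it must therefore be written as "Tunnel20".
[*LSRA] static-cr-lsp ingress Tunnel20 destination 3.3.3.3 nexthop 2.1.1.2 out-label 20
[*LSRA] commit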
Data Preparation
To complete the configuration, you need the following data:
IP address of each interface
Tunnel interface names, tunnel interface IP addresses, destination addresses, tunnel IDs, and tunnel signaling protocol (CR-Static) on LSRA and LSRC
Next-hop address and outgoing label of the ingress on the static CR-LSP
Inbound interface name, next-hop address, and outgoing label of the transit node on the static CR-LSP
Inbound interface name of the egress on the static CR-LSP
Procedure
- Assign the IP address to each interface and configure a routing protocol.
Assign an IP address and a mask to each interface.
For configuration details, see Configuration Files in this section.
- Configure the basic MPLS functions and enable MPLS TE.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] mpls
[*LSRA-GigabitEthernet0/1/0] mpls te
[*LSRA-GigabitEthernet0/1/0] commit
[~LSRA-GigabitEthernet0/1/0] quit
Repeat this step for LSRB and LSRC. For configuration details, see Configuration Files in this section.
- Configure an MPLS TE tunnel.
# Create the MPLS TE tunnel from LSRA to LSRC on LSRA.
[~LSRA] interface Tunnel 10
[*LSRA-Tunnel10] ip address unnumbered interface loopback 1
[*LSRA-Tunnel10] tunnel-protocol mpls te
[*LSRA-Tunnel10] destination 3.3.3.3
[*LSRA-Tunnel10] mpls te tunnel-id 100
[*LSRA-Tunnel10] mpls te signal-protocol cr-static
[*LSRA-Tunnel10] commit
[~LSRA-Tunnel10] quit
# Create the MPLS TE tunnel from LSRC to LSRA on LSRC.
[~LSRC] interface Tunnel 20
[*LSRC-Tunnel20] ip address unnumbered interface loopback 1
[*LSRC-Tunnel20] tunnel-protocol mpls te
[*LSRC-Tunnel20] destination 1.1.1.1
[*LSRC-Tunnel20] mpls te tunnel-id 200
[*LSRC-Tunnel20] mpls te signal-protocol cr-static
[*LSRC-Tunnel20] commit
[~LSRC-Tunnel20] quit
- Create a static CR-LSP from LSRA to LSRC.
# Configure LSRA as the ingress of the static CR-LSP.
[~LSRA] static-cr-lsp ingress tunnel-interface Tunnel 10 destination 3.3.3.3 nexthop 2.1.1.2 out-label 20
[*LSRA] commit
# Configure LSRB as the transit node of the static CR-LSP.
[~LSRB] static-cr-lsp transit Tunnel10 incoming-interface gigabitethernet 0/1/0 in-label 20 nexthop 3.2.1.2 out-label 30
[*LSRB] commit
# Configure LSRC as the egress of the static CR-LSP.
[~LSRC] static-cr-lsp egress Tunnel10 incoming-interface gigabitethernet 0/1/8 in-label 30
[*LSRC] commit
- Create a static CR-LSP from LSRC to LSRA.
# Configure LSRC as the ingress of the static CR-LSP.
[~LSRC] static-cr-lsp ingress tunnel-interface Tunnel 20 destination 1.1.1.1 nexthop 3.2.1.1 out-label 120
[*LSRC] commit
# Configure LSRB as the transit node of the static CR-LSP.
[~LSRB] static-cr-lsp transit Tunnel20 incoming-interface gigabitethernet 0/1/8 in-label 120 nexthop 2.1.1.1 out-label 130
[*LSRB] commit
# Configure LSRA as the egress of the static CR-LSP.
[~LSRA] static-cr-lsp egress Tunnel20 incoming-interface gigabitethernet 0/1/0 in-label 130
[*LSRA] commit
- Verify the configuration.
After completing the configuration, run the display interface tunnel command on LSRA. The command output shows that the status of the tunnel interface is Up.
Run the display mpls te tunnel command on each LSR to view the establishment status of the MPLS TE tunnel.
[~LSRA] display mpls te tunnel
* means the LSP is detour LSP ------------------------------------------------------------------------------ Ingress LsrId Destination LSPID In/Out Label R Tunnel-name ------------------------------------------------------------------------------ 1.1.1.1 3.3.3.3 1 --/20 I Tunnel10 - - - 130/-- E Tunnel20 ------------------------------------------------------------------------------ R: Role, I: Ingress, T: Transit, E: Egress
[~LSRB] display mpls te tunnel
* means the LSP is detour LSP ------------------------------------------------------------------------------ Ingress LsrId Destination LSPID In/Out Label R Tunnel-name ------------------------------------------------------------------------------ - - - 20/30 T Tunnel10 - - - 120/130 T Tunnel20 ------------------------------------------------------------------------------ R: Role, I: Ingress, T: Transit, E: Egress
[~LSRC] display mpls te tunnel
* means the LSP is detour LSP ------------------------------------------------------------------------------ Ingress LsrId Destination LSPID In/Out Label R Tunnel-name ------------------------------------------------------------------------------ - - - 30/-- E Tunnel10 3.3.3.3 1.1.1.1 1 --/120 I Tunnel20 ------------------------------------------------------------------------------ R: Role, I: Ingress, T: Transit, E: Egress
Run the display mpls lsp or display mpls static-cr-lsp command on each LSR to view the establishment status of the static CR-LSP.
# Display the configuration on LSRA.
[~LSRA] display mpls static-cr-lsp
TOTAL : 2 STATIC CRLSP(S)
UP : 2 STATIC CRLSP(S)
DOWN : 0 STATIC CRLSP(S)
Name FEC I/O Label I/O If Status
Tunnel10 3.3.3.3/32 NULL/20 -/GE0/1/0 Up
Tunnel20 -/- 130/NULL GE0/1/0/- Up
# Display the configuration on LSRB.
[~LSRB] display mpls static-cr-lsp
TOTAL : 2 STATIC CRLSP(S)
UP : 2 STATIC CRLSP(S)
DOWN : 0 STATIC CRLSP(S)
Name FEC I/O Label I/O If Status
Tunnel10 -/- 20/30 GE0/1/0/GE0/1/8 Up
Tunnel20 -/- 120/130 GE0/1/8/GE0/1/0 Up
# Display the configuration on LSRC.
[~LSRC] display mpls static-cr-lsp
TOTAL : 2 STATIC CRLSP(S)
UP : 2 STATIC CRLSP(S)
DOWN : 0 STATIC CRLSP(S)
Name FEC I/O Label I/O If Status
Tunnel20 1.1.1.1/32 NULL/120 -/GE0/1/8 Up
Tunnel10 -/- 30/NULL GE0/1/8/- Up
When the static CR-LSP is used to establish the MPLS TE tunnel, the packets on the transit node and the egress are forwarded directly based on the specified incoming and outgoing labels. Therefore, no FEC information is displayed on LSRB or LSRC.
Configuration Files
LSRA configuration file
# sysname LSRA # mpls lsr-id 1.1.1.1 # mpls mpls te # static-cr-lsp ingress tunnel-interface Tunnel10 destination 3.3.3.3 nexthop 2.1.1.2 out-label 20 # static-cr-lsp egress Tunnel20 incoming-interface GigabitEthernet0/1/0 in-label 130 # interface GigabitEthernet0/1/0 undo shutdown ip address 2.1.1.1 255.255.255.0 mpls mpls te # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 # interface Tunnel10 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.3 mpls te signal-protocol cr-static mpls te tunnel-id 100 # return
LSRB configuration file
# sysname LSRB # mpls lsr-id 2.2.2.2 # mpls mpls te # static-cr-lsp transit Tunnel10 incoming-interface GigabitEthernet0/1/0 in-label 20 nexthop 3.2.1.2 out-label 30 # static-cr-lsp transit Tunnel20 incoming-interface GigabitEthernet0/1/8 in-label 120 nexthop 2.1.1.1 out-label 130 # interface GigabitEthernet0/1/8 undo shutdown ip address 3.2.1.1 255.255.255.0 mpls mpls te # interface GigabitEthernet0/1/0 undo shutdown ip address 2.1.1.2 255.255.255.0 mpls mpls te # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 # return
LSRC configuration file
# sysname LSRC # mpls lsr-id 3.3.3.3 # mpls mpls te # static-cr-lsp ingress tunnel-interface Tunnel20 destination 1.1.1.1 nexthop 3.2.1.1 out-label 120 # static-cr-lsp egress Tunnel10 incoming-interface GigabitEthernet0/1/8 in-label 30 # interface GigabitEthernet0/1/8 undo shutdown ip address 3.2.1.2 255.255.255.0 mpls mpls te # interface GigabitEthernet0/1/0 undo shutdown # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 # interface Tunnel20 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.1 mpls te signal-protocol cr-static mpls te tunnel-id 200 # return
Example for Configuring a Static Bidirectional Co-routed CR-LSP
This section provides an example for configuring a static bidirectional co-routed CR-LSP, including how to enable MPLS TE, configure MPLS TE bandwidth attributes, and configure an MPLS TE tunnel.
Usage Scenario
Static bidirectional co-routed CR-LSPs are used to establish static bidirectional tunnels for services on an MPLS network.
On the network shown in Figure 1-2239, a static bidirectional co-routed CR-LSP originates from LSRA and terminates on LSRC. The static bidirectional co-routed CR-LSP between LSRA and LSRC requires 10 Mbit/s of bandwidth on the links it traverses.
Configuration Roadmap
The configuration roadmap is as follows:
Assign an IP address to each interface and configure a routing protocol.
Configure basic MPLS functions and enable MPLS TE.
Configure MPLS TE attributes for links.
Configure MPLS TE tunnels.
Configure the ingress, a transit node, and the egress for the static bidirectional co-routed CR-LSP.
Bind the tunnel interface configured on LSRC to the static bidirectional co-routed CR-LSP.
Data Preparation
To complete the configuration, you need the following data:
Tunnel interface's name and IP address, destination address, tunnel ID, and static CR-LSP signaling on LSRA and LSRC
Maximum reservable bandwidth and BC bandwidth of each link
Next-hop address and outgoing label on the ingress
Inbound interface, next-hop address, and outgoing label on the transit node
Inbound interface on the egress
Procedure
- Assign an IP address to each interface and configure a routing protocol.
# Assign an IP address and a mask to each interface and configure OSPF so that all LSRs are interconnected.
For configuration details, see Configuration Files in this section.
- Configure basic MPLS functions and enable MPLS TE.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] mpls
[*LSRA-GigabitEthernet0/1/0] mpls te
[*LSRA-GigabitEthernet0/1/0] commit
[~LSRA-GigabitEthernet0/1/0] quit
Repeat this step for LSRB and LSRC.
- Configure MPLS TE attributes for links.
# Configure the maximum reservable bandwidth and BC0 bandwidth for the link on the outbound interface of each LSR. The BC0 bandwidth of links must be greater than the tunnel bandwidth (10 Mbit/s).
# Configure LSRA.
[~LSRA] interface gigabitethernet 0/1/0
[~LSRA-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRA-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*LSRA-GigabitEthernet0/1/0] commit
[~LSRA-GigabitEthernet0/1/0] quit
# Configure LSRB.
[~LSRB] interface gigabitethernet 0/1/0
[~LSRB-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRB-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*LSRB-GigabitEthernet0/1/0] quit
[*LSRB] interface gigabitethernet 0/1/1
[*LSRB-GigabitEthernet0/1/1] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRB-GigabitEthernet0/1/1] mpls te bandwidth bc0 100000
[*LSRB-GigabitEthernet0/1/1] commit
[~LSRB-GigabitEthernet0/1/1] quit
# Configure LSRC.
[~LSRC] interface gigabitethernet 0/1/0
[*LSRC-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRC-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*LSRC-GigabitEthernet0/1/0] commit
[~LSRC-GigabitEthernet0/1/0] quit
- Configure MPLS TE tunnel interfaces.
# Create an MPLS TE tunnel from LSRA to LSRC.
[~LSRA] interface Tunnel 10
[*LSRA-Tunnel10] ip address unnumbered interface loopback 1
[*LSRA-Tunnel10] tunnel-protocol mpls te
[*LSRA-Tunnel10] destination 3.3.3.3
[*LSRA-Tunnel10] mpls te tunnel-id 100
[*LSRA-Tunnel10] mpls te signal-protocol cr-static
[*LSRA-Tunnel10] mpls te bidirectional
[*LSRA-Tunnel10] commit
[~LSRA-Tunnel10] quit
# Create an MPLS TE tunnel from LSRC to LSRA.
[~LSRC] interface Tunnel 20
[*LSRC-Tunnel20] ip address unnumbered interface loopback 1
[*LSRC-Tunnel20] tunnel-protocol mpls te
[*LSRC-Tunnel20] destination 1.1.1.1
[*LSRC-Tunnel20] mpls te tunnel-id 200
[*LSRC-Tunnel20] mpls te signal-protocol cr-static
[*LSRC-Tunnel20] commit
[~LSRC-Tunnel20] quit
- Configure the ingress, a transit node, and the egress for the static bidirectional co-routed CR-LSP.
# Configure LSRA as the ingress.
[~LSRA] bidirectional static-cr-lsp ingress Tunnel 10
[*LSRA-bi-static-ingress-Tunnel10] forward nexthop 2.1.1.2 out-label 20 bandwidth ct0 10000
[*LSRA-bi-static-ingress-Tunnel10] backward in-label 20
[*LSRA-bi-static-ingress-Tunnel10] commit
[~LSRA-bi-static-ingress-Tunnel10] quit
# Configure LSRB as a transit node.
[~LSRB] bidirectional static-cr-lsp transit lsp1
[*LSRB-bi-static-transit-lsp1] forward in-label 20 nexthop 3.2.1.2 out-label 40 bandwidth ct0 10000
[*LSRB-bi-static-transit-lsp1] backward in-label 16 nexthop 2.1.1.1 out-label 20 bandwidth ct0 10000
[*LSRB-bi-static-transit-lsp1] commit
[~LSRB-bi-static-transit-lsp1] quit
# Configure LSRC as the egress.
[~LSRC] bidirectional static-cr-lsp egress Tunnel20
[*LSRC-bi-static-egress-Tunnel20] forward in-label 40 lsrid 1.1.1.1 tunnel-id 100
[*LSRC-bi-static-egress-Tunnel20] backward nexthop 3.2.1.1 out-label 16 bandwidth ct0 10000
[*LSRC-bi-static-egress-Tunnel20] commit
[~LSRC-bi-static-egress-Tunnel20] quit
- Bind the static bidirectional co-routed CR-LSP to the tunnel interface on LSRC.
[~LSRC] interface Tunnel20
[~LSRC-Tunnel20] mpls te passive-tunnel
[*LSRC-Tunnel20] mpls te binding bidirectional static-cr-lsp egress Tunnel20
[*LSRC-Tunnel20] commit
[~LSRC-Tunnel20] quit
- Verify the configuration.
After completing the configuration, run the display interface tunnel command on LSRA. The command output shows that the tunnel interface is Up.
Run the display mpls te tunnel command on each LSR. The command output shows that MPLS TE tunnels have been established.
# Check the configuration on LSRA.
[~LSRA] display mpls te tunnel
* means the LSP is detour LSP ------------------------------------------------------------------------------- Ingress LsrId Destination LSPID In/OutLabel R Tunnel-name ------------------------------------------------------------------------------- 1.1.1.1 3.3.3.3 0 --/20 I Tunnel10 20/-- ------------------------------------------------------------------------------- R: Role, I: Ingress, T: Transit, E: Egress
# Check the configuration on LSRB.
[~LSRB] display mpls te tunnel
* means the LSP is detour LSP ------------------------------------------------------------------------------- Ingress LsrId Destination LSPID In/OutLabel R Tunnel-name ------------------------------------------------------------------------------- - - - 20/40 T lsp1 16/20 ------------------------------------------------------------------------------- R: Role, I: Ingress, T: Transit, E: Egress
# Check the configuration results on LSRC.
[~LSRC] display mpls te tunnel
* means the LSP is detour LSP ------------------------------------------------------------------------------- Ingress LsrId Destination LSPID In/OutLabel R Tunnel-name ------------------------------------------------------------------------------- 1.1.1.1 3.3.3.3 - 40/-- E Tunnel20 --/16 ------------------------------------------------------------------------------- R: Role, I: Ingress, T: Transit, E: Egress
Run the display mpls te bidirectional static-cr-lsp command on each LSR to view information about the static bidirectional co-routed CR-LSP.
# Check the configuration on LSRA.
[~LSRA] display mpls te bidirectional static-cr-lsp TOTAL : 1 STATIC CRLSP(S) UP : 1 STATIC CRLSP(S) DOWN : 0 STATIC CRLSP(S) Name FEC I/O Label I/O If Status Tunnel10 3.3.3.3/32 NULL/20 -/GE0/1/0 20/NULL GE0/1/0/- Up
# Check the configuration on LSRB.
[~LSRB] display mpls te bidirectional static-cr-lsp TOTAL : 1 STATIC CRLSP(S) UP : 1 STATIC CRLSP(S) DOWN : 0 STATIC CRLSP(S) Name FEC I/O Label I/O If Status lsp1 -/32 20/40 GE0/1/0/GE0/1/8 16/20 GE0/1/8/GE0/1/0 Up
# Check the configuration on LSRC.
[~LSRC] display mpls te bidirectional static-cr-lsp TOTAL : 1 STATIC CRLSP(S) UP : 1 STATIC CRLSP(S) DOWN : 0 STATIC CRLSP(S) Name FEC I/O Label I/O If Status Tunnel20 1.1.1.1/32 40/NULL GE0/1/0/- NULL/16 -/GE0/1/0 Up
After completing the configuration, run the ping lsp command on LSRA. The command output shows that the static bidirectional co-routed CR-LSP is reachable.
[~LSRA] ping lsp -a 1.1.1.1 te Tunnel 10
LSP PING FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel10 : 100 data bytes, press CTRL_C to break
Reply from 3.3.3.3: bytes=100 Sequence=1 time = 56 ms
Reply from 3.3.3.3: bytes=100 Sequence=2 time = 53 ms
Reply from 3.3.3.3: bytes=100 Sequence=3 time = 3 ms
Reply from 3.3.3.3: bytes=100 Sequence=4 time = 60 ms
Reply from 3.3.3.3: bytes=100 Sequence=5 time = 5 ms
--- FEC: RSVP IPV4 SESSION QUERY Tunnel10 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 3/35/60 ms
Configuration Files
LSRA configuration file
# sysname LSRA # mpls lsr-id 1.1.1.1 # mpls mpls te # bidirectional static-cr-lsp ingress Tunnel10 forward nexthop 2.1.1.2 out-label 20 bandwidth ct0 10000 backward in-label 20 # interface GigabitEthernet0/1/0 undo shutdown ip address 2.1.1.1 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 # interface Tunnel10 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.3 mpls te signal-protocol cr-static mpls te tunnel-id 100 mpls te bidirectional # ip route-static 2.2.2.2 255.255.255.255 2.1.1.2 ip route-static 3.3.3.3 255.255.255.255 2.1.1.2 # return
LSRB configuration file
# sysname LSRB # mpls lsr-id 2.2.2.2 # mpls mpls te # bidirectional static-cr-lsp transit lsp1 forward in-label 20 nexthop 3.2.1.2 out-label 40 bandwidth ct0 10000 backward in-label 16 nexthop 2.1.1.1 out-label 20 bandwidth ct0 10000 # interface GigabitEthernet0/1/0 undo shutdown ip address 2.1.1.2 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 # interface GigabitEthernet0/1/1 undo shutdown ip address 3.2.1.1 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 # ip route-static 1.1.1.1 255.255.255.255 2.1.1.1 ip route-static 3.3.3.3 255.255.255.255 3.2.1.2 # return
LSRC configuration file
# sysname LSRC # mpls lsr-id 3.3.3.3 # mpls mpls te # bidirectional static-cr-lsp egress Tunnel20 forward in-label 40 lsrid 1.1.1.1 tunnel-id 100 backward nexthop 3.2.1.1 out-label 16 bandwidth ct0 10000 # interface GigabitEthernet0/1/0 undo shutdown ip address 3.2.1.2 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 # interface Tunnel20 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.1 mpls te signal-protocol cr-static mpls te tunnel-id 200 mpls te passive-tunnel mpls te binding bidirectional static-cr-lsp egress Tunnel20 # ip route-static 1.1.1.1 255.255.255.255 3.2.1.1 ip route-static 2.2.2.2 255.255.255.255 3.2.1.1 # return
Example for Configuring an Associated Bidirectional Static CR-LSP
This section provides an example for configuring an associated bidirectional static CR-LSP.
Networking Requirements
In Figure 1-2240, a forward static CR-LSP is established along the path PE1 -> PE2, and a reverse static CR-LSP is established along the path PE2 -> PE1. To allow a traffic switchover to be performed on both CR-LSPs, bind the two static CR-LSPs to each other to form an associated bidirectional static CR-LSP.
Configuration Roadmap
The configuration roadmap is as follows:
Assign an IP address and its mask to every interface and configure a loopback interface address as an LSR ID on every node.
Configure a forward static CR-LSP and a reverse static CR-LSP.
Bind the forward and reverse static CR-LSPs to each other.
Data Preparation
In this example, a forward static CR-LSP is established along the path PE1 -> PE2, and a reverse static CR-LSP is established along the path PE2 -> PE1.
| Device Name | Parameter | Value |
| --- | --- | --- |
| PE1 | Number of a tunnel interface on the forward CR-LSP | Tunnel10 |
| | Tunnel ID of the forward CR-LSP | 100 |
| | Outgoing label of the forward CR-LSP | 20 |
| | Name of the reverse CR-LSP | Tunnel20 |
| | Incoming label of the reverse CR-LSP | 130 |
| P | Name of the forward CR-LSP | Tunnel10 |
| | Incoming label of the forward CR-LSP | 20 |
| | Outgoing label of the forward CR-LSP | 30 |
| | Name of the reverse CR-LSP | Tunnel20 |
| | Incoming label of the reverse CR-LSP | 120 |
| | Outgoing label of the reverse CR-LSP | 130 |
| PE2 | Number of a tunnel interface on the reverse CR-LSP | Tunnel20 |
| | Tunnel ID of the reverse CR-LSP | 200 |
| | Outgoing label of the reverse CR-LSP | 120 |
| | Name of the forward CR-LSP | Tunnel10 |
| | Incoming label of the forward CR-LSP | 30 |
Procedure
- Assign an IP address and a mask to each interface.
Assign IP addresses and masks to interfaces. For configuration details, see Configuration Files in this section.
- Configure a forward static CR-LSP and a reverse static CR-LSP.
For configuration details, see Configuration Files in this section.
- Bind the forward and reverse static CR-LSPs to each other.
# Configure PE1.
[~PE1] interface Tunnel 10
[~PE1-Tunnel10] mpls te reverse-lsp protocol static lsp-name Tunnel20
[*PE1-Tunnel10] commit
# Configure PE2.
[~PE2] interface Tunnel 20
[~PE2-Tunnel20] mpls te reverse-lsp protocol static lsp-name Tunnel10
[*PE2-Tunnel20] commit
- Verify the configuration.
After completing the preceding configurations, run the display mpls te reverse-lsp verbose command on PE1 and PE2 to view reverse static CR-LSP information. The following example uses the command output on PE1.
[~PE1] display mpls te reverse-lsp verbose
------------------------------------------------------------------------------- LSP Information: STATIC LSP ------------------------------------------------------------------------------- Obverse Tunnel : Tunnel10 //Tunnel interface on the forward CR-LSP Reverse LSP Name : Tunnel20 //Name of the reverse CR-LSP Reverse LSP State : Up //Status of the reverse CR-LSP Incoming Label : 130 Incoming Interface : GE0/1/0
Configuration Files
PE1 configuration file
# sysname PE1 # mpls lsr-id 1.1.1.1 # mpls mpls te # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.1 255.255.255.252 mpls mpls te # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 # interface Tunnel10 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.3 mpls te signal-protocol cr-static mpls te reverse-lsp protocol static lsp-name Tunnel20 mpls te tunnel-id 100 # static-cr-lsp ingress tunnel-interface Tunnel10 destination 3.3.3.3 nexthop 10.1.1.2 out-label 20 # static-cr-lsp egress Tunnel20 incoming-interface GigabitEthernet0/1/0 in-label 130 # return
P configuration file
# sysname P # mpls lsr-id 2.2.2.2 # mpls mpls te # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.2 255.255.255.252 mpls mpls te # interface GigabitEthernet0/1/8 undo shutdown ip address 10.2.1.1 255.255.255.252 mpls mpls te # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 # static-cr-lsp transit Tunnel10 incoming-interface GigabitEthernet0/1/0 in-label 20 nexthop 10.2.1.2 out-label 30 # static-cr-lsp transit Tunnel20 incoming-interface GigabitEthernet0/1/8 in-label 120 nexthop 10.1.1.1 out-label 130 # return
PE2 configuration file
# sysname PE2 # mpls lsr-id 3.3.3.3 # mpls mpls te # interface GigabitEthernet0/1/8 undo shutdown ip address 10.2.1.2 255.255.255.252 mpls mpls te # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 # interface Tunnel20 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.1 mpls te signal-protocol cr-static mpls te reverse-lsp protocol static lsp-name Tunnel10 mpls te tunnel-id 200 # static-cr-lsp ingress tunnel-interface Tunnel20 destination 1.1.1.1 nexthop 10.2.1.1 out-label 120 # static-cr-lsp egress Tunnel10 incoming-interface GigabitEthernet0/1/8 in-label 30 # return
Example for Configuring an RSVP-TE Tunnel
Networking Requirements
On the network shown in Figure 1-2241, LSRA, LSRB, LSRC, and LSRD are level-2 routers that run IS-IS.
RSVP-TE is used to establish a TE tunnel with 20 Mbit/s bandwidth between LSRA and LSRD. The maximum reservable bandwidth for every link along the TE tunnel is 100 Mbit/s and the BC0 bandwidth is 100 Mbit/s.
Configuration Roadmap
The configuration roadmap is as follows:
Assign an IP address to each interface, including the loopback interface whose address is to be used as an LSR ID on each involved node.
Enable IS-IS globally; configure the network entity name; set the cost type to enable IS-IS TE; enable IS-IS on involved interfaces, including loopback interfaces.
Set an MPLS LSR ID for every LSR and enable MPLS, MPLS TE, RSVP-TE, and CSPF globally.
Enable MPLS, MPLS TE, and RSVP-TE on every interface.
Configure the maximum reservable link bandwidth and BC bandwidth on the outbound interfaces of each involved tunnel.
Create a tunnel interface on the ingress and configure the tunnel interface IP address, tunnel protocol, destination address, tunnel ID, and tunnel bandwidth.
Data Preparation
To complete the configuration, you need the following data:
IS-IS area ID, originating system ID, and IS-IS level of every LSR
BC bandwidth and maximum reservable bandwidth on every link along the TE tunnel
Tunnel interface number, IP address, destination IP address, tunnel ID, and tunnel bandwidth
Procedure
- Assign an IP address and its mask to every interface.
Assign an IP address and a mask to each interface according to Figure 1-2241. The configuration details are not provided.
- Configure IS-IS.
# Configure LSRA.
[~LSRA] isis 1
[*LSRA-isis-1] network-entity 00.0005.0000.0000.0001.00
[*LSRA-isis-1] is-level level-2
[*LSRA-isis-1] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] isis enable 1
[*LSRA-GigabitEthernet0/1/0] quit
[*LSRA] interface loopback 1
[*LSRA-LoopBack1] isis enable 1
[*LSRA-LoopBack1] commit
[~LSRA-LoopBack1] quit
# Configure LSRB.
[~LSRB] isis 1
[*LSRB-isis-1] network-entity 00.0005.0000.0000.0002.00
[*LSRB-isis-1] is-level level-2
[*LSRB-isis-1] quit
[*LSRB] interface gigabitethernet 0/1/0
[*LSRB-GigabitEthernet0/1/0] isis enable 1
[*LSRB-GigabitEthernet0/1/0] quit
[*LSRB] interface gigabitethernet 0/1/8
[*LSRB-GigabitEthernet0/1/8] isis enable 1
[*LSRB-GigabitEthernet0/1/8] quit
[*LSRB] interface loopback 1
[*LSRB-LoopBack1] isis enable 1
[*LSRB-LoopBack1] commit
[~LSRB-LoopBack1] quit
# Configure LSRC.
[~LSRC] isis 1
[*LSRC-isis-1] network-entity 00.0005.0000.0000.0003.00
[*LSRC-isis-1] is-level level-2
[*LSRC-isis-1] quit
[*LSRC] interface gigabitethernet 0/1/0
[*LSRC-GigabitEthernet0/1/0] isis enable 1
[*LSRC-GigabitEthernet0/1/0] quit
[*LSRC] interface gigabitethernet 0/1/8
[*LSRC-GigabitEthernet0/1/8] isis enable 1
[*LSRC-GigabitEthernet0/1/8] quit
[*LSRC] interface loopback 1
[*LSRC-LoopBack1] isis enable 1
[*LSRC-LoopBack1] commit
[~LSRC-LoopBack1] quit
# Configure LSRD.
[~LSRD] isis 1
[*LSRD-isis-1] network-entity 00.0005.0000.0000.0004.00
[*LSRD-isis-1] is-level level-2
[*LSRD-isis-1] quit
[*LSRD] interface gigabitethernet 0/1/0
[*LSRD-GigabitEthernet0/1/0] isis enable 1
[*LSRD-GigabitEthernet0/1/0] quit
[*LSRD] interface loopback 1
[*LSRD-LoopBack1] isis enable 1
[*LSRD-LoopBack1] commit
[~LSRD-LoopBack1] quit
After completing the configurations, run the display ip routing-table command on each node. All nodes have learned routes from one another. The following example uses the command output on LSRA.
[~LSRA] display ip routing-table
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route ------------------------------------------------------------------------------ Routing Table : _public_ Destinations : 13 Routes : 13 Destination/Mask Proto Pre Cost Flags NextHop Interface 1.1.1.9/32 Direct 0 0 D 127.0.0.1 LoopBack1 2.2.2.9/32 ISIS-L2 15 10 D 10.1.1.2 GigabitEthernet0/1/0 3.3.3.9/32 ISIS-L2 15 20 D 10.1.1.2 GigabitEthernet0/1/0 4.4.4.9/32 ISIS-L2 15 30 D 10.1.1.2 GigabitEthernet0/1/0 10.1.1.0/24 Direct 0 0 D 10.1.1.1 GigabitEthernet0/1/0 10.1.1.1/32 Direct 0 0 D 127.0.0.1 GigabitEthernet0/1/0 10.1.1.255/32 Direct 0 0 D 127.0.0.1 GigabitEthernet0/1/0 10.2.1.0/24 ISIS-L2 15 20 D 10.1.1.2 GigabitEthernet0/1/0 10.3.1.0/24 ISIS-L2 15 30 D 10.1.1.2 GigabitEthernet0/1/0 127.0.0.0/8 Direct 0 0 D 127.0.0.1 InLoopBack0 127.0.0.1/32 Direct 0 0 D 127.0.0.1 InLoopBack0 127.255.255.255/32 Direct 0 0 D 127.0.0.1 InLoopBack0 255.255.255.255/32 Direct 0 0 D 127.0.0.1 InLoopBack0
- Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Enable MPLS, MPLS TE, and RSVP-TE globally on each node and on all interfaces along the tunnel, and enable CSPF on the ingress.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.9
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] mpls
[*LSRA-GigabitEthernet0/1/0] mpls te
[*LSRA-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRA-GigabitEthernet0/1/0] commit
[~LSRA-GigabitEthernet0/1/0] quit
# Configure LSRB.
[~LSRB] mpls lsr-id 2.2.2.9
[*LSRB] mpls
[*LSRB-mpls] mpls te
[*LSRB-mpls] mpls rsvp-te
[*LSRB-mpls] quit
[*LSRB] interface gigabitethernet 0/1/0
[*LSRB-GigabitEthernet0/1/0] mpls
[*LSRB-GigabitEthernet0/1/0] mpls te
[*LSRB-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRB-GigabitEthernet0/1/0] quit
[*LSRB] interface gigabitethernet 0/1/8
[*LSRB-GigabitEthernet0/1/8] mpls
[*LSRB-GigabitEthernet0/1/8] mpls te
[*LSRB-GigabitEthernet0/1/8] mpls rsvp-te
[*LSRB-GigabitEthernet0/1/8] commit
[~LSRB-GigabitEthernet0/1/8] quit
# Configure LSRC.
[~LSRC] mpls lsr-id 3.3.3.9
[*LSRC] mpls
[*LSRC-mpls] mpls te
[*LSRC-mpls] mpls rsvp-te
[*LSRC-mpls] quit
[*LSRC] interface gigabitethernet 0/1/0
[*LSRC-GigabitEthernet0/1/0] mpls
[*LSRC-GigabitEthernet0/1/0] mpls te
[*LSRC-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRC-GigabitEthernet0/1/0] quit
[*LSRC] interface gigabitethernet 0/1/8
[*LSRC-GigabitEthernet0/1/8] mpls
[*LSRC-GigabitEthernet0/1/8] mpls te
[*LSRC-GigabitEthernet0/1/8] mpls rsvp-te
[*LSRC-GigabitEthernet0/1/8] commit
[~LSRC-GigabitEthernet0/1/8] quit
# Configure LSRD.
[~LSRD] mpls lsr-id 4.4.4.9
[*LSRD] mpls
[*LSRD-mpls] mpls te
[*LSRD-mpls] mpls rsvp-te
[*LSRD-mpls] quit
[*LSRD] interface gigabitethernet 0/1/0
[*LSRD-GigabitEthernet0/1/0] mpls
[*LSRD-GigabitEthernet0/1/0] mpls te
[*LSRD-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRD-GigabitEthernet0/1/0] commit
[~LSRD-GigabitEthernet0/1/0] quit
- Configure IS-IS TE.
# Configure LSRA.
[~LSRA] isis 1
[~LSRA-isis-1] cost-style wide
[*LSRA-isis-1] traffic-eng level-2
[*LSRA-isis-1] commit
[~LSRA-isis-1] quit
# Configure LSRB.
[~LSRB] isis 1
[~LSRB-isis-1] cost-style wide
[*LSRB-isis-1] traffic-eng level-2
[*LSRB-isis-1] commit
[~LSRB-isis-1] quit
# Configure LSRC.
[~LSRC] isis 1
[~LSRC-isis-1] cost-style wide
[*LSRC-isis-1] traffic-eng level-2
[*LSRC-isis-1] commit
[~LSRC-isis-1] quit
# Configure LSRD.
[~LSRD] isis 1
[~LSRD-isis-1] cost-style wide
[*LSRD-isis-1] traffic-eng level-2
[*LSRD-isis-1] commit
[~LSRD-isis-1] quit
- Set MPLS TE bandwidth attributes for links.
# Set the maximum reservable bandwidth and BC0 bandwidth for a link on every interface along the TE tunnel.
# Configure LSRA.
[~LSRA] interface gigabitethernet 0/1/0
[~LSRA-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRA-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*LSRA-GigabitEthernet0/1/0] commit
[~LSRA-GigabitEthernet0/1/0] quit
# Configure LSRB.
[~LSRB] interface gigabitethernet 0/1/8
[~LSRB-GigabitEthernet0/1/8] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRB-GigabitEthernet0/1/8] mpls te bandwidth bc0 100000
[*LSRB-GigabitEthernet0/1/8] commit
[~LSRB-GigabitEthernet0/1/8] quit
# Configure LSRC.
[~LSRC] interface gigabitethernet 0/1/0
[~LSRC-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRC-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*LSRC-GigabitEthernet0/1/0] commit
[~LSRC-GigabitEthernet0/1/0] quit
- Configure an MPLS TE tunnel interface.
# Create a tunnel interface on the ingress; configure the source and destination IP addresses for the tunnel, tunnel protocol, tunnel ID, and RSVP-TE signaling protocol; run the commit command to make the configurations take effect.
# Configure LSRA.
[~LSRA] interface tunnel1
[*LSRA-Tunnel1] ip address unnumbered interface loopback 1
[*LSRA-Tunnel1] tunnel-protocol mpls te
[*LSRA-Tunnel1] destination 4.4.4.9
[*LSRA-Tunnel1] mpls te tunnel-id 1
[*LSRA-Tunnel1] mpls te bandwidth ct0 20000
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit
- Verify the configuration.
After completing the configuration, run the display interface tunnel command on LSRA. The tunnel interface is Up.
[~LSRA] display interface tunnel 1
Tunnel1 current state : Up (ifindex: 29) Line protocol current state : Up Last line protocol up time : 2012-11-30 05:58:08 Description: Route Port,The Maximum Transmit Unit is 1500, Current BW: 20Mbps Internet Address is unnumbered, using address of LoopBack1(1.1.1.9/32) Encapsulation is TUNNEL, loopback not set Tunnel destination 4.4.4.9 Tunnel up/down statistics 1 Tunnel protocol/transport MPLS/MPLS, ILM is available, primary tunnel id is 0x61, secondary tunnel id is 0x0 Current system time: 2012-11-30 05:58:10 300 seconds output rate 0 bits/sec, 0 packets/sec 0 seconds output rate 0 bits/sec, 0 packets/sec 126 packets output, 34204 bytes 0 output error 18 output drop Last 300 seconds input utility rate: 0.00% Last 300 seconds output utility rate: 0.00%
Run the display mpls te tunnel-interface command on LSRA. Detailed information about the tunnel interface is displayed.
[~LSRA] display mpls te tunnel-interface tunnel1
Tunnel Name : Tunnel1 Signalled Tunnel Name: - Tunnel State Desc : CR-LSP is Up Tunnel Attributes : Active LSP : Primary LSP Traffic Switch : - Session ID : 1 Ingress LSR ID : 1.1.1.9 Egress LSR ID: 4.4.4.9 Admin State : UP Oper State : UP Signaling Protocol : RSVP FTid : 1 Tie-Breaking Policy : None Metric Type : None Bfd Cap : None Reopt : Disabled Reopt Freq : - Inter-area Reopt : Disabled Auto BW : Disabled Threshold : 0 percent Current Collected BW: 0 kbps Auto BW Freq : 0 Min BW : 0 kbps Max BW : 0 kbps Offload : Disabled Offload Freq : - Low Value : - High Value : - Readjust Value : - Offload Explicit Path Name: Tunnel Group : - Interfaces Protected: - Excluded IP Address : - Referred LSP Count : 0 Primary Tunnel : - Pri Tunn Sum : - Backup Tunnel : - Group Status : Up Oam Status : - IPTN InLabel : - Tunnel BFD Status : - BackUp LSP Type : None BestEffort : Enabled Secondary HopLimit : - BestEffort HopLimit : - Secondary Explicit Path Name: - Secondary Affinity Prop/Mask: 0x0/0x0 BestEffort Affinity Prop/Mask: 0x0/0x0 IsConfigLspConstraint: - Hot-Standby Revertive Mode: Revertive Hot-Standby Overlap-path: Disabled Hot-Standby Switch State: CLEAR Bit Error Detection: Disabled Bit Error Detection Switch Threshold: - Bit Error Detection Resume Threshold: - Ip-Prefix Name : - P2p-Template Name : - PCE Delegate : No LSP Control Status : Local control Path Verification : -- Entropy Label : None Auto BW Remain Time : 200 s Reopt Remain Time : 100 s Metric Inherit IGP : None Binding Sid : - Reverse Binding Sid : - Self-Ping : Disable Self-Ping Duration : 1800 sec FRR Attr Source : - Is FRR degrade down : No Primary LSP ID : 1.1.1.9:19 LSP State : UP LSP Type : Primary Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : main Hop Limit: - Record Route : Disabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Enabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : CSPF Create Modify LSP Reason: - Self-Ping Status : -
Run the display mpls te cspf tedb all command on LSRA. Link information in the TEDB is displayed.
[~LSRA] display mpls te cspf tedb all
Current Total Node Number: 4 Current Total Link Number: 6 Current Total SRLG Number: 0 Id Router-Id IGP Process-Id Area Link-Count 1 1.1.1.9 ISIS 1 Level-2 1 2 2.2.2.9 ISIS 1 Level-2 2 3 3.3.3.9 ISIS 1 Level-2 2 4 4.4.4.9 ISIS 1 Level-2 1
Configuration Files
LSRA configuration file
# sysname LSRA # mpls lsr-id 1.1.1.9 # mpls mpls te mpls te cspf mpls rsvp-te # isis 1 is-level level-2 cost-style wide traffic-eng level-2 network-entity 00.0005.0000.0000.0001.00 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.1 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 isis enable 1 mpls rsvp-te # interface LoopBack1 ip address 1.1.1.9 255.255.255.255 isis enable 1 # interface Tunnel1 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 4.4.4.9 mpls te bandwidth ct0 20000 mpls te tunnel-id 1 # return
LSRB configuration file
# sysname LSRB # mpls lsr-id 2.2.2.9 # mpls mpls te mpls rsvp-te # isis 1 is-level level-2 cost-style wide traffic-eng level-2 network-entity 00.0005.0000.0000.0002.00 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.2 255.255.255.0 mpls mpls te isis enable 1 mpls rsvp-te # interface GigabitEthernet0/1/8 undo shutdown ip address 10.2.1.1 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 isis enable 1 mpls rsvp-te # interface LoopBack1 ip address 2.2.2.9 255.255.255.255 isis enable 1 # return
LSRC configuration file
#
sysname LSRC # mpls lsr-id 3.3.3.9 # mpls mpls te mpls rsvp-te # isis 1 is-level level-2 cost-style wide traffic-eng level-2 network-entity 00.0005.0000.0000.0003.00 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.3.1.1 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 isis enable 1 mpls rsvp-te # interface GigabitEthernet0/1/8 undo shutdown ip address 10.2.1.2 255.255.255.0 mpls mpls te isis enable 1 mpls rsvp-te # interface LoopBack1 ip address 3.3.3.9 255.255.255.255 isis enable 1 # return
LSRD configuration file
# sysname LSRD # mpls lsr-id 4.4.4.9 # mpls mpls te mpls rsvp-te # isis 1 is-level level-2 cost-style wide traffic-eng level-2 network-entity 00.0005.0000.0000.0004.00 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.3.1.2 255.255.255.0 mpls mpls te isis enable 1 mpls rsvp-te # interface LoopBack1 ip address 4.4.4.9 255.255.255.255 isis enable 1 # return
Example for Configuring an RSVP-TE over GRE Tunnel
Networking Requirements
On the network shown in Figure 1-2242, OSPF runs on DeviceB, DeviceC, and DeviceD, and a GRE tunnel is established between DeviceB and DeviceD. A TE tunnel with 10 Mbit/s bandwidth is required between DeviceA and DeviceE. The maximum reservable bandwidth of each link along the tunnel is 10 Mbit/s, as is the BC0 bandwidth.
Precautions
In this example, an RSVP-TE over GRE tunnel is configured, and the GRE tunnel interfaces cannot borrow the IP addresses of other interfaces. During configuration, you can enable an IGP on GRE tunnel interfaces and configure MPLS link attributes.
Configuration Roadmap
The configuration roadmap is as follows:
Assign an IP address to each interface, including the loopback interface whose address is to be used as an LSR ID on each involved node.
- Configure OSPF on DeviceB, DeviceC, and DeviceD, and establish a GRE tunnel between DeviceB and DeviceD.
Enable IS-IS globally, configure a network entity title (NET), specify the cost type, and enable IS-IS TE. Enable IS-IS on each interface (including loopback interfaces and GRE tunnel interfaces).
Configure MPLS LSR-IDs, and enable MPLS, MPLS TE, MPLS RSVP-TE, and MPLS CSPF globally.
Enable MPLS, MPLS TE, and MPLS RSVP-TE on each interface.
Configure the maximum reservable link bandwidth and BC bandwidth on the outbound interfaces of each involved tunnel.
Create a tunnel interface on the ingress and configure an IP address, tunnel protocol, destination IP address, and tunnel bandwidth.
Data Preparation
To complete the configuration, you need the following data:
IS-IS area ID, originating system ID, and IS-IS level of each node
Maximum bandwidth and maximum reservable bandwidth for each link along the tunnel
Tunnel interface number, IP address, destination IP address, tunnel ID, and tunnel bandwidth
Procedure
- Configure an IP address for each interface.
Assign an IP address and a mask to each interface according to Figure 1-2242. The configuration details are not provided.
- Establish a GRE tunnel between DeviceB and DeviceD.
# Configure DeviceB.
[~DeviceB] ospf 1
[*DeviceB-ospf-1] area 0.0.0.0
[*DeviceB-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
[*DeviceB-ospf-1-area-0.0.0.0] network 172.16.1.0 0.0.0.255
[*DeviceB-ospf-1-area-0.0.0.0] quit
[*DeviceB-ospf-1] quit
[*DeviceB] interface LoopBack1
[*DeviceB-LoopBack1] binding tunnel gre
[*DeviceB-LoopBack1] quit
[*DeviceB] interface Tunnel10
[*DeviceB-Tunnel10] ip address 10.2.1.1 255.255.255.252
[*DeviceB-Tunnel10] tunnel-protocol gre
[*DeviceB-Tunnel10] source 2.2.2.9
[*DeviceB-Tunnel10] destination 3.3.3.9
[*DeviceB-Tunnel10] quit
[*DeviceB] commit
# Configure DeviceC.
[~DeviceC] ospf 1
[*DeviceC-ospf-1] area 0.0.0.0
[*DeviceC-ospf-1-area-0.0.0.0] network 172.16.1.0 0.0.0.255
[*DeviceC-ospf-1-area-0.0.0.0] network 172.16.2.0 0.0.0.255
[*DeviceC-ospf-1-area-0.0.0.0] quit
[*DeviceC-ospf-1] quit
[*DeviceC] commit
# Configure DeviceD.
[~DeviceD] ospf 1
[*DeviceD-ospf-1] area 0.0.0.0
[*DeviceD-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[*DeviceD-ospf-1-area-0.0.0.0] network 172.16.2.0 0.0.0.255
[*DeviceD-ospf-1-area-0.0.0.0] quit
[*DeviceD-ospf-1] quit
[*DeviceD] interface LoopBack1
[*DeviceD-LoopBack1] binding tunnel gre
[*DeviceD-LoopBack1] quit
[*DeviceD] interface Tunnel10
[*DeviceD-Tunnel10] ip address 10.2.1.2 255.255.255.252
[*DeviceD-Tunnel10] tunnel-protocol gre
[*DeviceD-Tunnel10] source 3.3.3.9
[*DeviceD-Tunnel10] destination 2.2.2.9
[*DeviceD-Tunnel10] quit
[*DeviceD] commit
After completing the configuration, run the display interface tunnel command. The command output shows that the tunnel interface is in the Up state. The following example uses the command output on DeviceB.
[~DeviceB] display interface tunnel 10 Tunnel10 current state : UP (ifindex: 30) Line protocol current state : UP Last line protocol up time : 2021-05-12 03:38:08 Description: Route Port,The Maximum Transmit Unit is 1500 Internet Address is 10.2.1.1/30 Encapsulation is TUNNEL, loopback not set Tunnel source 2.2.2.9 (LoopBack1), destination 3.3.3.9 Tunnel protocol/transport GRE/IP, key disabled keepalive disabled Checksumming of packets disabled Current system time: 2021-05-12 06:29:08 300 seconds input rate 0 bits/sec, 0 packets/sec 300 seconds output rate 0 bits/sec, 0 packets/sec 0 seconds input rate 0 bits/sec, 0 packets/sec 0 seconds output rate 0 bits/sec, 0 packets/sec 1834 packets input, 212950 bytes 0 input error 1837 packets output, 218381 bytes 0 output error Input: Unicast: 1834 packets, Multicast: 0 packets Output: Unicast: 1837 packets, Multicast: 0 packets Input bandwidth utilization : 0% Output bandwidth utilization : 0%
Run the display tunnel-info all command to check information about all tunnels. The following example uses the command output on DeviceB.
[~DeviceB] display tunnel-info all Tunnel ID Type Destination Status ---------------------------------------------------------------------------------------- 0x00000000050000001e gre 3.3.3.9 UP
- Configure IS-IS to advertise routes.
Note that IS-IS must also be enabled on the GRE tunnel interfaces.
# Configure DeviceA.
[~DeviceA] isis 1
[*DeviceA-isis-1] network-entity 00.0005.0000.0000.0001.00
[*DeviceA-isis-1] is-level level-2
[*DeviceA-isis-1] quit
[*DeviceA] interface gigabitethernet 0/1/0
[*DeviceA-GigabitEthernet0/1/0] isis enable 1
[*DeviceA-GigabitEthernet0/1/0] quit
[*DeviceA] interface loopback 1
[*DeviceA-LoopBack1] isis enable 1
[*DeviceA-LoopBack1] commit
[~DeviceA-LoopBack1] quit
# Configure DeviceB.
[~DeviceB] isis 1
[*DeviceB-isis-1] network-entity 00.0005.0000.0000.0002.00
[*DeviceB-isis-1] is-level level-2
[*DeviceB-isis-1] quit
[*DeviceB] interface Tunnel10
[*DeviceB-Tunnel10] isis enable 1
[*DeviceB-Tunnel10] quit
[*DeviceB] interface gigabitethernet 0/1/8
[*DeviceB-GigabitEthernet0/1/8] isis enable 1
[*DeviceB-GigabitEthernet0/1/8] quit
[*DeviceB] interface loopback 1
[*DeviceB-LoopBack1] isis enable 1
[*DeviceB-LoopBack1] commit
[~DeviceB-LoopBack1] quit
# Configure DeviceD.
[~DeviceD] isis 1
[*DeviceD-isis-1] network-entity 00.0005.0000.0000.0003.00
[*DeviceD-isis-1] is-level level-2
[*DeviceD-isis-1] quit
[*DeviceD] interface Tunnel10
[*DeviceD-Tunnel10] isis enable 1
[*DeviceD-Tunnel10] quit
[*DeviceD] interface gigabitethernet 0/1/8
[*DeviceD-GigabitEthernet0/1/8] isis enable 1
[*DeviceD-GigabitEthernet0/1/8] quit
[*DeviceD] interface loopback 1
[*DeviceD-LoopBack1] isis enable 1
[*DeviceD-LoopBack1] commit
[~DeviceD-LoopBack1] quit
# Configure DeviceE.
[~DeviceE] isis 1
[*DeviceE-isis-1] network-entity 00.0005.0000.0000.0004.00
[*DeviceE-isis-1] is-level level-2
[*DeviceE-isis-1] quit
[*DeviceE] interface gigabitethernet 0/1/0
[*DeviceE-GigabitEthernet0/1/0] isis enable 1
[*DeviceE-GigabitEthernet0/1/0] quit
[*DeviceE] interface loopback 1
[*DeviceE-LoopBack1] isis enable 1
[*DeviceE-LoopBack1] commit
[~DeviceE-LoopBack1] quit
After completing the configuration, run the display ip routing-table command on each node. The command output shows that all the nodes have learned routes from each other. The following example uses the command output on DeviceA.
[~DeviceA] display ip routing-table Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route ------------------------------------------------------------------------------ Routing Table : _public_ Destinations : 13 Routes : 13 Destination/Mask Proto Pre Cost Flags NextHop Interface 1.1.1.9/32 Direct 0 0 D 127.0.0.1 LoopBack1 2.2.2.9/32 ISIS-L2 15 10 D 10.1.1.1 GigabitEthernet0/1/0 3.3.3.9/32 ISIS-L2 15 20 D 10.1.1.1 GigabitEthernet0/1/0 4.4.4.9/32 ISIS-L2 15 30 D 10.1.1.1 GigabitEthernet0/1/0 10.1.1.0/24 Direct 0 0 D 10.1.1.2 GigabitEthernet0/1/0 10.1.1.2/32 Direct 0 0 D 127.0.0.1 GigabitEthernet0/1/0 10.1.1.255/32 Direct 0 0 D 127.0.0.1 GigabitEthernet0/1/0 10.2.1.0/30 ISIS-L2 15 20 D 10.1.1.1 GigabitEthernet0/1/0 10.3.1.0/24 ISIS-L2 15 30 D 10.1.1.1 GigabitEthernet0/1/0 127.0.0.0/8 Direct 0 0 D 127.0.0.1 InLoopBack0 127.0.0.1/32 Direct 0 0 D 127.0.0.1 InLoopBack0 127.255.255.255/32 Direct 0 0 D 127.0.0.1 InLoopBack0 255.255.255.255/32 Direct 0 0 D 127.0.0.1 InLoopBack0
- Configure basic MPLS functions, and enable MPLS TE, RSVP-TE, and CSPF.
Enable MPLS, MPLS TE, and RSVP-TE globally on each node and on all interfaces along the tunnel, and enable CSPF on the ingress. Note that you also need to perform the related configurations on the GRE tunnel interfaces.
# Configure DeviceA.
[~DeviceA] mpls lsr-id 1.1.1.9
[*DeviceA] mpls
[*DeviceA-mpls] mpls te
[*DeviceA-mpls] mpls rsvp-te
[*DeviceA-mpls] mpls te cspf
[*DeviceA-mpls] quit
[*DeviceA] interface gigabitethernet 0/1/0
[*DeviceA-GigabitEthernet0/1/0] mpls
[*DeviceA-GigabitEthernet0/1/0] mpls te
[*DeviceA-GigabitEthernet0/1/0] mpls rsvp-te
[*DeviceA-GigabitEthernet0/1/0] commit
[~DeviceA-GigabitEthernet0/1/0] quit
# Configure DeviceB.
[~DeviceB] mpls lsr-id 2.2.2.9
[*DeviceB] mpls
[*DeviceB-mpls] mpls te
[*DeviceB-mpls] mpls rsvp-te
[*DeviceB-mpls] quit
[*DeviceB] interface Tunnel10
[*DeviceB-Tunnel10] mpls
[*DeviceB-Tunnel10] mpls te
[*DeviceB-Tunnel10] mpls rsvp-te
[*DeviceB-Tunnel10] quit
[*DeviceB] interface gigabitethernet 0/1/8
[*DeviceB-GigabitEthernet0/1/8] mpls
[*DeviceB-GigabitEthernet0/1/8] mpls te
[*DeviceB-GigabitEthernet0/1/8] mpls rsvp-te
[*DeviceB-GigabitEthernet0/1/8] commit
[~DeviceB-GigabitEthernet0/1/8] quit
# Configure DeviceD.
[~DeviceD] mpls lsr-id 3.3.3.9
[*DeviceD] mpls
[*DeviceD-mpls] mpls te
[*DeviceD-mpls] mpls rsvp-te
[*DeviceD-mpls] quit
[*DeviceD] interface Tunnel10
[*DeviceD-Tunnel10] mpls
[*DeviceD-Tunnel10] mpls te
[*DeviceD-Tunnel10] mpls rsvp-te
[*DeviceD-Tunnel10] quit
[*DeviceD] interface gigabitethernet 0/1/8
[*DeviceD-GigabitEthernet0/1/8] mpls
[*DeviceD-GigabitEthernet0/1/8] mpls te
[*DeviceD-GigabitEthernet0/1/8] mpls rsvp-te
[*DeviceD-GigabitEthernet0/1/8] commit
[~DeviceD-GigabitEthernet0/1/8] quit
# Configure DeviceE.
[~DeviceE] mpls lsr-id 4.4.4.9
[*DeviceE] mpls
[*DeviceE-mpls] mpls te
[*DeviceE-mpls] mpls rsvp-te
[*DeviceE-mpls] mpls te cspf
[*DeviceE-mpls] quit
[*DeviceE] interface gigabitethernet 0/1/0
[*DeviceE-GigabitEthernet0/1/0] mpls
[*DeviceE-GigabitEthernet0/1/0] mpls te
[*DeviceE-GigabitEthernet0/1/0] mpls rsvp-te
[*DeviceE-GigabitEthernet0/1/0] commit
[~DeviceE-GigabitEthernet0/1/0] quit
- Configure IS-IS TE.
# Configure DeviceA.
[~DeviceA] isis 1
[~DeviceA-isis-1] cost-style wide
[*DeviceA-isis-1] traffic-eng level-2
[*DeviceA-isis-1] commit
[~DeviceA-isis-1] quit
# Configure DeviceB.
[~DeviceB] isis 1
[~DeviceB-isis-1] cost-style wide
[*DeviceB-isis-1] traffic-eng level-2
[*DeviceB-isis-1] commit
[~DeviceB-isis-1] quit
# Configure DeviceD.
[~DeviceD] isis 1
[~DeviceD-isis-1] cost-style wide
[*DeviceD-isis-1] traffic-eng level-2
[*DeviceD-isis-1] commit
[~DeviceD-isis-1] quit
# Configure DeviceE.
[~DeviceE] isis 1
[~DeviceE-isis-1] cost-style wide
[*DeviceE-isis-1] traffic-eng level-2
[*DeviceE-isis-1] commit
[~DeviceE-isis-1] quit
- Configure MPLS TE bandwidth attributes for links.
Configure the maximum reservable link bandwidth and BC0 bandwidth on the outbound interfaces along each involved tunnel. Note that you also need to perform the related configurations on the GRE tunnel interfaces.
# Configure DeviceA.
[~DeviceA] interface gigabitethernet 0/1/0
[~DeviceA-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 10000
[*DeviceA-GigabitEthernet0/1/0] mpls te bandwidth bc0 10000
[*DeviceA-GigabitEthernet0/1/0] commit
[~DeviceA-GigabitEthernet0/1/0] quit
# Configure DeviceB.
[~DeviceB] interface Tunnel10
[~DeviceB-Tunnel10] bandwidth 100000
[*DeviceB-Tunnel10] mpls te bandwidth max-reservable-bandwidth 10000
[*DeviceB-Tunnel10] mpls te bandwidth bc0 10000
[*DeviceB-Tunnel10] quit
[*DeviceB] interface gigabitethernet 0/1/8
[*DeviceB-GigabitEthernet0/1/8] mpls te bandwidth max-reservable-bandwidth 10000
[*DeviceB-GigabitEthernet0/1/8] mpls te bandwidth bc0 10000
[*DeviceB-GigabitEthernet0/1/8] commit
[~DeviceB-GigabitEthernet0/1/8] quit
# Configure DeviceD.
[~DeviceD] interface Tunnel10
[~DeviceD-Tunnel10] bandwidth 100000
[*DeviceD-Tunnel10] mpls te bandwidth max-reservable-bandwidth 10000
[*DeviceD-Tunnel10] mpls te bandwidth bc0 10000
[*DeviceD-Tunnel10] quit
[~DeviceD] interface gigabitethernet 0/1/8
[~DeviceD-GigabitEthernet0/1/8] mpls te bandwidth max-reservable-bandwidth 10000
[*DeviceD-GigabitEthernet0/1/8] mpls te bandwidth bc0 10000
[*DeviceD-GigabitEthernet0/1/8] commit
[~DeviceD-GigabitEthernet0/1/8] quit
# Configure DeviceE.
[~DeviceE] interface gigabitethernet 0/1/0
[~DeviceE-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 10000
[*DeviceE-GigabitEthernet0/1/0] mpls te bandwidth bc0 10000
[*DeviceE-GigabitEthernet0/1/0] commit
[~DeviceE-GigabitEthernet0/1/0] quit
- Configure MPLS TE tunnel interfaces.
# Configure DeviceA.
[~DeviceA] interface tunnel1
[*DeviceA-Tunnel1] ip address unnumbered interface loopback 1
[*DeviceA-Tunnel1] tunnel-protocol mpls te
[*DeviceA-Tunnel1] destination 4.4.4.9
[*DeviceA-Tunnel1] mpls te tunnel-id 1
[*DeviceA-Tunnel1] mpls te bandwidth ct0 10000
[*DeviceA-Tunnel1] commit
[~DeviceA-Tunnel1] quit
# Configure DeviceE.
[~DeviceE] interface tunnel1
[*DeviceE-Tunnel1] ip address unnumbered interface loopback 1
[*DeviceE-Tunnel1] tunnel-protocol mpls te
[*DeviceE-Tunnel1] destination 1.1.1.9
[*DeviceE-Tunnel1] mpls te tunnel-id 1
[*DeviceE-Tunnel1] mpls te bandwidth ct0 10000
[*DeviceE-Tunnel1] commit
[~DeviceE-Tunnel1] quit
- Verify the configuration.
After completing the configuration, run the display interface tunnel command. The command output shows that the tunnel interface is in the Up state. The following example uses the command output on DeviceA.
[~DeviceA] display interface tunnel 1 Tunnel1 current state : UP (ifindex: 27) Line protocol current state : UP Last line protocol up time : 2021-05-12 04:36:14 Description: Route Port,The Maximum Transmit Unit is 1500, Current BW: 10Mbps Internet Address is unnumbered, using address of LoopBack1(1.1.1.9/32) Encapsulation is TUNNEL, loopback not set Tunnel destination 4.4.4.9 Tunnel up/down statistics 1 Tunnel ct0 bandwidth is 10000 Kbit/sec Tunnel protocol/transport MPLS/MPLS, ILM is available primary tunnel id is 0x2141, secondary tunnel id is 0x0 Current system time: 2021-05-12 06:38:42 0 seconds output rate 0 bits/sec, 0 packets/sec 0 seconds output rate 0 bits/sec, 0 packets/sec 0 packets output, 0 bytes 0 output error 0 output drop Last 300 seconds input utility rate: 0.00% Last 300 seconds output utility rate: 0.00%
Run the display mpls te tunnel-interface command. Detailed information about the tunnel interface is displayed. The following example uses the command output on DeviceA.
[~DeviceA] display mpls te tunnel-interface tunnel1 Tunnel Name : Tunnel1 Signalled Tunnel Name: - Tunnel State Desc : CR-LSP is Up Tunnel Attributes : Active LSP : Primary LSP Traffic Switch : - Session ID : 1 Ingress LSR ID : 1.1.1.9 Egress LSR ID: 4.4.4.9 Admin State : UP Oper State : UP Signaling Protocol : RSVP FTid : 1 Tie-Breaking Policy : None Metric Type : None Bfd Cap : None Reopt : Disabled Reopt Freq : - Inter-area Reopt : Disabled Auto BW : Disabled Threshold : - Current Collected BW: - Auto BW Freq : - Min BW : - Max BW : - Offload : Disabled Offload Freq : - Low Value : - High Value : - Readjust Value : - Offload Explicit Path Name: - Tunnel Group : Primary Interfaces Protected: - Excluded IP Address : - Referred LSP Count : 0 Primary Tunnel : - Pri Tunn Sum : - Backup Tunnel : - Group Status : Up Oam Status : None IPTN InLabel : - Tunnel BFD Status : - BackUp LSP Type : None BestEffort : Disabled Secondary HopLimit : - BestEffort HopLimit : - Secondary Explicit Path Name: - Secondary Affinity Prop/Mask: 0x0/0x0 BestEffort Affinity Prop/Mask: 0x0/0x0 IsConfigLspConstraint: - Hot-Standby Revertive Mode: Revertive Hot-Standby Overlap-path: Disabled Hot-Standby Switch State: CLEAR Bit Error Detection: Disabled Bit Error Detection Switch Threshold: - Bit Error Detection Resume Threshold: - Ip-Prefix Name : - P2p-Template Name : - PCE Delegate : No LSP Control Status : Local control Path Verification : - Entropy Label : None Associated Tunnel Group ID: - Associated Tunnel Group Type: - Auto BW Remain Time : - Reopt Remain Time : - Metric Inherit IGP : None Binding Sid : - Reverse Binding Sid : - Self-Ping : Disable Self-Ping Duration : 1800 sec FRR Attr Source : - Is FRR degrade down : - Primary LSP ID : 1.1.1.9:232 LSP State : UP LSP Type : Primary Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : - Hop Limit: - Record Route : Disabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Disabled Reroute Flag : Enabled Pce Flag : Normal Path Setup Type : CSPF Create Modify LSP Reason: - Self-Ping Status : -
Configuration Files
DeviceA configuration file
# sysname DeviceA # mpls lsr-id 1.1.1.9 # mpls mpls te mpls te cspf mpls rsvp-te # isis 1 is-level level-2 cost-style wide network-entity 00.0005.0000.0000.0001.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.2 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack1 ip address 1.1.1.9 255.255.255.255 isis enable 1 # interface Tunnel1 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 4.4.4.9 mpls te bandwidth ct0 10000 mpls te tunnel-id 1 # return
DeviceB configuration file
# sysname DeviceB # mpls lsr-id 2.2.2.9 # mpls mpls te mpls rsvp-te # isis 1 is-level level-2 cost-style wide network-entity 00.0005.0000.0000.0002.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 172.16.1.1 255.255.255.0 # interface GigabitEthernet0/1/8 undo shutdown ip address 10.1.1.1 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack1 ip address 2.2.2.9 255.255.255.255 isis enable 1 binding tunnel gre # interface Tunnel10 ip address 10.2.1.1 255.255.255.252 bandwidth 100000 tunnel-protocol gre source 2.2.2.9 destination 3.3.3.9 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # ospf 1 opaque-capability enable area 0.0.0.0 network 2.2.2.9 0.0.0.0 network 172.16.1.0 0.0.0.255 # return
DeviceC configuration file
# sysname DeviceC # interface GigabitEthernet0/1/0 undo shutdown ip address 172.16.1.2 255.255.255.0 # interface GigabitEthernet0/1/8 undo shutdown ip address 172.16.2.1 255.255.255.0 # ospf 1 opaque-capability enable area 0.0.0.0 network 172.16.1.0 0.0.0.255 network 172.16.2.0 0.0.0.255
DeviceD configuration file
# sysname DeviceD # mpls lsr-id 3.3.3.9 # mpls mpls te mpls rsvp-te # isis 1 is-level level-2 cost-style wide network-entity 00.0005.0000.0000.0003.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 172.16.2.2 255.255.255.0 # interface GigabitEthernet0/1/8 undo shutdown ip address 10.3.1.1 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack1 ip address 3.3.3.9 255.255.255.255 isis enable 1 binding tunnel gre # interface Tunnel10 ip address 10.2.1.2 255.255.255.252 bandwidth 100000 tunnel-protocol gre source 3.3.3.9 destination 2.2.2.9 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # ospf 1 opaque-capability enable area 0.0.0.0 network 3.3.3.9 0.0.0.0 network 172.16.2.0 0.0.0.255 # return
- DeviceE configuration file
# sysname DeviceE # mpls lsr-id 4.4.4.9 # mpls mpls te mpls te cspf mpls rsvp-te # isis 1 is-level level-2 cost-style wide network-entity 00.0005.0000.0000.0004.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.3.1.2 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack1 ip address 4.4.4.9 255.255.255.255 isis enable 1 # interface Tunnel1 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.9 mpls te bandwidth ct0 10000 mpls te tunnel-id 1 # return
Example for Configuring RSVP Authentication
Networking Requirements
On the network shown in Figure 1-2243, GE 0/1/0, GE 0/1/8, and GE 0/1/16 on both LSRA and LSRB are added to Eth-Trunk1. An MPLS TE tunnel is established between LSRA and LSRC.
The handshake function, RSVP key authentication, and the message window are configured on LSRA and LSRB. The handshake function allows LSRA and LSRB to perform RSVP key authentication. RSVP key authentication prevents forged packets from being used to request network resources. The message window function prevents RSVP messages from being processed out of sequence.
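The following Python sketch illustrates only the general idea behind key authentication described above: the receiver accepts an RSVP message only if its digest was computed with the shared key, so forged packets cannot be used to reserve resources. This is a conceptual illustration rather than the device's implementation; the actual RSVP INTEGRITY object format and digest algorithm are not shown, and SHA-256 is used here only as an assumption for the sketch.

import hmac
import hashlib

SHARED_KEY = b"YsHsjx_202206"  # the shared key configured on both LSRA and LSRB

def sign(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    # Compute the keyed digest that the sender attaches to an RSVP message.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, digest: bytes, key: bytes = SHARED_KEY) -> bool:
    # Accept the message only if its digest was produced with the shared key.
    return hmac.compare_digest(sign(message, key), digest)

msg = b"PATH message from 10.1.1.1"
assert verify(msg, sign(msg))                       # genuine neighbor: accepted
assert not verify(msg, sign(msg, b"attacker-key"))  # forged digest: discarded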
Configuration Roadmap
The configuration roadmap is as follows:
Configure MPLS and establish an MPLS TE tunnel.
Configure RSVP authentication on interfaces.
Configure the handshake function on interfaces.
Set the size for the message window to allow interfaces to store 32 sequence numbers.
A window size of 32 is recommended. If the window size is too small, RSVP messages received outside the window are discarded, which can terminate RSVP neighbor relationships.
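To see why a small window causes message loss, the minimal Python sketch below models a receive window that tracks the highest accepted sequence number and tolerates reordering only within the last 32 numbers; anything older is treated as out of the window and dropped. This is a simplified model for illustration (duplicate detection and sequence-number wraparound are omitted), not the device's actual algorithm.

class MessageWindow:
    def __init__(self, size: int = 32):
        self.size = size
        self.highest = None  # highest sequence number accepted so far

    def accept(self, seq: int) -> bool:
        # Accept a message whose sequence number falls inside the window.
        if self.highest is None or seq > self.highest:
            self.highest = seq
            return True
        return seq > self.highest - self.size  # older than the window edge: dropped

window = MessageWindow(size=32)
print(window.accept(100))  # True: first message
print(window.accept(105))  # True: newer message advances the window
print(window.accept(98))   # True: reordered, but still within the 32-number window
print(window.accept(60))   # False: too old; a larger window would have kept it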
Data Preparation
To complete the configuration, you need the following data:
OSPF process ID and area ID for every LSR
Password and key for RSVP authentication
RSVP message window size
Procedure
- Assign an IP address and its mask to every interface.
Assign an IP address and its mask to every interface as shown in Figure 1-2243. For configuration details, see Configuration Files in this section.
- Configure OSPF.
Configure OSPF to advertise every network segment route and host route. For configuration details, see Configuration Files in this section.
After completing the configurations, run the display ip routing-table command on every node. All nodes have learned routes from each other.
- Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] quit
[*LSRA] interface eth-trunk 1
[*LSRA-Eth-Trunk1] mpls
[*LSRA-Eth-Trunk1] mpls te
[*LSRA-Eth-Trunk1] mpls rsvp-te
[*LSRA-Eth-Trunk1] commit
[~LSRA-Eth-Trunk1] quit
Repeat this step for LSRB and LSRC. For configuration details, see Configuration Files in this section.
- Configure OSPF TE.
# Configure LSRA.
[~LSRA] ospf 1
[~LSRA-ospf-1] opaque-capability enable
[*LSRA-ospf-1] area 0
[*LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[*LSRA-ospf-1-area-0.0.0.0] commit
[~LSRA-ospf-1-area-0.0.0.0] quit
# Configure LSRB.
[~LSRB] ospf 1
[~LSRB-ospf-1] opaque-capability enable
[*LSRB-ospf-1] area 0
[*LSRB-ospf-1-area-0.0.0.0] mpls-te enable
[*LSRB-ospf-1-area-0.0.0.0] commit
[~LSRB-ospf-1-area-0.0.0.0] quit
# Configure LSRC.
[~LSRC] ospf 1
[~LSRC-ospf-1] opaque-capability enable
[*LSRC-ospf-1] area 0
[*LSRC-ospf-1-area-0.0.0.0] mpls-te enable
[*LSRC-ospf-1-area-0.0.0.0] commit
[~LSRC-ospf-1-area-0.0.0.0] quit
- Configure an MPLS TE tunnel.
# Configure the MPLS TE tunnel on LSRA.
[~LSRA] interface tunnel1
[*LSRA-Tunnel1] ip address unnumbered interface loopback 1
[*LSRA-Tunnel1] tunnel-protocol mpls te
[*LSRA-Tunnel1] destination 3.3.3.3
[*LSRA-Tunnel1] mpls te tunnel-id 1
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit
After completing the configuration, run the display interface tunnel command on LSRA. The tunnel interface is Up.
[~LSRA] display interface tunnel1
Tunnel1 current state : UP (ifindex: 18) Line protocol current state : UP Last line protocol up time : 2012-02-23 10:00:00 Description: Route Port,The Maximum Transmit Unit is 1500, Current BW: 0Mbps Internet Address is unnumbered, using address of LoopBack1(1.1.1.1/32) Encapsulation is TUNNEL, loopback not set Tunnel destination 3.3.3.3 Tunnel up/down statistics 1 Tunnel protocol/transport MPLS/MPLS, ILM is available, primary tunnel id is 0x161, secondary tunnel id is 0x0 Current system time: 2012-02-24 03:33:48 300 seconds output rate 0 bits/sec, 0 packets/sec 0 seconds output rate 0 bits/sec, 0 packets/sec 126 packets output, 34204 bytes 0 output error 18 output drop Last 300 seconds input utility rate: 0.00% Last 300 seconds output utility rate: 0.00%
- Configure RSVP authentication on MPLS TE interfaces of LSRA and LSRB.
# Configure LSRA.
[~LSRA] interface eth-trunk 1
[~LSRA-Eth-Trunk1] mpls rsvp-te authentication cipher YsHsjx_202206
[*LSRA-Eth-Trunk1] mpls rsvp-te authentication handshake
[*LSRA-Eth-Trunk1] mpls rsvp-te authentication window-size 32
[*LSRA-Eth-Trunk1] commit
# Configure LSRB.
[~LSRB] interface eth-trunk 1
[~LSRB-Eth-Trunk1] mpls rsvp-te authentication cipher YsHsjx_202206
[*LSRB-Eth-Trunk1] mpls rsvp-te authentication handshake
[*LSRB-Eth-Trunk1] mpls rsvp-te authentication window-size 32
[*LSRB-Eth-Trunk1] commit
- Verify the configuration.
Run the reset mpls rsvp-te and display interface tunnel commands in sequence on LSRA. The tunnel interface is Up.
Run the display mpls rsvp-te interface command on LSRA or LSRB. RSVP authentication information is displayed.
[~LSRA] display mpls rsvp-te interface eth-trunk 1
Interface: Eth-Trunk1 Interface Address: 10.1.1.1 Interface state: UP Interface Index: 0x15 Total-BW: 0 Used-BW: 0 Hello configured: NO Num of Neighbors: 1 SRefresh feature: DISABLE SRefresh Interval: 30 sec Mpls Mtu: 1500 Retransmit Interval: 500 msec Increment Value: 1 Authentication: ENABLE Challenge: ENABLE WindowSize: 32 Next Seq # to be sent: 486866945 12 Key ID: 0x0101051d0101 Bfd Enabled: -- Bfd Min-Tx: -- Bfd Min-Rx: -- Bfd Detect-Multi: -- RSVP instance name: RSVP0
Configuration Files
LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
interface Eth-Trunk1
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
mpls rsvp-te authentication cipher O'W3[_\M"`!./a!1$H@GYA!!
mpls rsvp-te authentication handshake
mpls rsvp-te authentication window-size 32
#
interface GigabitEthernet0/1/0
undo shutdown
eth-trunk 1
#
interface GigabitEthernet0/1/8
undo shutdown
eth-trunk 1
#
interface GigabitEthernet0/1/16
undo shutdown
eth-trunk 1
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 1
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
#
interface Eth-Trunk1
ip address 10.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
mpls rsvp-te authentication cipher O'W3[_\M"`!./a!1$H@GYA!!
mpls rsvp-te authentication handshake
mpls rsvp-te authentication window-size 32
#
interface GigabitEthernet0/1/0
undo shutdown
eth-trunk 1
#
interface GigabitEthernet0/1/8
undo shutdown
eth-trunk 1
#
interface GigabitEthernet0/1/16
undo shutdown
eth-trunk 1
#
interface GigabitEthernet0/1/24
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
return
LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 3.3.3.3 0.0.0.0
network 10.2.1.0 0.0.0.255
#
return
Example for Configuring the IP-Prefix Tunnel Function to Automatically Establish MPLS TE Tunnels in a Batch
This section provides an example for configuring the IP-prefix tunnel function to automatically establish MPLS TE tunnels in a batch.
Networking Requirements
In Figure 1-2244, a customer expects to establish MPLS TE tunnels to form a full-mesh network and configure Auto FRR for each tunnel. Establishing the tunnels one by one is laborious and complex. In this case, the IP-prefix tunnel function can be configured to automatically establish MPLS TE tunnels in a batch.
| Device Name | Interface Name | IP Address and Mask |
|---|---|---|
| LSRA | Loopback0 | 1.1.1.9/32 |
| | GE 0/1/0 | 10.1.1.1/24 |
| | GE 0/1/1 | 10.1.2.1/24 |
| LSRB | Loopback0 | 2.2.2.9/32 |
| | GE 0/1/0 | 10.1.1.2/24 |
| | GE 0/1/1 | 10.1.3.1/24 |
| LSRC | Loopback0 | 3.3.3.9/32 |
| | GE 0/1/1 | 10.1.2.2/24 |
| | GE 0/1/2 | 10.1.3.2/24 |
Configuration Roadmap
The configuration roadmap is as follows:
Configure IS-IS and IS-IS TE.
Enable MPLS TE and Auto FRR globally on each device.
Configure an IP prefix list.
Configure a P2P TE tunnel template.
Configure the automatic primary tunnel function.
Data Preparation
To complete the configuration, you need the following data:
IP address of each interface on each node: values shown in Figure 1-2244
LSR ID of each node: loopback addresses shown in Figure 1-2244
IS-IS process number (1), IS-IS level (level-2), and network entity name of each node:
- LSRA: 10.0000.0000.0001.00
- LSRB: 10.0000.0000.0002.00
- LSRC: 10.0000.0000.0003.00
IP prefix name on each node: te-tunnel
P2P TE tunnel template name on each node: te-tunnel
Procedure
- Assign an IP address to each interface. For configuration details, see Configuration Files in this section.
- Configure IS-IS and IS-IS TE. For configuration details, see Configuration Files in this section.
- Enable MPLS TE and Auto FRR globally on each device. For configuration details, see Configuration Files in this section.
- Configure an IP prefix list.
# Configure LSRA.
[~LSRA] ip ip-prefix te-tunnel permit 2.2.2.9 32
[*LSRA] ip ip-prefix te-tunnel permit 3.3.3.9 32
[*LSRA] commit
The configurations on LSRB and LSRC are similar to the configuration on LSRA. For configuration details, see Configuration Files in this section.
- Configure a P2P TE tunnel template.
# Configure LSRA.
[~LSRA] mpls te p2p-template te-tunnel
[*LSRA-te-p2p-template-te-tunnel] bandwidth ct0 1000
[*LSRA-te-p2p-template-te-tunnel] fast-reroute
[*LSRA-te-p2p-template-te-tunnel] commit
[~LSRA-te-p2p-template-te-tunnel] quit
The configurations on LSRB and LSRC are similar to the configuration on LSRA. For configuration details, see Configuration Files in this section.
- Configure the automatic primary tunnel function.
# Configure LSRA.
[~LSRA] mpls te auto-primary-tunnel ip-prefix te-tunnel p2p-template te-tunnel
[*LSRA] commit
The configurations on LSRB and LSRC are similar to the configuration on LSRA. For configuration details, see Configuration Files in this section.
- Verify the configuration.
# After completing the preceding configuration, run the display mpls te tunnel command on LSRA. The command output shows that MPLS TE tunnels have been established.
[~LSRA] display mpls te tunnel * means the LSP is detour LSP ------------------------------------------------------------------------------- Ingress LsrId Destination LSPID In/OutLabel R Tunnel-name ------------------------------------------------------------------------------- 1.1.1.9 2.2.2.9 16 -/3 I AutoTunnel32769 2.2.2.9 1.1.1.9 10 3/- E AutoTunnel32769 1.1.1.9 3.3.3.9 17 -/3 I AutoTunnel32770 3.3.3.9 1.1.1.9 9 3/- E AutoTunnel32770 1.1.1.9 2.2.2.9 13 -/48060 I AutoBypassTunnel_1.1.1.9_2.2.2.9_32771 2.2.2.9 3.3.3.9 8 48061/3 T AutoBypassTunnel_2.2.2.9_3.3.3.9_32771 3.3.3.9 2.2.2.9 7 48060/3 T AutoBypassTunnel_3.3.3.9_2.2.2.9_32771 1.1.1.9 3.3.3.9 15 -/48060 I AutoBypassTunnel_1.1.1.9_3.3.3.9_32772 2.2.2.9 1.1.1.9 9 3/- E AutoBypassTunnel_2.2.2.9_1.1.1.9_32772 3.3.3.9 1.1.1.9 8 3/- E AutoBypassTunnel_3.3.3.9_1.1.1.9_32772 ------------------------------------------------------------------------------- R: Role, I: Ingress, T: Transit, E: Egress
Obtain a tunnel name, for example, AutoTunnel32769, displayed in the Tunnel-name column. Run the display mpls te tunnel-interface auto-primary-tunnel AutoTunnel32769 command to view detailed information about the specified tunnel.
Configuration Files
LSRA configuration file
# sysname LSRA # mpls lsr-id 1.1.1.9 # mpls mpls te mpls te auto-frr mpls rsvp-te mpls te cspf # mpls te p2p-template te-tunnel record-route label bandwidth ct0 1000 fast-reroute # mpls te auto-primary-tunnel ip-prefix te-tunnel p2p-template te-tunnel # isis 1 is-level level-2 cost-style wide network-entity 10.0000.0000.0001.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.1 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.2.1 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack0 ip address 1.1.1.9 255.255.255.255 isis enable 1 # ip ip-prefix te-tunnel index 10 permit 2.2.2.9 32 ip ip-prefix te-tunnel index 20 permit 3.3.3.9 32 # return
LSRB configuration file
# sysname LSRB # mpls lsr-id 2.2.2.9 # mpls mpls te mpls te auto-frr mpls rsvp-te mpls te cspf # mpls te p2p-template te-tunnel record-route label bandwidth ct0 1000 fast-reroute # mpls te auto-primary-tunnel ip-prefix te-tunnel p2p-template te-tunnel # isis 1 is-level level-2 cost-style wide network-entity 10.0000.0000.0002.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.2 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.3.1 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack0 ip address 2.2.2.9 255.255.255.255 isis enable 1 # ip ip-prefix te-tunnel index 10 permit 1.1.1.9 32 ip ip-prefix te-tunnel index 20 permit 3.3.3.9 32 # return
LSRC configuration file
# sysname LSRC # mpls lsr-id 3.3.3.9 # mpls mpls te mpls te auto-frr mpls rsvp-te mpls te cspf # mpls te p2p-template te-tunnel record-route label bandwidth ct0 1000 fast-reroute # mpls te auto-primary-tunnel ip-prefix te-tunnel p2p-template te-tunnel # isis 1 is-level level-2 cost-style wide network-entity 10.0000.0000.0003.00 traffic-eng level-2 # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.2.2 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface GigabitEthernet0/1/2 undo shutdown ip address 10.1.3.2 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack0 ip address 3.3.3.9 255.255.255.255 isis enable 1 # ip ip-prefix te-tunnel index 10 permit 1.1.1.9 32 ip ip-prefix te-tunnel index 20 permit 2.2.2.9 32 # return
Example for Configuring the Affinity Attribute of an MPLS TE Tunnel
Networking Requirements
On the network shown in Figure 1-2245, the bandwidth of the link between LSRA and LSRB is 50 Mbit/s. The maximum reservable bandwidth of other links is 100 Mbit/s, and BC0 bandwidth is 100 Mbit/s.
Two tunnels named Tunnel1 and Tunnel2 from LSRA to LSRC are established on LSRA. Both tunnels require 40 Mbit/s of bandwidth. The combined bandwidth of these two tunnels is 80 Mbit/s, higher than the bandwidth of 50 Mbit/s provided by the shared link between LSRA and LSRB. In addition, Tunnel2 has a higher priority than Tunnel1, and preemption is enabled.
In this example, administrative group attributes, affinities, and masks for links are used to allow Tunnel1 and Tunnel2 on LSRA to use separate links between LSRB and LSRC.
Configuration Roadmap
The configuration roadmap is as follows:
Configure an RSVP-TE tunnel. See "Configuration Roadmap" in Example for Configuring an RSVP-TE Tunnel.
Configure an administrative group attribute on an outbound interface of every LSR along each RSVP TE tunnel.
Configure the affinity and mask for each tunnel based on the administrative groups of links and networking requirements.
Set a priority value for each tunnel.
Data Preparation
To complete the configuration, you need the following data:
OSPF process ID and area ID for every LSR
Maximum reservable bandwidth and BC bandwidth for every link along each tunnel
Administrative groups for links between LSRA and LSRB and between LSRB and LSRC
Affinity and mask for each tunnel
Tunnel interface number, source and destination IP addresses, bandwidth, priority values, and RSVP-TE signaling protocol of the tunnel
Procedure
- Assign an IP address and its mask to every interface.
Assign an IP address and its mask to every physical interface and configure a loopback interface address as an LSR ID on every node according to Figure 1-2245.
For configuration details, see Configuration Files in this section.
- Configure an IGP.
Configure OSPF on every LSR to advertise every network segment route and host route.
For configuration details, see Configuration Files in this section.
- Configure basic MPLS functions, enable MPLS TE, RSVP-TE, and OSPF TE on every LSR, and enable CSPF on the ingress.
# Configure basic MPLS functions and enable MPLS TE and RSVP-TE on every LSR.
The following example uses the command output on LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] mpls
[*LSRA-GigabitEthernet0/1/0] mpls te
[*LSRA-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRA-GigabitEthernet0/1/0] quit
# Enable OSPF TE on every LSR. The following example uses the command output on LSRA.
[*LSRA] ospf
[*LSRA-ospf-1] opaque-capability enable
[*LSRA-ospf-1] area 0
[*LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[*LSRA-ospf-1-area-0.0.0.0] quit
[*LSRA-ospf-1] quit
Repeat this step for LSRB and LSRC. For configuration details, see Configuration Files in this section.
# Enable CSPF on the ingress LSRA.
[*LSRA] mpls
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] commit
[~LSRA-mpls] quit
- Configure MPLS TE attributes on the outbound interface of every LSR.
# Set the maximum reservable link bandwidth and BC0 bandwidth to 50 Mbit/s on LSRA.
[~LSRA] interface gigabitethernet 0/1/0
[~LSRA-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 50000
[*LSRA-GigabitEthernet0/1/0] mpls te bandwidth bc0 50000
# Set the administrative group to 0x10001 on LSRA.
[*LSRA-GigabitEthernet0/1/0] mpls te link administrative group 10001
[*LSRA-GigabitEthernet0/1/0] commit
[~LSRA-GigabitEthernet0/1/0] quit
# Configure MPLS TE attributes on LSRB.
[~LSRB] interface gigabitethernet 0/1/8
[~LSRB-GigabitEthernet0/1/8] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRB-GigabitEthernet0/1/8] mpls te bandwidth bc0 100000
[*LSRB-GigabitEthernet0/1/8] mpls te link administrative group 10101
[*LSRB-GigabitEthernet0/1/8] quit
[*LSRB] interface gigabitethernet 0/1/16
[*LSRB-GigabitEthernet0/1/16] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRB-GigabitEthernet0/1/16] mpls te bandwidth bc0 100000
[*LSRB-GigabitEthernet0/1/16] mpls te link administrative group 10011
[*LSRB-GigabitEthernet0/1/16] commit
[~LSRB-GigabitEthernet0/1/16] quit
After completing the configurations, run the display mpls te cspf tedb node command on LSRA. TEDB information contains maximum available and reservable bandwidth for every link, and the administrative group attribute in the Color field.
[~LSRA] display mpls te cspf tedb node
Router ID: 1.1.1.1 IGP Type: OSPF Process ID: 1 IGP Area: 0 MPLS-TE Link Count: 1 Link[1]: OSPF Router ID: 192.168.1.1 Opaque LSA ID: 1.0.0.1 Interface IP Address: 192.168.1.1 DR Address: 192.168.1.2 IGP Area: 0 Link Type: Multi-access Link Status: Active IGP Metric: 1 TE Metric: 1 Color: 0x10001 Bandwidth Allocation Model : - Maximum Link-Bandwidth: 50000 (kbps) Maximum Reservable Bandwidth: 50000 (kbps) Operational Mode of Router: TE Bandwidth Constraints: Local Overbooking Multiplier: BC[0]: 50000 (kbps) LOM[0]: 1 BW Unreserved: Class ID: [0]: 50000 (kbps), [1]: 50000 (kbps) [2]: 50000 (kbps), [3]: 50000 (kbps) [4]: 50000 (kbps), [5]: 50000 (kbps) [6]: 50000 (kbps), [7]: 50000 (kbps) Router ID: 2.2.2.2 IGP Type: OSPF Process ID: 1 IGP Area: 0 MPLS-TE Link Count: 3 Link[1]: OSPF Router ID: 192.168.1.2 Opaque LSA ID: 1.0.0.1 Interface IP Address: 192.168.1.2 DR Address: 192.168.1.2 IGP Area: 0 Link Type: Multi-access Link Status: Active IGP Metric: 1 TE Metric: 1 Color: 0x0 Bandwidth Allocation Model : - Maximum Link-Bandwidth: 0 (kbps) Maximum Reservable Bandwidth: 0 (kbps) Operational Mode of Router: TE Bandwidth Constraints: Local Overbooking Multiplier: BC[0]: 0 (kbps) LOM[0]: 1 BW Unreserved: Class ID: [0]: 0 (kbps), [1]: 0 (kbps) [2]: 0 (kbps), [3]: 0 (kbps) [4]: 0 (kbps), [5]: 0 (kbps) [6]: 0 (kbps), [7]: 0 (kbps) Link[2]: OSPF Router ID: 192.168.1.2 Opaque LSA ID: 1.0.0.3 Interface IP Address: 192.168.2.1 DR Address: 192.168.2.1 IGP Area: 0 Link Type: Multi-access Link Status: Active IGP Metric: 1 TE Metric: 1 Color: 0x10101 Bandwidth Allocation Model : - Maximum Link-Bandwidth: 100000 (kbps) Maximum Reservable Bandwidth: 100000 (kbps) Operational Mode of Router: TE Bandwidth Constraints: Local Overbooking Multiplier: BC[0]: 100000 (kbps) LOM[0]: 1 BW Unreserved: Class ID: [0]: 100000 (kbps), [1]: 100000 (kbps) [2]: 100000 (kbps), [3]: 100000 (kbps) [4]: 100000 (kbps), [5]: 100000 (kbps) [6]: 100000 (kbps), [7]: 100000 (kbps) Link[3]: OSPF Router ID: 192.168.1.2 Opaque LSA ID: 1.0.0.2 Interface IP Address: 192.168.3.1 DR Address: 192.168.3.1 IGP Area: 0 Link Type: Multi-access Link Status: Active IGP Metric: 1 TE Metric: 1 Color: 0x10011 Bandwidth Allocation Model : - Maximum Link-Bandwidth: 100000 (kbps) Maximum Reservable Bandwidth: 100000 (kbps) Operational Mode of Router: TE Bandwidth Constraints: Local Overbooking Multiplier: BC[0]: 100000 (kbps) LOM[0]: 1 BW Unreserved: Class ID: [0]: 100000 (kbps), [1]: 100000 (kbps) [2]: 100000 (kbps), [3]: 100000 (kbps) [4]: 100000 (kbps), [5]: 100000 (kbps) [6]: 100000 (kbps), [7]: 100000 (kbps) Router ID: 3.3.3.3 IGP Type: OSPF Process ID: 1 IGP Area: 0 MPLS-TE Link Count: 2 Link[1]: OSPF Router ID: 4.4.4.4 Opaque LSA ID: 1.0.0.2 Interface IP Address: 192.168.2.2 DR Address: 192.168.2.1 IGP Area: 0 Link Type: Multi-access Link Status: Active IGP Metric: 1 TE Metric: 1 Color: 0x0 Bandwidth Allocation Model : - Maximum Link-Bandwidth: 0 (kbps) Maximum Reservable Bandwidth: 0 (kbps) Operational Mode of Router: TE Bandwidth Constraints: Local Overbooking Multiplier: BC[0]: 0 (kbps) LOM[0]: 1 BW Unreserved: Class ID: [0]: 0 (kbps), [1]: 0 (kbps) [2]: 0 (kbps), [3]: 0 (kbps) [4]: 0 (kbps), [5]: 0 (kbps) [6]: 0 (kbps), [7]: 0 (kbps) Link[2]: OSPF Router ID: 4.4.4.4 Opaque LSA ID: 1.0.0.1 Interface IP Address: 192.168.3.2 DR Address: 192.168.3.1 IGP Area: 0 Link Type: Multi-access Link Status: Active IGP Metric: 1 TE Metric: 1 Color: 0x0 Bandwidth Allocation Model : - Maximum Link-Bandwidth: 0 (kbps) Maximum Reservable Bandwidth: 0 (kbps) 
Operational Mode of Router: TE Bandwidth Constraints: Local Overbooking Multiplier: BC[0]: 0 (kbps) LOM[0]: 1 BW Unreserved: Class ID: [0]: 0 (kbps), [1]: 0 (kbps) [2]: 0 (kbps), [3]: 0 (kbps) [4]: 0 (kbps), [5]: 0 (kbps) [6]: 0 (kbps), [7]: 0 (kbps)
- Configure an MPLS TE tunnel.
# Configure a tunnel named Tunnel1 on LSRA.
[~LSRA] interface tunnel1
[*LSRA-Tunnel1] ip address unnumbered interface loopback 1
[*LSRA-Tunnel1] tunnel-protocol mpls te
[*LSRA-Tunnel1] destination 3.3.3.3
[*LSRA-Tunnel1] mpls te tunnel-id 1
[*LSRA-Tunnel1] mpls te bandwidth ct0 40000
[*LSRA-Tunnel1] mpls te affinity property 10101 mask 11011
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit
The default setup and hold priorities (lowest: 7) are used.
The mask of Tunnel1's affinity attribute is 0x11011. Therefore, the first two bits and the last two bits of the affinity attribute are compared, while the third (middle) bit is ignored. Because the affinity value of Tunnel1 is 0x10101, this tunnel selects a link whose administrative group attribute has the second and fourth bits set to 0 and at least one of the first and fifth bits set to 1. According to these rules, an administrative group value of 0x10001, 0x10000, 0x00001, 0x10101, 0x10100, or 0x00101 meets the requirements. Tunnel1 therefore selects the path through GE 0/1/0 of LSRA (administrative group 0x10001) and GE 0/1/8 of LSRB (administrative group 0x10101).
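The digit-by-digit rule above can also be expressed as a bit operation. The following Python sketch implements the matching rule exactly as described in this example (it is an illustration of the rule only, not router code) and checks the administrative group values used here for Tunnel1.

def affinity_matches(admin_group: int, affinity: int, mask: int) -> bool:
    checked = admin_group & mask      # only bits selected by the mask are compared
    must_be_zero = mask & ~affinity   # compared bits where the affinity is 0 must be 0
    may_be_one = mask & affinity      # at least one compared bit where the affinity is 1 must be 1
    if checked & must_be_zero:
        return False
    return (checked & may_be_one) != 0 if may_be_one else True

# Tunnel1: affinity 0x10101, mask 0x11011
print(affinity_matches(0x10001, 0x10101, 0x11011))  # True  -> GE 0/1/0 of LSRA
print(affinity_matches(0x10101, 0x10101, 0x11011))  # True  -> GE 0/1/8 of LSRB
print(affinity_matches(0x10011, 0x10101, 0x11011))  # False -> GE 0/1/16 of LSRB is excluded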
After completing the configuration, run the display mpls te tunnel-interface command on LSRA. The tunnel status is displayed.
[~LSRA] display mpls te tunnel-interface
Tunnel Name : Tunnel1 Signalled Tunnel Name: - Tunnel State Desc : CR-LSP is Up Tunnel Attributes : Active LSP : Primary LSP Traffic Switch : - Session ID : 1 Ingress LSR ID : 1.1.1.1 Egress LSR ID: 3.3.3.3 Admin State : UP Oper State : UP Signaling Protocol : RSVP FTid : 1 Tie-Breaking Policy : None Metric Type : None Bfd Cap : None Reopt : Disabled Reopt Freq : - Inter-area Reopt : Disabled Auto BW : Disabled Threshold : 0 percent Current Collected BW: 0 kbps Auto BW Freq : 0 Min BW : 0 kbps Max BW : 0 kbps Offload : Disabled Offload Freq : - Low Value : - High Value : - Readjust Value : - Offload Explicit Path Name: Tunnel Group : - Interfaces Protected: - Excluded IP Address : - Referred LSP Count : 0 Primary Tunnel : - Pri Tunn Sum : - Backup Tunnel : - Group Status : Up Oam Status : - IPTN InLabel : - Tunnel BFD Status : - BackUp LSP Type : None BestEffort : Enabled Secondary HopLimit : - BestEffort HopLimit : - Secondary Explicit Path Name: - Secondary Affinity Prop/Mask: 0x0/0x0 BestEffort Affinity Prop/Mask: 0x0/0x0 IsConfigLspConstraint: - Hot-Standby Revertive Mode: Revertive Hot-Standby Overlap-path: Disabled Hot-Standby Switch State: CLEAR Bit Error Detection: Disabled Bit Error Detection Switch Threshold: - Bit Error Detection Resume Threshold: - Ip-Prefix Name : - P2p-Template Name : - PCE Delegate : No LSP Control Status : Local control Path Verification : -- Entropy Label : None Auto BW Remain Time : 200 s Reopt Remain Time : 100 s Metric Inherit IGP : None Binding Sid : - Reverse Binding Sid : - Self-Ping : Disable Self-Ping Duration : 1800 sec FRR Attr Source : - Is FRR degrade down : No Primary LSP ID : 1.1.1.1:19 LSP State : UP LSP Type : Primary Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : - Hop Limit: - Record Route : Disabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Enabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : CSPF Create Modify LSP Reason: - Self-Ping Status : -
Run the display mpls te cspf tedb node command on LSRA. TEDB information contains bandwidth for every link.
[~LSRA] display mpls te cspf tedb node
Router ID: 1.1.1.1 IGP Type: OSPF Process ID: 1 IGP Area: 0 MPLS-TE Link Count: 1 Link[1]: OSPF Router ID: 192.168.1.1 Opaque LSA ID: 1.0.0.1 Interface IP Address: 192.168.1.1 DR Address: 192.168.1.2 IGP Area: 0 Link Type: Multi-access Link Status: Active IGP Metric: 1 TE Metric: 1 Color: 0x10001 Bandwidth Allocation Model : - Maximum Link-Bandwidth: 50000 (kbps) Maximum Reservable Bandwidth: 50000 (kbps) Operational Mode of Router: TE Bandwidth Constraints: Local Overbooking Multiplier: BC[0]: 50000 (kbps) LOM[0]: 1 BW Unreserved: Class ID: [0]: 50000 (kbps), [1]: 50000 (kbps) [2]: 50000 (kbps), [3]: 50000 (kbps) [4]: 50000 (kbps), [5]: 50000 (kbps) [6]: 50000 (kbps), [7]: 10000 (kbps) Router ID: 2.2.2.2 IGP Type: OSPF Process ID: 1 IGP Area: 0 MPLS-TE Link Count: 3 Link[1]: OSPF Router ID: 192.168.1.2 Opaque LSA ID: 1.0.0.1 Interface IP Address: 192.168.1.2 DR Address: 192.168.1.2 IGP Area: 0 Link Type: Multi-access Link Status: Active IGP Metric: 1 TE Metric: 1 Color: 0x0 Bandwidth Allocation Model : - Maximum Link-Bandwidth: 0 (kbps) Maximum Reservable Bandwidth: 0 (kbps) Operational Mode of Router: TE Bandwidth Constraints: Local Overbooking Multiplier: BC[0]: 0 (kbps) LOM[0]: 1 BW Unreserved: Class ID: [0]: 0 (kbps), [1]: 0 (kbps) [2]: 0 (kbps), [3]: 0 (kbps) [4]: 0 (kbps), [5]: 0 (kbps) [6]: 0 (kbps), [7]: 0 (kbps) Link[2]: OSPF Router ID: 192.168.1.2 Opaque LSA ID: 1.0.0.3 Interface IP Address: 192.168.2.1 DR Address: 192.168.2.1 IGP Area: 0 Link Type: Multi-access Link Status: Active IGP Metric: 1 TE Metric: 1 Color: 0x10101 Bandwidth Allocation Model : - Maximum Link-Bandwidth: 100000 (kbps) Maximum Reservable Bandwidth: 100000 (kbps) Operational Mode of Router: TE Bandwidth Constraints: Local Overbooking Multiplier: BC[0]: 100000 (kbps) LOM[0]: 1 BW Unreserved: Class ID: [0]: 100000 (kbps), [1]: 100000 (kbps) [2]: 100000 (kbps), [3]: 100000 (kbps) [4]: 100000 (kbps), [5]: 100000 (kbps) [6]: 100000 (kbps), [7]: 60000 (kbps) Link[3]: OSPF Router ID: 192.168.1.2 Opaque LSA ID: 1.0.0.2 Interface IP Address: 192.168.3.1 DR Address: 192.168.3.1 IGP Area: 0 Link Type: Multi-access Link Status: Active IGP Metric: 1 TE Metric: 1 Color: 0x10011 Bandwidth Allocation Model : - Maximum Link-Bandwidth: 100000 (kbps) Maximum Reservable Bandwidth: 100000 (kbps) Operational Mode of Router: TE Bandwidth Constraints: Local Overbooking Multiplier: BC[0]: 100000 (kbps) LOM[0]: 1 BW Unreserved: Class ID: [0]: 100000 (kbps), [1]: 100000 (kbps) [2]: 100000 (kbps), [3]: 100000 (kbps) [4]: 100000 (kbps), [5]: 100000 (kbps) [6]: 100000 (kbps), [7]: 100000 (kbps) Router ID: 3.3.3.3 IGP Type: OSPF Process ID: 1 IGP Area: 0 MPLS-TE Link Count: 2 Link[1]: OSPF Router ID: 4.4.4.4 Opaque LSA ID: 1.0.0.2 Interface IP Address: 192.168.2.2 DR Address: 192.168.2.1 IGP Area: 0 Link Type: Multi-access Link Status: Active IGP Metric: 1 TE Metric: 1 Color: 0x0 Bandwidth Allocation Model : - Maximum Link-Bandwidth: 0 (kbps) Maximum Reservable Bandwidth: 0 (kbps) Operational Mode of Router: TE Bandwidth Constraints: Local Overbooking Multiplier: BC[0]: 0 (kbps) LOM[0]: 1 BW Unreserved: Class ID: [0]: 0 (kbps), [1]: 0 (kbps) [2]: 0 (kbps), [3]: 0 (kbps) [4]: 0 (kbps), [5]: 0 (kbps) [6]: 0 (kbps), [7]: 0 (kbps) Link[2]: OSPF Router ID: 4.4.4.4 Opaque LSA ID: 1.0.0.1 Interface IP Address: 192.168.3.2 DR Address: 192.168.3.1 IGP Area: 0 Link Type: Multi-access Link Status: Active IGP Metric: 1 TE Metric: 1 Color: 0x0 Bandwidth Allocation Model : - Maximum Link-Bandwidth: 0 (kbps) Maximum Reservable Bandwidth: 0 (kbps) 
Operational Mode of Router: TE Bandwidth Constraints: Local Overbooking Multiplier: BC[0]: 0 (kbps) LOM[0]: 1 BW Unreserved: Class ID: [0]: 0 (kbps), [1]: 0 (kbps) [2]: 0 (kbps), [3]: 0 (kbps) [4]: 0 (kbps), [5]: 0 (kbps) [6]: 0 (kbps), [7]: 0 (kbps)
The BW Unreserved field indicates the remaining bandwidth that can be reserved on a link for tunnels at each priority level. The command output shows that the value of [7] has changed on the outbound interface of each node along the tunnel, indicating that 40 Mbit/s of bandwidth has been successfully reserved for the tunnel. The bandwidth information also matches the tunnel's path, which proves that the affinity and mask match the administrative group of every link.
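As a quick cross-check of the output above, the remaining priority-7 bandwidth on each link along Tunnel1 is simply the maximum reservable bandwidth minus the 40 Mbit/s reservation. The short sketch below reproduces the displayed values; it is illustrative arithmetic only.

max_reservable = {"LSRA GE0/1/0": 50000, "LSRB GE0/1/8": 100000, "LSRB GE0/1/16": 100000}
reserved_ct0 = 40000  # bandwidth requested by Tunnel1, in kbit/s

for link in ("LSRA GE0/1/0", "LSRB GE0/1/8"):                  # links on Tunnel1's path
    print(link, "[7] =", max_reservable[link] - reserved_ct0)  # 10000 and 60000, as displayed
print("LSRB GE0/1/16 [7] =", max_reservable["LSRB GE0/1/16"])  # unchanged: 100000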
Alternatively, run the display mpls te tunnel diagnostic command to check the outbound interfaces of links along the tunnel on LSRB.
[~LSRB]display mpls te tunnel diagnostic
* means the LSP is detour LSP -------------------------------------------------------------------------------- LSP-Id Destination In/Out-If -------------------------------------------------------------------------------- 1.1.1.1:1:3 3.3.3.3 GE0/1/0/GE0/1/8 --------------------------------------------------------------------------------
# Configure a tunnel named Tunnel2 on LSRA.
[~LSRA] interface tunnel2
[*LSRA-Tunnel2] ip address unnumbered interface loopback 1
[*LSRA-Tunnel2] tunnel-protocol mpls te
[*LSRA-Tunnel2] destination 3.3.3.3
[*LSRA-Tunnel2] mpls te tunnel-id 101
[*LSRA-Tunnel2] mpls te bandwidth ct0 40000
[*LSRA-Tunnel2] mpls te affinity property 10011 mask 11101
[*LSRA-Tunnel2] mpls te priority 6
[*LSRA-Tunnel2] commit
[~LSRA-Tunnel2] quit
The mask of Tunnel2's affinity attribute is 0x11101. Therefore, the first three bits and the last bit of the affinity attribute are compared, while the fourth bit is ignored. Because the affinity value of Tunnel2 is 0x10011, this tunnel selects a link whose administrative group attribute has the second and third bits set to 0 and at least one of the first and fifth bits set to 1. According to these rules, an administrative group value of 0x10001, 0x10000, 0x00001, 0x10011, 0x10010, or 0x00011 meets the requirements. Tunnel2 therefore selects the path through GE 0/1/0 of LSRA (administrative group 0x10001) and GE 0/1/16 of LSRB (administrative group 0x10011).
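Reusing the illustrative affinity_matches() helper from the Tunnel1 sketch (that helper is part of this example's illustration, not a device function), the same check applied to Tunnel2's affinity and mask shows why the other LSRB-LSRC link is chosen.

# Tunnel2: affinity 0x10011, mask 0x11101
print(affinity_matches(0x10001, 0x10011, 0x11101))  # True  -> GE 0/1/0 of LSRA
print(affinity_matches(0x10011, 0x10011, 0x11101))  # True  -> GE 0/1/16 of LSRB
print(affinity_matches(0x10101, 0x10011, 0x11101))  # False -> GE 0/1/8 of LSRB is excluded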
- Verify the configuration.
After completing the configurations, run the display interface tunnel or display mpls te tunnel-interface command on LSRA. The command output shows that the status of Tunnel1 is Down. This is because the maximum reservable bandwidth of the shared link is insufficient for both tunnels, and Tunnel2, which has a higher priority, has preempted the bandwidth reserved for Tunnel1.
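The preemption outcome can be reasoned about with the standard RSVP-TE setup/holding priority rule, in which numerically lower values indicate higher priority and a new LSP may preempt an established one only if its setup priority is better than the established LSP's holding priority. The sketch below is a simplified model under that assumption, not the device's algorithm; Tunnel2's holding priority is assumed to equal its configured setup priority of 6.

def can_preempt(new_setup_priority: int, existing_hold_priority: int) -> bool:
    return new_setup_priority < existing_hold_priority  # 0 is the best priority

link_reservable = 50000                          # shared LSRA-LSRB link, kbit/s
tunnel1 = {"bw": 40000, "setup": 7, "hold": 7}   # default priorities
tunnel2 = {"bw": 40000, "setup": 6, "hold": 6}   # mpls te priority 6 (hold assumed equal)

print(tunnel1["bw"] + tunnel2["bw"] <= link_reservable)  # False: 80000 kbit/s > 50000 kbit/s
print(can_preempt(tunnel2["setup"], tunnel1["hold"]))    # True: Tunnel2 takes the reserved bandwidth
print(can_preempt(tunnel1["setup"], tunnel2["hold"]))    # False: Tunnel1 cannot reclaim it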
Run the display mpls te cspf tedb node command. TEDB information contains the bandwidth for every link, which indicates that Tunnel2 indeed passes through GE 0/1/16 of LSRB.
Alternatively, run the display mpls te tunnel diagnostic command to check outbound interfaces of links along the tunnel on LSRB.
[~LSRB] display mpls te tunnel diagnostic
* means the LSP is detour LSP -------------------------------------------------------------------------------- LSP-Id Destination In/Out-If -------------------------------------------------------------------------------- 1.1.1.1:1:4 3.3.3.3 GE0/1/0/GE0/1/16 --------------------------------------------------------------------------------
Configuration Files
LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 1.1.1.1 0.0.0.0
network 192.168.1.0 0.0.0.255
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 192.168.1.1 255.255.255.0
mpls
mpls te
mpls te link administrative group 10001
mpls te bandwidth max-reservable-bandwidth 50000
mpls te bandwidth bc0 50000
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 1
mpls te affinity property 10101 mask 11011
mpls te bandwidth ct0 40000
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 101
mpls te priority 6
mpls te affinity property 10011 mask 11101
mpls te bandwidth ct0 40000
#
return
LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 2.2.2.2 0.0.0.0
network 192.168.1.0 0.0.0.255
network 192.168.2.0 0.0.0.255
network 192.168.3.0 0.0.0.255
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 192.168.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 192.168.2.1 255.255.255.0
mpls
mpls te
mpls te link administrative group 10101
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
mpls rsvp-te
#
interface GigabitEthernet0/1/16
undo shutdown
ip address 192.168.3.1 255.255.255.0
mpls
mpls te
mpls te link administrative group 10011
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
return
LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 3.3.3.3 0.0.0.0
network 192.168.2.0 0.0.0.255
network 192.168.3.0 0.0.0.255
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 192.168.2.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 192.168.3.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
return
Example for Configuring an Inter-area Tunnel
Networking Requirements
Figure 1-2246 illustrates a network:
IS-IS runs on LSRA, LSRB, LSRC, LSRD, and LSRE.
LSRA and LSRE are level-1 routers.
LSRB and LSRD are level-1-2 routers.
LSRC is a level-2 router.
RSVP-TE is used to establish a TE tunnel between LSRA and LSRE over IS-IS areas. The bandwidth for the TE tunnel is 20 Mbit/s.
Both the maximum reservable bandwidth and BC0 bandwidth for every link along the TE tunnel are 100 Mbit/s.
Configuration Roadmap
The configuration roadmap is as follows:
Assign an IP address to every interface and configure a loopback address that is used as an LSR ID on every LSR.
Enable IS-IS globally and enable IS-IS TE.
Configure a loose explicit path that traverses LSRB, LSRC, and LSRD, which function as area border routers (ABRs).
Configure MPLS RSVP-TE.
Set bandwidth attributes for every outbound interface on every LSR along the TE tunnel.
Create a tunnel interface on the ingress and configure the source and destination IP addresses, protocol, ID, RSVP-TE signaling protocol, and bandwidth for the tunnel.
Data Preparation
To complete the configuration, you need the following data:
Origin AS number, IS-IS level, and area ID of every LSR
Maximum reservable bandwidth and BC bandwidth for every link along the TE tunnel
Tunnel interface number, source and destination addresses, ID, RSVP-TE signaling protocol, and bandwidth of the tunnel
Procedure
- Assign an IP address and its mask to every interface.
Assign an IP address and a mask to each interface according to Figure 1-2246. The configuration details are not provided.
- Configure IS-IS.
# Configure LSRA.
[~LSRA] isis 1
[*LSRA-isis-1] network-entity 00.0005.0000.0000.0001.00
[*LSRA-isis-1] is-level level-1
[*LSRA-isis-1] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] isis enable 1
[*LSRA-GigabitEthernet0/1/0] quit
[*LSRA] interface loopback 1
[*LSRA-LoopBack1] isis enable 1
[*LSRA-LoopBack1] commit
[~LSRA-LoopBack1] quit
# Configure LSRB.
[~LSRB] isis 1
[*LSRB-isis-1] network-entity 00.0005.0000.0000.0002.00
[*LSRB-isis-1] is-level level-1-2
[*LSRB-isis-1] import-route isis level-2 into level-1
[*LSRB-isis-1] quit
[*LSRB] interface gigabitethernet 0/1/0
[*LSRB-GigabitEthernet0/1/0] isis enable 1
[*LSRB-GigabitEthernet0/1/0] quit
[*LSRB] interface gigabitethernet 0/1/8
[*LSRB-GigabitEthernet0/1/8] isis enable 1
[*LSRB-GigabitEthernet0/1/8] quit
[*LSRB] interface loopback 1
[*LSRB-LoopBack1] isis enable 1
[*LSRB-LoopBack1] commit
[~LSRB-LoopBack1] quit
# Configure LSRC.
[~LSRC] isis 1
[*LSRC-isis-1] network-entity 00.0006.0000.0000.0003.00
[*LSRC-isis-1] is-level level-2
[*LSRC-isis-1] quit
[*LSRC] interface gigabitethernet 0/1/0
[*LSRC-GigabitEthernet0/1/0] isis enable 1
[*LSRC-GigabitEthernet0/1/0] quit
[*LSRC] interface gigabitethernet 0/1/8
[*LSRC-GigabitEthernet0/1/8] isis enable 1
[*LSRC-GigabitEthernet0/1/8] quit
[*LSRC] interface loopback 1
[*LSRC-LoopBack1] isis enable 1
[*LSRC-LoopBack1] commit
[~LSRC-LoopBack1] quit
# Configure LSRD.
[~LSRD] isis 1
[*LSRD-isis-1] network-entity 00.0007.0000.0000.0004.00
[*LSRD-isis-1] is-level level-1-2
[*LSRD-isis-1] import-route isis level-2 into level-1
[*LSRD-isis-1] quit
[*LSRD] interface gigabitethernet 0/1/0
[*LSRD-GigabitEthernet0/1/0] isis enable 1
[*LSRD-GigabitEthernet0/1/0] quit
[*LSRD] interface gigabitethernet 0/1/8
[*LSRD-GigabitEthernet0/1/8] isis enable 1
[*LSRD-GigabitEthernet0/1/8] quit
[*LSRD] interface loopback 1
[*LSRD-LoopBack1] isis enable 1
[*LSRD-LoopBack1] commit
[~LSRD-LoopBack1] quit
# Configure LSRE.
[~LSRE] isis 1
[*LSRE-isis-1] network-entity 00.0007.0000.0000.0005.00
[*LSRE-isis-1] is-level level-1
[*LSRE-isis-1] quit
[*LSRE] interface gigabitethernet 0/1/0
[*LSRE-GigabitEthernet0/1/0] isis enable 1
[*LSRE-GigabitEthernet0/1/0] quit
[*LSRE] interface loopback 1
[*LSRE-LoopBack1] isis enable 1
[*LSRE-LoopBack1] commit
[~LSRE-LoopBack1] quit
After completing the configurations, run the display ip routing-table command on every node. All nodes have learned routes from one another.
- Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF on the ingress of the TE tunnel.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] mpls
[*LSRA-GigabitEthernet0/1/0] mpls te
[*LSRA-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRA-GigabitEthernet0/1/0] commit
[~LSRA-GigabitEthernet0/1/0] quit
# Configure LSRB.
[~LSRB] mpls lsr-id 2.2.2.2
[*LSRB] mpls
[*LSRB-mpls] mpls te
[*LSRB-mpls] mpls rsvp-te
[*LSRB-mpls] quit
[*LSRB] interface gigabitethernet 0/1/0
[*LSRB-GigabitEthernet0/1/0] mpls
[*LSRB-GigabitEthernet0/1/0] mpls te
[*LSRB-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRB-GigabitEthernet0/1/0] quit
[*LSRB] interface gigabitethernet 0/1/8
[*LSRB-GigabitEthernet0/1/8] mpls
[*LSRB-GigabitEthernet0/1/8] mpls te
[*LSRB-GigabitEthernet0/1/8] mpls rsvp-te
[*LSRB-GigabitEthernet0/1/8] commit
[~LSRB-GigabitEthernet0/1/8] quit
# Configure LSRC.
[~LSRC] mpls lsr-id 3.3.3.3
[*LSRC] mpls
[*LSRC-mpls] mpls te
[*LSRC-mpls] mpls rsvp-te
[*LSRC-mpls] quit
[*LSRC] interface gigabitethernet 0/1/0
[*LSRC-GigabitEthernet0/1/0] mpls
[*LSRC-GigabitEthernet0/1/0] mpls te
[*LSRC-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRC-GigabitEthernet0/1/0] quit
[*LSRC] interface gigabitethernet 0/1/8
[*LSRC-GigabitEthernet0/1/8] mpls
[*LSRC-GigabitEthernet0/1/8] mpls te
[*LSRC-GigabitEthernet0/1/8] mpls rsvp-te
[*LSRC-GigabitEthernet0/1/8] commit
[~LSRC-GigabitEthernet0/1/8] quit
# Configure LSRD.
[~LSRD] mpls lsr-id 4.4.4.4
[*LSRD] mpls
[*LSRD-mpls] mpls te
[*LSRD-mpls] mpls rsvp-te
[*LSRD-mpls] quit
[*LSRD] interface gigabitethernet 0/1/0
[*LSRD-GigabitEthernet0/1/0] mpls
[*LSRD-GigabitEthernet0/1/0] mpls te
[*LSRD-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRD-GigabitEthernet0/1/0] quit
[*LSRD] interface gigabitethernet 0/1/8
[*LSRD-GigabitEthernet0/1/8] mpls
[*LSRD-GigabitEthernet0/1/8] mpls te
[*LSRD-GigabitEthernet0/1/8] mpls rsvp-te
[*LSRD-GigabitEthernet0/1/8] commit
[~LSRD-GigabitEthernet0/1/8] quit
# Configure LSRE.
[~LSRE] mpls lsr-id 5.5.5.5
[*LSRE] mpls
[*LSRE-mpls] mpls te
[*LSRE-mpls] mpls rsvp-te
[*LSRE-mpls] quit
[*LSRE] interface gigabitethernet 0/1/0
[*LSRE-GigabitEthernet0/1/0] mpls
[*LSRE-GigabitEthernet0/1/0] mpls te
[*LSRE-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRE-GigabitEthernet0/1/0] commit
[~LSRE-GigabitEthernet0/1/0] quit
- Configure IS-IS TE.
# Configure LSRA.
[~LSRA] isis 1
[~LSRA-isis-1] cost-style wide
[*LSRA-isis-1] traffic-eng level-1
[*LSRA-isis-1] commit
[~LSRA-isis-1] quit
# Configure LSRB.
[~LSRB] isis 1
[~LSRB-isis-1] cost-style wide
[*LSRB-isis-1] traffic-eng level-1-2
[*LSRB-isis-1] commit
[~LSRB-isis-1] quit
# Configure LSRC.
[~LSRC] isis 1
[~LSRC-isis-1] cost-style wide
[*LSRC-isis-1] traffic-eng level-2
[*LSRC-isis-1] commit
[~LSRC-isis-1] quit
# Configure LSRD.
[~LSRD] isis 1
[~LSRD-isis-1] cost-style wide
[*LSRD-isis-1] traffic-eng level-1-2
[*LSRD-isis-1] commit
[~LSRD-isis-1] quit
# Configure LSRE.
[~LSRE] isis 1
[~LSRE-isis-1] cost-style wide
[*LSRE-isis-1] traffic-eng level-1
[*LSRE-isis-1] commit
[~LSRE-isis-1] quit
- Configure a loose explicit path.
[~LSRA] explicit-path atoe
[*LSRA-explicit-path-atoe] next hop 10.1.1.2 include loose
[*LSRA-explicit-path-atoe] next hop 10.2.1.2 include loose
[*LSRA-explicit-path-atoe] next hop 10.3.1.2 include loose
[*LSRA-explicit-path-atoe] next hop 10.4.1.2 include loose
[*LSRA-explicit-path-atoe] commit
- Configure MPLS TE attributes for links.
# Set the maximum reservable bandwidth and BC0 bandwidth for links on LSRA.
[~LSRA] interface gigabitethernet 0/1/0
[~LSRA-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRA-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*LSRA-GigabitEthernet0/1/0] commit
[~LSRA-GigabitEthernet0/1/0] quit
# Set the maximum reservable bandwidth and BC0 bandwidth for links on LSRB.
[~LSRB] interface gigabitethernet 0/1/8
[~LSRB-GigabitEthernet0/1/8] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRB-GigabitEthernet0/1/8] mpls te bandwidth bc0 100000
[*LSRB-GigabitEthernet0/1/8] commit
[~LSRB-GigabitEthernet0/1/8] quit
# Set the maximum reservable bandwidth and BC0 bandwidth for links on LSRC.
[~LSRC] interface gigabitethernet 0/1/0
[~LSRC-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRC-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*LSRC-GigabitEthernet0/1/0] commit
[~LSRC-GigabitEthernet0/1/0] quit
# Set the maximum reservable bandwidth and BC0 bandwidth for links on LSRD.
[~LSRD] interface gigabitethernet 0/1/8
[~LSRD-GigabitEthernet0/1/8] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRD-GigabitEthernet0/1/8] mpls te bandwidth bc0 100000
[*LSRD-GigabitEthernet0/1/8] commit
[~LSRD-GigabitEthernet0/1/8] quit
- Configure an MPLS TE tunnel.
# Configure the MPLS TE tunnel on LSRA.
[~LSRA] interface tunnel1
[*LSRA-Tunnel1] ip address unnumbered interface loopback 1
[*LSRA-Tunnel1] tunnel-protocol mpls te
[*LSRA-Tunnel1] destination 5.5.5.5
[*LSRA-Tunnel1] mpls te tunnel-id 1
[*LSRA-Tunnel1] mpls te bandwidth ct0 20000
[*LSRA-Tunnel1] mpls te path explicit-path atoe
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit
- Verify the configuration.
After completing the configuration, run the display interface tunnel command on LSRA. The tunnel interface is Up.
[~LSRA] display interface Tunnel
Tunnel1 current state : UP (ifindex: 26)
Line protocol current state : UP
Last line protocol up time : 2012-03-08 04:52:40
Description: Route Port,The Maximum Transmit Unit is 1500
Internet Address is unnumbered, using address of LoopBack1(1.1.1.1/32)
Encapsulation is TUNNEL, loopback not set
Tunnel destination 5.5.5.5
Tunnel up/down statistics 1
Tunnel protocol/transport MPLS/MPLS, ILM is available, primary tunnel id is 0x97, secondary tunnel id is 0x0
Current system time: 2012-03-08 08:33:55
300 seconds output rate 0 bits/sec, 0 packets/sec
0 seconds output rate 0 bits/sec, 0 packets/sec
126 packets output, 34204 bytes
0 output error
18 output drop
Last 300 seconds input utility rate: 0.00%
Last 300 seconds output utility rate: 0.00%
# Run the display mpls te tunnel-interface command on LSRA. Detailed information about the TE tunnel interface is displayed.
[~LSRA] display mpls te tunnel-interface tunnel1
Tunnel Name : Tunnel1 Signalled Tunnel Name: - Tunnel State Desc : CR-LSP is Up Tunnel Attributes : Active LSP : Primary LSP Traffic Switch : - Session ID : 1 Ingress LSR ID : 1.1.1.1 Egress LSR ID: 5.5.5.5 Admin State : UP Oper State : UP Signaling Protocol : RSVP FTid : 1 Tie-Breaking Policy : None Metric Type : None Bfd Cap : None Reopt : Disabled Reopt Freq : - Inter-area Reopt : Disabled Auto BW : Disabled Threshold : 0 percent Current Collected BW: 0 kbps Auto BW Freq : 0 Min BW : 0 kbps Max BW : 0 kbps Offload : Disabled Offload Freq : - Low Value : - High Value : - Readjust Value : - Offload Explicit Path Name: Tunnel Group : - Interfaces Protected: - Excluded IP Address : - Referred LSP Count : 0 Primary Tunnel : - Pri Tunn Sum : - Backup Tunnel : - Group Status : Up Oam Status : - IPTN InLabel : - Tunnel BFD Status : - BackUp LSP Type : None BestEffort : Enabled Secondary HopLimit : - BestEffort HopLimit : - Secondary Explicit Path Name: - Secondary Affinity Prop/Mask: 0x0/0x0 BestEffort Affinity Prop/Mask: 0x0/0x0 IsConfigLspConstraint: - Hot-Standby Revertive Mode: Revertive Hot-Standby Overlap-path: Disabled Hot-Standby Switch State: CLEAR Bit Error Detection: Disabled Bit Error Detection Switch Threshold: - Bit Error Detection Resume Threshold: - Ip-Prefix Name : - P2p-Template Name : - PCE Delegate : No LSP Control Status : Local control Path Verification : -- Entropy Label : None Auto BW Remain Time : 200 s Reopt Remain Time : 100 s Metric Inherit IGP : None Binding Sid : - Reverse Binding Sid : - Self-Ping : Disable Self-Ping Duration : 1800 sec FRR Attr Source : - Is FRR degrade down : No Primary LSP ID : 1.1.1.1:19 LSP State : UP LSP Type : Primary Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : main Hop Limit: - Record Route : Disabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Enabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : EXPLICIT Create Modify LSP Reason: - Self-Ping Status : -
Configuration Files
LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
explicit-path atoe
next hop 10.1.1.2 include loose
next hop 10.2.1.2 include loose
next hop 10.3.1.2 include loose
next hop 10.4.1.2 include loose
#
isis 1
is-level level-1
cost-style wide
traffic-eng level-1
network-entity 00.0005.0000.0000.0001.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 5.5.5.5
mpls te tunnel-id 1
mpls te bandwidth ct0 20000
mpls te path explicit-path atoe
#
return
LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-1-2
cost-style wide
traffic-eng level-1-2
import-route isis level-2 into level-1
network-entity 00.0005.0000.0000.0002.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
return
LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0006.0000.0000.0003.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
LSRD configuration file
#
sysname LSRD
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-1-2
cost-style wide
traffic-eng level-1-2
network-entity 00.0007.0000.0000.0004.00
import-route isis level-2 into level-1
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 10.4.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
return
LSRE configuration file
#
sysname LSRE
#
mpls lsr-id 5.5.5.5
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-1
cost-style wide
traffic-eng level-1
network-entity 00.0007.0000.0000.0005.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.4.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
isis enable 1
#
return
Example for Configuring the Threshold for Flooding Bandwidth Information
Networking Requirements
On the network shown in Figure 1-2247, an RSVP-TE tunnel with a bandwidth of 10 Mbit/s is established between LSRA and LSRD. The maximum reservable bandwidth and BC0 bandwidth of every link are 100 Mbit/s. The RDM is used.
The threshold for flooding bandwidth information is set to 20%. This reduces the number of attempts to flood bandwidth information and saves network resources. If the proportion of the bandwidth used or released by an MPLS TE tunnel to the available bandwidth in the TEDB is greater than or equal to 20%, an IGP floods the bandwidth information, and CSPF updates TEDB information.
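Using the numbers in this example: the available bandwidth in the TEDB is 100000 kbit/s. A tunnel that reserves 10000 kbit/s uses only 10% of this bandwidth, so the IGP does not flood the change and the TEDB still advertises 100000 kbit/s of unreserved bandwidth. When the tunnel bandwidth is increased to 20000 kbit/s, the reserved bandwidth reaches 20% of the available bandwidth, the IGP floods the bandwidth information, and CSPF updates the TEDB (the unreserved bandwidth at the lowest priority drops to 80000 kbit/s). The verification steps below demonstrate this behavior.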
Configuration Roadmap
The configuration roadmap is as follows:
Configure an RSVP-TE tunnel. See "Configuration Roadmap" in Example for Configuring an RSVP-TE Tunnel.
Configure MPLS TE link bandwidth and the threshold for flooding bandwidth information.
Data Preparation
To complete the configuration, you need the following data:
IS-IS process ID, level, and network entity title of every LSR
Maximum reservable bandwidth and BC bandwidth for every link along the TE tunnel
Tunnel interface number, source and destination addresses, tunnel ID, tunnel bandwidth, and RSVP-TE as the signaling protocol
Threshold for flooding bandwidth information
Procedure
- Assign an IP address and its mask to every interface.
Assign an IP address and its mask to every physical interface and configure a loopback interface address as an LSR ID on every node according to Figure 1-2247.
For configuration details, see Configuration Files in this section.
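The following is a minimal sketch of this step on LSRA; the interface and loopback addresses are taken from the LSRA configuration file at the end of this section, and the other nodes are configured in the same way with their own addresses.
[~LSRA] interface gigabitethernet 0/1/0
[~LSRA-GigabitEthernet0/1/0] ip address 10.1.1.1 255.255.255.0
[*LSRA-GigabitEthernet0/1/0] quit
[*LSRA] interface loopback 1
[*LSRA-LoopBack1] ip address 1.1.1.9 255.255.255.255
[*LSRA-LoopBack1] commit
[~LSRA-LoopBack1] quit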
- Configure an IGP.
Configure OSPF or IS-IS on every node to implement connectivity between them. IS-IS is used in this example.
For configuration details, see Configuration Files in this section.
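The following is a minimal IS-IS sketch for LSRA, based on the LSRA configuration file at the end of this section (the wide metric style and TE extensions required by MPLS TE are also enabled here); the other nodes are configured in the same way with their own network entity titles.
[~LSRA] isis 1
[*LSRA-isis-1] is-level level-2
[*LSRA-isis-1] cost-style wide
[*LSRA-isis-1] traffic-eng level-2
[*LSRA-isis-1] network-entity 00.0005.0000.0000.0001.00
[*LSRA-isis-1] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] isis enable 1
[*LSRA-GigabitEthernet0/1/0] quit
[*LSRA] interface loopback 1
[*LSRA-LoopBack1] isis enable 1
[*LSRA-LoopBack1] commit
[~LSRA-LoopBack1] quit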
- Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Enable MPLS, MPLS TE, and RSVP-TE on every LSR and its interfaces along the tunnel, and enable CSPF in the MPLS view of the ingress.
For configuration details, see Configuration Files in this section.
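Taking the ingress LSRA as an example (CSPF is required only on the ingress), a minimal sketch based on the configuration files is as follows; LSRB, LSRC, and LSRD are configured in the same way, without the mpls te cspf command.
[~LSRA] mpls lsr-id 1.1.1.9
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] mpls
[*LSRA-GigabitEthernet0/1/0] mpls te
[*LSRA-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRA-GigabitEthernet0/1/0] commit
[~LSRA-GigabitEthernet0/1/0] quit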
- Set MPLS TE bandwidth for links.
# Set the maximum reservable bandwidth and BC0 bandwidth for a link on every interface along the TE tunnel.
For configuration details, see Configuration Files in this section.
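For example, on the outbound interface of LSRA (values taken from the configuration files; the other interfaces along the TE tunnel are configured in the same way):
[~LSRA] interface gigabitethernet 0/1/0
[~LSRA-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRA-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*LSRA-GigabitEthernet0/1/0] commit
[~LSRA-GigabitEthernet0/1/0] quit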
- Configure the threshold for flooding bandwidth information.
# Set the threshold for flooding bandwidth information to 20% on a physical interface on LSRA. If the proportion of the bandwidth used or released by an MPLS TE tunnel to the available bandwidth in the TEDB is greater than or equal to 20%, an IGP floods the bandwidth information, and CSPF updates TEDB information.
[~LSRA] interface gigabitethernet 0/1/0
[~LSRA-GigabitEthernet0/1/0] mpls te bandwidth change thresholds up 20
[*LSRA-GigabitEthernet0/1/0] mpls te bandwidth change thresholds down 20
[*LSRA-GigabitEthernet0/1/0] commit
[~LSRA-GigabitEthernet0/1/0] quit
Run the display mpls te cspf tedb command on LSRA. TEDB information is displayed.
[~LSRA] display mpls te cspf tedb interface 10.1.1.1
Router ID: 1.1.1.9
IGP Type: ISIS          Process Id: 1
Link[1]:
  ISIS System ID: 0000.0000.0001.00    Opaque LSA ID: 0000.0000.0001.00:00
  Interface IP Address: 10.1.1.1
  DR Address: 10.1.1.1                 DR ISIS System ID: 0000.0000.0001.01
  IGP Area: Level-2
  Link Type: Multi-access              Link Status: Active
  IGP Metric: 10                       TE Metric: 10         Color: 0x0
  Bandwidth Allocation Model : -
  Maximum Link-Bandwidth: 100000 (kbps)
  Maximum Reservable Bandwidth: 100000 (kbps)
  Operational Mode of Router : TE
  Bandwidth Constraints:        Local Overbooking Multiplier:
    BC[0]: 100000 (kbps)          LOM[0]: 1
  BW Unreserved:
    Class ID:
    [0]: 100000 (kbps),        [1]: 100000 (kbps)
    [2]: 100000 (kbps),        [3]: 100000 (kbps)
    [4]: 100000 (kbps),        [5]: 100000 (kbps)
    [6]: 100000 (kbps),        [7]: 100000 (kbps)
- Configure an MPLS TE tunnel.
# Configure a tunnel named Tunnel1 on LSRA.
[~LSRA] interface tunnel1
[*LSRA-Tunnel1] ip address unnumbered interface loopback 1
[*LSRA-Tunnel1] destination 4.4.4.9
[*LSRA-Tunnel1] tunnel-protocol mpls te
[*LSRA-Tunnel1] mpls te bandwidth ct0 10000
[*LSRA-Tunnel1] mpls te tunnel-id 1
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit
After completing the configuration, run the display mpls te tunnel-interface command on LSRA. The tunnel interface is Up.
[~LSRA] display mpls te tunnel-interface tunnel1
Tunnel Name : Tunnel1 Signalled Tunnel Name: - Tunnel State Desc : CR-LSP is Up Tunnel Attributes : Active LSP : Primary LSP Traffic Switch : - Session ID : 1 Ingress LSR ID : 1.1.1.9 Egress LSR ID: 4.4.4.9 Admin State : UP Oper State : UP Signaling Protocol : RSVP FTid : 1 Tie-Breaking Policy : None Metric Type : None Bfd Cap : None Reopt : Disabled Reopt Freq : - Inter-area Reopt : Disabled Auto BW : Disabled Threshold : 0 percent Current Collected BW: 0 kbps Auto BW Freq : 0 Min BW : 0 kbps Max BW : 0 kbps Offload : Disabled Offload Freq : - Low Value : - High Value : - Readjust Value : - Offload Explicit Path Name: Tunnel Group : - Interfaces Protected: - Excluded IP Address : - Referred LSP Count : 0 Primary Tunnel : - Pri Tunn Sum : - Backup Tunnel : - Group Status : Up Oam Status : - IPTN InLabel : - Tunnel BFD Status : - BackUp LSP Type : None BestEffort : Enabled Secondary HopLimit : - BestEffort HopLimit : - Secondary Explicit Path Name: - Secondary Affinity Prop/Mask: 0x0/0x0 BestEffort Affinity Prop/Mask: 0x0/0x0 IsConfigLspConstraint: - Hot-Standby Revertive Mode: Revertive Hot-Standby Overlap-path: Disabled Hot-Standby Switch State: CLEAR Bit Error Detection: Disabled Bit Error Detection Switch Threshold: - Bit Error Detection Resume Threshold: - Ip-Prefix Name : - P2p-Template Name : - PCE Delegate : No LSP Control Status : Local control Path Verification : -- Entropy Label : None Auto BW Remain Time : 200 s Reopt Remain Time : 100 s Metric Inherit IGP : None Binding Sid : - Reverse Binding Sid : - Self-Ping : Disable Self-Ping Duration : 1800 sec FRR Attr Source : - Is FRR degrade down : No Primary LSP ID : 1.1.1.9:19 LSP State : UP LSP Type : Primary Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : - Hop Limit: - Record Route : Disabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Enabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : CSPF Create Modify LSP Reason: - Self-Ping Status : -
Run the display mpls te cspf tedb command on LSRA. The bandwidth information in the TEDB is unchanged, because the 10000 kbit/s reserved for the tunnel is only 10% of the available bandwidth and does not reach the 20% flooding threshold.
[~LSRA] display mpls te cspf tedb interface 10.1.1.1
Router ID: 1.1.1.9
IGP Type: ISIS          Process Id: 1
Link[1]:
  ISIS System ID: 0000.0000.0001.00    Opaque LSA ID: 0000.0000.0001.00:00
  Interface IP Address: 10.1.1.1
  DR Address: 10.1.1.1                 DR ISIS System ID: 0000.0000.0001.01
  IGP Area: Level-2
  Link Type: Multi-access              Link Status: Active
  IGP Metric: 10                       TE Metric: 10         Color: 0x0
  Bandwidth Allocation Model : -
  Maximum Link-Bandwidth: 100000 (kbps)
  Maximum Reservable Bandwidth: 100000 (kbps)
  Operational Mode of Router : TE
  Bandwidth Constraints:        Local Overbooking Multiplier:
    BC[0]: 100000 (kbps)          LOM[0]: 1
  BW Unreserved:
    Class ID:
    [0]: 100000 (kbps),        [1]: 100000 (kbps)
    [2]: 100000 (kbps),        [3]: 100000 (kbps)
    [4]: 100000 (kbps),        [5]: 100000 (kbps)
    [6]: 100000 (kbps),        [7]: 100000 (kbps)
- Verify the configuration.
After completing the configuration, change the bandwidth to 20000 kbit/s.
[~LSRA] interface tunnel1
[~LSRA-Tunnel1] mpls te bandwidth ct0 20000
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit
Run the display mpls te cspf tedb interface 10.1.1.1 command on LSRA. The TEDB information shows that the TE tunnel named Tunnel1 has been re-established with a bandwidth of 20000 kbit/s. This is 20% of the available bandwidth and reaches the threshold for flooding bandwidth information. Therefore, the IGP has flooded the bandwidth information, and the CSPF TEDB has been updated.
[~LSRA] display mpls te cspf tedb interface 10.1.1.1
Router ID: 1.1.1.9
IGP Type: ISIS          Process Id: 1
Link[1]:
  ISIS System ID: 0000.0000.0001.00    Opaque LSA ID: 0000.0000.0001.00:00
  Interface IP Address: 10.1.1.1
  DR Address: 10.1.1.1                 DR ISIS System ID: 0000.0000.0001.01
  IGP Area: Level-2
  Link Type: Multi-access              Link Status: Active
  IGP Metric: 10                       TE Metric: 10         Color: 0x0
  Bandwidth Allocation Model : -
  Maximum Link-Bandwidth: 100000 (kbps)
  Maximum Reservable Bandwidth: 100000 (kbps)
  Operational Mode of Router : TE
  Bandwidth Constraints:        Local Overbooking Multiplier:
    BC[0]: 100000 (kbps)          LOM[0]: 1
  BW Unreserved:
    Class ID:
    [0]: 100000 (kbps),        [1]: 100000 (kbps)
    [2]: 100000 (kbps),        [3]: 100000 (kbps)
    [4]: 100000 (kbps),        [5]: 100000 (kbps)
    [6]: 100000 (kbps),        [7]: 80000 (kbps)
Configuration Files
LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0001.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
mpls te bandwidth change thresholds up 20
mpls te bandwidth change thresholds down 20
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.9
mpls te tunnel-id 1
mpls te bandwidth ct0 20000
#
return
LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.9
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0002.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
mpls te bandwidth change thresholds up 20
mpls te bandwidth change thresholds down 20
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
return
LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0003.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
return
LSRD configuration file
#
sysname LSRD
#
mpls lsr-id 4.4.4.9
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0004.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
#
return
Example for Configuring MPLS TE Manual FRR
Networking Requirements
On the network shown in Figure 1-2248, a primary tunnel is established along the path LSRA -> LSRB -> LSRC -> LSRD. FRR is configured on LSRB to protect traffic on the link between LSRB and LSRC.
A bypass CR-LSP is established over the path LSRB -> LSRE -> LSRC. LSRB is a PLR, and LSRC is an MP.
Explicit paths are used to establish the primary and bypass CR-LSPs. RSVP-TE is used as a signaling protocol.
Configuration Roadmap
The configuration roadmap is as follows:
Configure a primary CR-LSP and enable TE FRR on the tunnel interface of the primary CR-LSP.
Configure a bypass CR-LSP on the PLR (ingress) and specify the interface of the protected link.
Data Preparation
To complete the configuration, you need the following data:
IS-IS area ID, originating system ID, and IS-IS level of each node
Explicit paths for the primary and bypass CR-LSPs
Tunnel interface number, source and destination IP addresses, ID, and RSVP-TE signaling protocol for each of the primary and bypass CR-LSPs
Protected bandwidth and type and number of the interface on the protected link
Procedure
- Assign an IP address and its mask to every interface.
Assign an IP address and its mask to every physical interface and configure a loopback interface address as an LSR ID on every node shown in Figure 1-2248. For configuration details, see Configuration Files in this section.
- Configure an IGP.
Enable IS-IS on all nodes to advertise host routes. For configuration details, see Configuration Files in this section.
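The following is a minimal sketch of the IS-IS configuration on LSRA, based on the configuration files at the end of this section; the other nodes are configured in the same way with their own network entity titles.
[~LSRA] isis 1
[*LSRA-isis-1] is-level level-2
[*LSRA-isis-1] network-entity 00.0005.0000.0000.0001.00
[*LSRA-isis-1] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] isis enable 1
[*LSRA-GigabitEthernet0/1/0] quit
[*LSRA] interface loopback 1
[*LSRA-LoopBack1] isis enable 1
[*LSRA-LoopBack1] commit
[~LSRA-LoopBack1] quit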
After completing the configurations, run the display ip routing-table command on every node. All nodes have learned routes from each other.
- Configure basic MPLS functions and enable MPLS TE, CSPF, RSVP-TE, and IS-IS TE.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] mpls
[*LSRA-GigabitEthernet0/1/0] mpls te
[*LSRA-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRA-GigabitEthernet0/1/0] quit
[*LSRA] isis 1
[*LSRA-isis-1] cost-style wide
[*LSRA-isis-1] traffic-eng level-2
[*LSRA-isis-1] commit
The configurations of LSRB, LSRC, LSRD, and LSRE are similar to the configuration of LSRA. For configuration details, see Configuration Files in this section. CSPF needs to be enabled only on LSRA and LSRB, which are ingress nodes of the primary and bypass CR-LSPs, respectively.
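For example, the global MPLS configuration on LSRB (the PLR, and therefore the ingress of the bypass CR-LSP) is sketched below based on the configuration files; it differs from LSRC, LSRD, and LSRE only in that CSPF is enabled.
[~LSRB] mpls lsr-id 2.2.2.2
[*LSRB] mpls
[*LSRB-mpls] mpls te
[*LSRB-mpls] mpls rsvp-te
[*LSRB-mpls] mpls te cspf
[*LSRB-mpls] commit
[~LSRB-mpls] quit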
- Configure an MPLS TE tunnel on LSRA.
# Configure an explicit path for the primary CR-LSP.
[~LSRA] explicit-path pri-path
[*LSRA-explicit-path-pri-path] next hop 2.1.1.2
[*LSRA-explicit-path-pri-path] next hop 3.1.1.2
[*LSRA-explicit-path-pri-path] next hop 4.1.1.2
[*LSRA-explicit-path-pri-path] next hop 4.4.4.4
[*LSRA-explicit-path-pri-path] quit
# Configure the MPLS TE tunnel for the primary CR-LSP.
[*LSRA] interface tunnel 1
[*LSRA-Tunnel1] ip address unnumbered interface loopback 1
[*LSRA-Tunnel1] tunnel-protocol mpls te
[*LSRA-Tunnel1] destination 4.4.4.4
[*LSRA-Tunnel1] mpls te tunnel-id 1
[*LSRA-Tunnel1] mpls te path explicit-path pri-path
# Enable FRR.
[*LSRA-Tunnel1] mpls te fast-reroute
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit
After completing the configuration, run the display interface tunnel command on LSRA. Tunnel1 is Up.
[~LSRA] display interface tunnel
Tunnel1 current state : UP (ifindex: 20)
Line protocol current state : UP
Last line protocol up time : 2011-05-31 06:30:58
Description: Route Port,The Maximum Transmit Unit is 1500, Current BW: 50Mbps
Internet Address is unnumbered, using address of LoopBack1(1.1.1.1/32)
Encapsulation is TUNNEL, loopback not set
Tunnel destination 4.4.4.4
Tunnel up/down statistics 1
Tunnel protocol/transport MPLS/MPLS, ILM is available, primary tunnel id is 0x321, secondary tunnel id is 0x0
Current system time: 2011-05-31 07:32:31
300 seconds output rate 0 bits/sec, 0 packets/sec
0 seconds output rate 0 bits/sec, 0 packets/sec
126 packets output, 34204 bytes
0 output error
18 output drop
Last 300 seconds input utility rate: 0.00%
Last 300 seconds output utility rate: 0.00%
# Run the display mpls te tunnel-interface command on LSRA. Detailed information about the TE tunnel interface is displayed.
[~LSRA] display mpls te tunnel-interface
Tunnel Name : Tunnel1 Signalled Tunnel Name: - Tunnel State Desc : CR-LSP is Up Tunnel Attributes : Active LSP : Primary LSP Traffic Switch : - Session ID : 1 Ingress LSR ID : 1.1.1.1 Egress LSR ID: 4.4.4.4 Admin State : UP Oper State : UP Signaling Protocol : RSVP FTid : 1 Tie-Breaking Policy : None Metric Type : None Bfd Cap : None Reopt : Disabled Reopt Freq : - Inter-area Reopt : Disabled Auto BW : Disabled Threshold : 0 percent Current Collected BW: 0 kbps Auto BW Freq : 0 Min BW : 0 kbps Max BW : 0 kbps Offload : Disabled Offload Freq : - Low Value : - High Value : - Readjust Value : - Offload Explicit Path Name: Tunnel Group : - Interfaces Protected: - Excluded IP Address : - Referred LSP Count : 0 Primary Tunnel : - Pri Tunn Sum : - Backup Tunnel : - Group Status : Up Oam Status : - IPTN InLabel : - Tunnel BFD Status : - BackUp LSP Type : Hot-Standby BestEffort : Enabled Secondary HopLimit : - BestEffort HopLimit : - Secondary Explicit Path Name: - Secondary Affinity Prop/Mask: 0x0/0x0 BestEffort Affinity Prop/Mask: 0x0/0x0 IsConfigLspConstraint: - Hot-Standby Revertive Mode: Revertive Hot-Standby Overlap-path: Disabled Hot-Standby Switch State: CLEAR Bit Error Detection: Disabled Bit Error Detection Switch Threshold: - Bit Error Detection Resume Threshold: - Ip-Prefix Name : - P2p-Template Name : - PCE Delegate : No LSP Control Status : Local control Path Verification : - Entropy Label : None Auto BW Remain Time : 200 s Reopt Remain Time : 100 s Metric Inherit IGP : None Binding Sid : - Reverse Binding Sid : - Self-Ping : Disable Self-Ping Duration : 1800 sec FRR Attr Source : - Is FRR degrade down : No Primary LSP ID : 1.1.1.1:19 LSP State : UP LSP Type : Primary Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : pri-path Hop Limit: - Record Route : Disabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Enabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : EXPLICIT Create Modify LSP Reason: - Self-Ping Status : - Backup LSP ID : 1.1.1.1:46945 IsBestEffortPath : No LSP State : UP LSP Type : Hot-Standby Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 0 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 0 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : - Hop Limit: - Record Route : Enabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : 
Enabled Reroute Flag : Enabled Pce Flag : Normal Path Setup Type : CSPF Create Modify LSP Reason: - Self-Ping Status : -
- Configure a bypass CR-LSP on LSRB that functions as the PLR.
# Configure an explicit path for the bypass CR-LSP.
[~LSRB] explicit-path by-path
[*LSRB-explicit-path-by-path] next hop 3.2.1.2
[*LSRB-explicit-path-by-path] next hop 3.3.1.2
[*LSRB-explicit-path-by-path] next hop 3.3.3.3
[*LSRB-explicit-path-by-path] quit
# Configure the bypass CR-LSP.
[*LSRB] interface tunnel 3
[*LSRB-Tunnel3] ip address unnumbered interface loopback 1
[*LSRB-Tunnel3] tunnel-protocol mpls te
[*LSRB-Tunnel3] destination 3.3.3.3
[*LSRB-Tunnel3] mpls te tunnel-id 2
[*LSRB-Tunnel3] mpls te path explicit-path by-path
[*LSRB-Tunnel3] mpls te bypass-tunnel
# Bind the bypass CR-LSP to the interface of the protected link.
[*LSRB-Tunnel3] mpls te protected-interface gigabitethernet 0/1/8
[*LSRB-Tunnel3] commit
[~LSRB-Tunnel3] quit
After completing the configuration, run the display interface tunnel command on LSRB. The tunnel named Tunnel3 is Up.
Run the display mpls te tunnel name Tunnel1 verbose command on LSRB. The bypass tunnel is bound to the outbound interface GE 0/1/8 and is not in use.
[~LSRB] display mpls te tunnel name Tunnel1 verbose
No : 1 Tunnel-Name : Tunnel1 Tunnel Interface Name : Tunnel1 TunnelIndex : - Session ID : 1 LSP ID : 95 Lsr Role : Transit Ingress LSR ID : 1.1.1.1 Egress LSR ID : 4.4.4.4 In-Interface : GE0/1/0 Out-Interface : GE0/1/8 Sign-Protocol : RSVP TE Resv Style : SE IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0 IncludeAllAff : 0x0 ER-Hop Table Index : - AR-Hop Table Index: - C-Hop Table Index : - PrevTunnelIndexInSession: - NextTunnelIndexInSession: - PSB Handle : - Created Time : 2012/02/01 04:53:22 -------------------------------- DS-TE Information -------------------------------- Bandwidth Reserved Flag : Reserved CT0 Bandwidth(Kbit/sec) : 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0 Setup-Priority : 7 Hold-Priority : 7 -------------------------------- FRR Information -------------------------------- Primary LSP Info Bypass In Use : Not Used Bypass Tunnel Id : 1 BypassTunnel : Tunnel Index[Tunnel3], InnerLabel[16] Bypass Lsp ID : 8 FrrNextHop : 3.3.1.1 ReferAutoBypassHandle : - FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: - Bypass Attribute Setup Priority : 7 Hold Priority : 7 HopLimit : 32 Bandwidth : 0 IncludeAnyGroup : 0 ExcludeAnyGroup : 0 IncludeAllGroup : 0 Bypass Unbound Bandwidth Info(Kbit/sec) CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: - CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: - CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: - CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: - -------------------------------- BFD Information -------------------------------- NextSessionTunnelIndex : - PrevSessionTunnelIndex: - NextLspId : - PrevLspId : -
- Verify the configuration.
# Shut down the outbound interface of the protected link on the PLR.
[~LSRB] interface gigabitethernet 0/1/8
[~LSRB-GigabitEthernet0/1/8] shutdown
[*LSRB-GigabitEthernet0/1/8] commit
Run the display interface tunnel 1 command on LSRA. The tunnel interface of the primary CR-LSP is still Up.
Run the tracert lsp te tunnel1 command on LSRA. The path through which the primary CR-LSP passes is displayed.
[~LSRA] tracert lsp te tunnel1
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to break.
  TTL   Replier            Time    Type      Downstream
  0                                Ingress   2.1.1.2/[25 ]
  1     2.1.1.2            3       Transit   3.2.1.2/[16 16 ]
  2     3.2.1.2            4       Transit   3.3.1.2/[3 ]
  3     3.3.1.2            4       Transit   4.1.1.2/[3 ]
  4     4.4.4.4            3       Egress
The preceding command output shows that traffic has switched to the bypass CR-LSP.
If the display mpls te tunnel-interface command is run immediately after FRR switching has been performed, two CR-LSPs are displayed as Up. This is because FRR uses the make-before-break mechanism: a new CR-LSP is established before the original CR-LSP is deleted.
Run the display mpls te tunnel name Tunnel1 verbose command on LSRB. The bypass CR-LSP is being used.
[~LSRB] display mpls te tunnel name Tunnel1 verbose
No : 1 Tunnel-Name : Tunnel1 Tunnel Interface Name : Tunnel1 TunnelIndex : - Session ID : 1 LSP ID : 95 Lsr Role : Transit Ingress LSR ID : 1.1.1.1 Egress LSR ID : 4.4.4.4 In-Interface : GE0/1/0 Out-Interface : GE0/1/8 Sign-Protocol : RSVP TE Resv Style : SE IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0 IncludeAllAff : 0x0 ER-Hop Table Index : - AR-Hop Table Index: - C-Hop Table Index : - PrevTunnelIndexInSession: - NextTunnelIndexInSession: - PSB Handle : - Created Time : 2012/02/01 04:53:22 -------------------------------- DS-TE Information -------------------------------- Bandwidth Reserved Flag : Reserved CT0 Bandwidth(Kbit/sec) : 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0 Setup-Priority : 7 Hold-Priority : 7 -------------------------------- FRR Information -------------------------------- Primary LSP Info Bypass In Use : In Use Bypass Tunnel Id : 1 BypassTunnel : Tunnel Index[Tunnel3], InnerLabel[16] Bypass Lsp ID : 8 FrrNextHop : 3.3.3.3 ReferAutoBypassHandle : - FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: - Bypass Attribute Setup Priority : 7 Hold Priority : 7 HopLimit : 32 Bandwidth : 0 IncludeAnyGroup : 0 ExcludeAnyGroup : 0 IncludeAllGroup : 0 Bypass Unbound Bandwidth Info(Kbit/sec) CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: - CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: - CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: - CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: - -------------------------------- BFD Information -------------------------------- NextSessionTunnelIndex : - PrevSessionTunnelIndex: - NextLspId : - PrevLspId : -
# Restart the outbound interface of the protected link on the PLR.
[~LSRB] interface gigabitethernet 0/1/8
[~LSRB-GigabitEthernet0/1/8] undo shutdown
[*LSRB-GigabitEthernet0/1/8] commit
Run the display interface tunnel1 command on LSRA. The tunnel interface of the primary CR-LSP is UP.
After a specified period of time elapses, run the display mpls te tunnel name Tunnel1 verbose command on LSRB. The Bypass In Use field of Tunnel1 is Not Used, indicating that traffic has switched back to the primary CR-LSP over GE 0/1/8.
Configuration Files
LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
explicit-path pri-path
next hop 2.1.1.2
next hop 3.1.1.2
next hop 4.1.1.2
next hop 4.4.4.4
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0001.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 2.1.1.1 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te tunnel-id 1
mpls te record-route label
mpls te path explicit-path pri-path
mpls te fast-reroute
#
return
LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
explicit-path by-path
next hop 3.2.1.2
next hop 3.3.1.2
next hop 3.3.3.3
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0002.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 2.1.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 3.1.1.1 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/16
undo shutdown
ip address 3.2.1.1 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 2
mpls te record-route
mpls te path explicit-path by-path
mpls te bypass-tunnel
mpls te protected-interface GigabitEthernet 0/1/8
#
return
LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0003.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 4.1.1.1 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 3.1.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/16
undo shutdown
ip address 3.3.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
LSRD configuration file
#
sysname LSRD
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0004.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 4.1.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
return
LSRE configuration file
#
sysname LSRE
#
mpls lsr-id 5.5.5.5
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0005.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 3.2.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 3.3.1.1 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
isis enable 1
#
return
Example for Configuring MPLS TE Auto FRR
Networking Requirements
On the network shown in Figure 1-2249, a primary CR-LSP is established over an explicit path LSRA -> LSRB -> LSRC. Bypass CR-LSPs that provide bandwidth protection need to be automatically established on the ingress LSRA and on the transit node LSRB. The bypass CR-LSP on LSRA provides node protection: it bypasses LSRB and terminates at LSRC over the direct link between LSRA and LSRC. The bypass CR-LSP on LSRB provides link protection for the link between LSRB and LSRC: it originates at LSRB's outbound interface, passes through LSRD, and terminates at LSRC.
Configuration Roadmap
The configuration roadmap is as follows:
Configure a primary CR-LSP, and enable TE FRR in the tunnel interface view and MPLS auto FRR in the MPLS view.
Set the protected bandwidth and priorities for the bypass CR-LSP in the tunnel interface view.
Data Preparation
To complete the configuration, you need the following data:
OSPF process ID and OSPF area ID for every node
Path for the primary CR-LSP
Tunnel interface number, source and destination IP addresses of the primary tunnel, tunnel ID, RSVP-TE signaling protocol, and tunnel bandwidth
Procedure
- Assign an IP address and its mask to every interface.
Assign an IP address and its mask to every physical interface and configure a loopback interface address as an LSR ID on every node shown in Figure 1-2249. For configuration details, see Configuration Files in this section.
- Configure OSPF to advertise every network segment route and host route.
Configure OSPF on all nodes to advertise host routes. For configuration details, see Configuration Files in this section.
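A minimal sketch for LSRA, using the network segments from the configuration files at the end of this section (the other nodes advertise their own segments and LSR IDs in the same way):
[~LSRA] ospf 1
[*LSRA-ospf-1] area 0
[*LSRA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*LSRA-ospf-1-area-0.0.0.0] network 2.1.1.0 0.0.0.255
[*LSRA-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*LSRA-ospf-1-area-0.0.0.0] commit
[~LSRA-ospf-1-area-0.0.0.0] quit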
After completing the configurations, run the display ip routing-table command on every node. All nodes have learned routes from one another.
- Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Configure LSRA.
[*LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 0/1/8
[*LSRA-GigabitEthernet0/1/8] mpls
[*LSRA-GigabitEthernet0/1/8] mpls te
[*LSRA-GigabitEthernet0/1/8] mpls rsvp-te
[*LSRA-GigabitEthernet0/1/8] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] mpls
[*LSRA-GigabitEthernet0/1/0] mpls te
[*LSRA-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRA-GigabitEthernet0/1/0] commit
[~LSRA-GigabitEthernet0/1/0] quit
Repeat this step for LSRB, LSRC, and LSRD. For configuration details, see Configuration Files in this section.
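For example, the global part of this step on LSRB is sketched below based on the configuration files; CSPF is also enabled on LSRB because, as a PLR, it computes paths for automatic bypass tunnels. The interface configuration follows the same pattern as on LSRA.
[~LSRB] mpls lsr-id 2.2.2.2
[*LSRB] mpls
[*LSRB-mpls] mpls te
[*LSRB-mpls] mpls rsvp-te
[*LSRB-mpls] mpls te cspf
[*LSRB-mpls] commit
[~LSRB-mpls] quit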
- Configure OSPF TE.
# Configure LSRA.
[~LSRA] ospf
[~LSRA-ospf-1] opaque-capability enable
[*LSRA-ospf-1] area 0
[*LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[*LSRA-ospf-1-area-0.0.0.0] commit
[~LSRA-ospf-1-area-0.0.0.0] quit
[~LSRA-ospf-1] quit
Repeat this step for LSRB, LSRC, and LSRD. For configuration details, see Configuration Files in this section.
- Configure an explicit path for the primary CR-LSP.
[~LSRA] explicit-path master
[*LSRA-explicit-path-master] next hop 2.1.1.1
[*LSRA-explicit-path-master] next hop 3.1.1.1
[*LSRA-explicit-path-master] commit
- Configure TE Auto FRR.
# Configure LSRA.
[~LSRA] mpls
[~LSRA-mpls] mpls te auto-frr
[*LSRA-mpls] commit
# Configure LSRB.
[~LSRB] mpls
[~LSRB-mpls] mpls te auto-frr
[*LSRB-mpls] commit
- Configure a primary tunnel.
[~LSRA] interface tunnel2
[*LSRA-Tunnel2] ip address unnumbered interface loopBack1
[*LSRA-Tunnel2] tunnel-protocol mpls te
[*LSRA-Tunnel2] destination 3.3.3.3
[*LSRA-Tunnel2] mpls te tunnel-id 200
[*LSRA-Tunnel2] mpls te record-route label
[*LSRA-Tunnel2] mpls te path explicit-path master
[*LSRA-Tunnel2] mpls te bandwidth ct0 400
[*LSRA-Tunnel2] mpls te priority 4 3
[*LSRA-Tunnel2] mpls te fast-reroute bandwidth
[*LSRA-Tunnel2] mpls te bypass-attributes bandwidth 200 priority 5 4
[*LSRA-Tunnel2] commit
[~LSRA-Tunnel2] quit
- Verify the configuration.
Run the display mpls te tunnel name Tunnel2 verbose command on LSRA. Information about the primary and bypass CR-LSPs is displayed.
[~LSRA] display mpls te tunnel name Tunnel2 verbose
No : 1 Tunnel-Name : Tunnel2 Tunnel Interface Name : Tunnel2 TunnelIndex : - Session ID : 200 LSP ID : 164 LSR Role : Ingress Ingress LSR ID : 1.1.1.1 Egress LSR ID : 3.3.3.3 In-Interface : - Out-Interface : GE0/1/8 Sign-Protocol : RSVP TE Resv Style : SE IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0 IncludeAllAff : 0x0 ER-Hop Table Index : 1 AR-Hop Table Index: 674 C-Hop Table Index : 579 PrevTunnelIndexInSession: - NextTunnelIndexInSession: - PSB Handle : - Created Time : 2015-01-28 11:10:32 RSVP LSP Type : - -------------------------------- DS-TE Information -------------------------------- Bandwidth Reserved Flag : Reserved CT0 Bandwidth(Kbit/sec) : 400 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0 Setup-Priority : 4 Hold-Priority : 3 -------------------------------- FRR Information -------------------------------- Primary LSP Info Bypass In Use : Not Used Bypass Tunnel Id : 32866 BypassTunnel : Tunnel Index[AutoTunnel32866], InnerLabel[3] Bypass LSP ID : 165 FrrNextHop : 10.1.1.1 ReferAutoBypassHandle : - FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: - Bypass Attribute Setup Priority : 5 Hold Priority : 4 HopLimit : 32 Bandwidth : 200 IncludeAnyGroup : 0 ExcludeAnyGroup : 0 IncludeAllGroup : 0 Bypass Unbound Bandwidth Info(Kbit/sec) CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: - CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: - CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: - CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: - -------------------------------- BFD Information -------------------------------- NextSessionTunnelIndex : - PrevSessionTunnelIndex: - NextLspId : - PrevLspId : -
The primary CR-LSP has been bound to a bypass CR-LSP named AutoTunnel32866.
Run the display mpls te tunnel-interface auto-bypass-tunnel command. Detailed information about the automatic bypass CR-LSP is displayed. Its bandwidth and its setup and holding priorities are the same as the bypass attributes configured for the primary CR-LSP.
[~LSRA] display mpls te tunnel-interface auto-bypass-tunnel AutoTunnel32866
Tunnel Name : AutoTunnel32866 Signalled Tunnel Name: - Tunnel State Desc : CR-LSP is Up Tunnel Attributes : Active LSP : Primary LSP Traffic Switch : - Session ID : 32866 Ingress LSR ID : 1.1.1.1 Egress LSR ID: 3.3.3.3 Admin State : UP Oper State : UP Signaling Protocol : RSVP FTid : 130 Tie-Breaking Policy : None Metric Type : None Bfd Cap : None Reopt : Disabled Reopt Freq : - Inter-area Reopt : Disabled Auto BW : Disabled Threshold : - Current Collected BW: - Auto BW Freq : - Min BW : - Max BW : - Offload : Disabled Offload Freq : - Low Value : - High Value : - Readjust Value : - Offload Explicit Path Name: - Tunnel Group : Primary Interfaces Protected: GigabitEthernet0/1/8 Excluded IP Address : 2.1.1.1 2.1.1.2 2.2.2.2 Referred LSP Count : 1 Primary Tunnel : - Pri Tunn Sum : - Backup Tunnel : - Group Status : Down Oam Status : None IPTN InLabel : - BackUp LSP Type : None BestEffort : Disabled Secondary HopLimit : - BestEffort HopLimit : - Secondary Explicit Path Name: - Secondary Affinity Prop/Mask: 0x0/0x0 BestEffort Affinity Prop/Mask: 0x0/0x0 IsConfigLspConstraint: - Hot-Standby Revertive Mode: Revertive Hot-Standby Overlap-path: Disabled Hot-Standby Switch State: CLEAR Bit Error Detection: Disabled Bit Error Detection Switch Threshold: - Bit Error Detection Resume Threshold: - Ip-Prefix Name : - P2p-Template Name : - PCE Delegate : No LSP Control Status : Local control Path Verification : -- Entropy Label : None Associated Tunnel Group ID: - Associated Tunnel Group Type: - Auto BW Remain Time : 200 s Reopt Remain Time : 100 s Metric Inherit IGP : None Binding Sid : - Reverse Binding Sid : - Self-Ping : Disable Self-Ping Duration : 1800 sec FRR Attr Source : - Is FRR degrade down : No Primary LSP ID : 1.1.1.1:165 LSP State : UP LSP Type : Primary Setup Priority : 5 Hold Priority: 4 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 200 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 200 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : master Hop Limit: - Record Route : Enabled Record Label : Enabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Disabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : EXPLICIT Create Modify LSP Reason: - Self-Ping Status : -
The automatic bypass CR-LSP protects traffic on GE 0/1/8, the outbound interface of the primary CR-LSP, not the other interfaces. The bandwidth is 200 kbit/s, and the setup and holding priority values are 5 and 4, respectively.
Run the display mpls te tunnel path command on LSRA. The bypass CR-LSP is providing both node and bandwidth protection for the primary CR-LSP.
[~LSRA] display mpls te tunnel path
Tunnel Interface Name : Tunnel2
 Lsp ID : 1.1.1.1 :200 :164
 Hop Information
  Hop 0   2.1.1.1   Local-Protection available | bandwidth | node
  Hop 1   2.1.1.2   Label 32846
  Hop 2   2.2.2.2   Label 32846
  Hop 3   3.1.1.1   Local-Protection available | bandwidth
  Hop 4   3.1.1.2   Label 3
  Hop 5   3.3.3.3   Label 3
Tunnel Interface Name : AutoTunnel32866
 Lsp ID : 1.1.1.1 :32866 :165
 Hop Information
  Hop 0   10.1.1.2
  Hop 1   10.1.1.1   Label 3
  Hop 2   3.3.3.3   Label 3
Run the display mpls te tunnel name Tunnel2 verbose command on the transit LSRB. Information about the primary and bypass CR-LSPs is displayed.
[~LSRB] display mpls te tunnel name Tunnel2 verbose
No : 1 Tunnel-Name : Tunnel2 Tunnel Interface Name : - TunnelIndex : - Session ID : 200 LSP ID : 164 LSR Role : Transit Ingress LSR ID : 1.1.1.1 Egress LSR ID : 3.3.3.3 In-Interface : GE0/1/16 Out-Interface : GE0/1/8 Sign-Protocol : RSVP TE Resv Style : SE IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0 IncludeAllAff : 0x0 ER-Hop Table Index : - AR-Hop Table Index: - C-Hop Table Index : - PrevTunnelIndexInSession: - NextTunnelIndexInSession: - PSB Handle : - Created Time : 2015-01-28 11:10:32 RSVP LSP Type : - -------------------------------- DS-TE Information -------------------------------- Bandwidth Reserved Flag : Reserved CT0 Bandwidth(Kbit/sec) : 400 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0 Setup-Priority : 4 Hold-Priority : 3 -------------------------------- FRR Information -------------------------------- Primary LSP Info Bypass In Use : Not Used Bypass Tunnel Id : 32865 BypassTunnel : Tunnel Index[AutoTunnel32865], InnerLabel[3] Bypass LSP ID : 6 FrrNextHop : 4.1.1.2 ReferAutoBypassHandle : - FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: - Bypass Attribute Setup Priority : 5 Hold Priority : 4 HopLimit : 32 Bandwidth : 200 IncludeAnyGroup : 0 ExcludeAnyGroup : 0 IncludeAllGroup : 0 Bypass Unbound Bandwidth Info(Kbit/sec) CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: - CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: - CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: - CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: - -------------------------------- BFD Information -------------------------------- NextSessionTunnelIndex : - PrevSessionTunnelIndex: - NextLspId : - PrevLspId : -
The primary CR-LSP has been bound to a bypass CR-LSP named AutoTunnel32865.
Run the display mpls te tunnel-interface auto-bypass-tunnel command. Detailed information about the automatic bypass CR-LSP is displayed. Its bandwidth and its setup and holding priorities are the same as the bypass attributes configured for the primary CR-LSP.
[~LSRB] display mpls te tunnel-interface auto-bypass-tunnel AutoTunnel32865
Tunnel Name : AutoTunnel32865 Signalled Tunnel Name: - Tunnel State Desc : CR-LSP is Up Tunnel Attributes : Active LSP : Primary LSP Traffic Switch : - Session ID : 32865 Ingress LSR ID : 2.2.2.2 Egress LSR ID: 3.3.3.3 Admin State : UP Oper State : UP Signaling Protocol : RSVP FTid : 97 Tie-Breaking Policy : None Metric Type : None Bfd Cap : None Reopt : Disabled Reopt Freq : - Inter-area Reopt : Disabled Auto BW : Disabled Threshold : - Current Collected BW: - Auto BW Freq : - Min BW : - Max BW : - Offload : Disabled Offload Freq : - Low Value : - High Value : - Readjust Value : - Offload Explicit Path Name: - Tunnel Group : Primary Interfaces Protected: GigabitEthernet0/1/8 Excluded IP Address : 3.1.1.1 3.1.1.2 Referred LSP Count : 1 Primary Tunnel : - Pri Tunn Sum : - Backup Tunnel : - Group Status : Down Oam Status : None IPTN InLabel : - BackUp LSP Type : None BestEffort : Disabled Secondary HopLimit : - BestEffort HopLimit : - Secondary Explicit Path Name: - Secondary Affinity Prop/Mask: 0x0/0x0 BestEffort Affinity Prop/Mask: 0x0/0x0 IsConfigLspConstraint: - Hot-Standby Revertive Mode: Revertive Hot-Standby Overlap-path: Disabled Hot-Standby Switch State: CLEAR Bit Error Detection: Disabled Bit Error Detection Switch Threshold: - Bit Error Detection Resume Threshold: - Ip-Prefix Name : - P2p-Template Name : - PCE Delegate : No LSP Control Status : Local control Path Verification : - Entropy Label : None Associated Tunnel Group ID: - Associated Tunnel Group Type: - Auto BW Remain Time : 200 s Reopt Remain Time : 100 s Metric Inherit IGP : None Binding Sid : - Reverse Binding Sid : - Self-Ping : Disable Self-Ping Duration : 1800 sec FRR Attr Source : - Is FRR degrade down : No Primary LSP ID : 2.2.2.2:6 LSP State : UP LSP Type : Primary Setup Priority : 5 Hold Priority: 4 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 200 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 200 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : - Hop Limit: - Record Route : Enabled Record Label : Enabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Disabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : CSPF Create Modify LSP Reason: - Self-Ping Status : -
The automatic bypass CR-LSP protects traffic on GE 0/1/8, the outbound interface of the primary CR-LSP. The bandwidth is 200 kbit/s, and the setup and holding priority values are 5 and 4, respectively.
Run the display mpls te tunnel path command on LSRB. Information about the path of both primary CR-LSP and automatic bypass CR-LSP is displayed.
[~LSRB] display mpls te tunnel path
Tunnel Interface Name : Tunnel2
 Lsp ID : 1.1.1.1 :200 :164
 Hop Information
  Hop 0   1.1.1.1
  Hop 1   2.1.1.1   Local-Protection available | bandwidth | node
  Hop 2   2.1.1.2   Label 32846
  Hop 3   2.2.2.2   Label 32846
  Hop 4   3.1.1.1   Local-Protection available | bandwidth
  Hop 5   3.1.1.2   Label 3
  Hop 6   3.3.3.3   Label 3
Tunnel Interface Name : AutoTunnel32865
 Lsp ID : 2.2.2.2 :32865 :6
 Hop Information
  Hop 0   3.2.1.1
  Hop 1   3.2.1.2   Label 32839
  Hop 2   4.4.4.4   Label 32839
  Hop 3   4.1.1.1
  Hop 4   4.1.1.2   Label 3
  Hop 5   3.3.3.3   Label 3
Configuration Files
LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te auto-frr
mpls te cspf
mpls rsvp-te
#
explicit-path master
next hop 2.1.1.1
next hop 3.1.1.1
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 10.1.1.0 0.0.0.255
network 2.1.1.0 0.0.0.255
network 1.1.1.1 0.0.0.0
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 2.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 200
mpls te record-route label
mpls te priority 4 3
mpls te bandwidth ct0 400
mpls te path explicit-path master
mpls te fast-reroute bandwidth
mpls te bypass-attributes bandwidth 200 priority 5 4
#
return
LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls te auto-frr
mpls te cspf
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 3.1.1.0 0.0.0.255
network 3.2.1.0 0.0.0.255
network 2.1.1.0 0.0.0.255
network 2.2.2.2 0.0.0.0
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 3.2.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 3.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/1/16
undo shutdown
ip address 2.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
return
LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 10.1.1.0 0.0.0.255
network 3.1.1.0 0.0.0.255
network 4.1.1.0 0.0.0.255
network 3.3.3.3 0.0.0.0
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 4.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/1/16
undo shutdown
ip address 3.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
return
LSRD configuration file
#
sysname LSRD
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 3.2.1.0 0.0.0.255
network 4.1.1.0 0.0.0.255
network 4.4.4.4 0.0.0.0
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 4.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/1/16
undo shutdown
ip address 3.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
return
Example for Configuring MPLS Detour FRR
This section provides an example for configuring MPLS detour FRR on an RSVP-TE tunnel.
Networking Requirements
Traffic engineering (TE) fast reroute (FRR) provides local link and node protection for MPLS TE tunnels. If a link or node fails, traffic is rapidly switched to a backup path, minimizing traffic loss. TE FRR works in either facility backup or one-to-one backup mode. TE FRR in one-to-one backup mode is also called MPLS detour FRR. MPLS detour FRR automatically creates a detour LSP on each eligible node along the primary CR-LSP to protect the downstream link or node. This mode is easy to configure, does not require manual path planning, and provides flexibility on complex networks.
Figure 1-2250 shows a primary RSVP-TE tunnel along the path LSRA -> LSRC -> LSRE. To improve tunnel reliability, MPLS detour FRR must be configured.
For information about how to configure TE FRR in facility backup mode, see Example for Configuring MPLS TE Manual FRR and Example for Configuring MPLS TE Auto FRR.
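Unlike facility backup, which relies on separately established bypass tunnels, one-to-one backup only needs to be enabled on the tunnel interface of the ingress; detour LSPs are then signaled automatically. A minimal sketch, using the tunnel interface created later in this example, is:
[~LSRA] interface Tunnel1
[~LSRA-Tunnel1] mpls te detour
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit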
Configuration Notes
The facility backup and one-to-one backup modes are mutually exclusive on the same TE tunnel interface. If both modes are configured, the mode configured later overrides the earlier one.
The shared explicit (SE) style must be used for the MPLS detour FRR-enabled tunnel.
CSPF must be enabled on each node along both the primary and backup RSVP-TE tunnels.
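For example, if facility backup has already been enabled on a tunnel interface, configuring one-to-one backup afterwards replaces it, as stated in the first note. A sketch of this behavior, assuming Tunnel1 already exists, is:
[~LSRA] interface Tunnel1
[~LSRA-Tunnel1] mpls te fast-reroute
[*LSRA-Tunnel1] mpls te detour
[*LSRA-Tunnel1] commit
After the commit, only the one-to-one backup (detour) configuration takes effect on Tunnel1.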
Configuration Roadmap
The configuration roadmap is as follows:
Configure an RSVP-TE tunnel.
Enable MPLS detour FRR on an RSVP-TE tunnel interface.
Data Preparation
To complete the configuration, you need the following data:
IP addresses of interfaces
IGP protocol (IS-IS), process ID (1), system ID (converted using loopback1 address), and IS-IS level (level-2)
LSR ID (loopback interface address) of every MPLS node
Tunnel interface name (Tunnel1), tunnel IP address (loopback interface IP address), tunnel ID (100), and destination IP address (5.5.5.5)
Procedure
- Assign an IP address and a mask to each interface.
Assign an IP address to each interface and create a loopback interface on each node. For configuration details, see Configuration Files in this section.
- Configure IS-IS to advertise the route to each network segment to which each interface is connected and to advertise the host route to each loopback address that is used as an LSR ID.
Configure IS-IS on each node to implement network layer connectivity. For configuration details, see Configuration Files in this section.
- Enable MPLS, MPLS TE, MPLS RSVP-TE, and CSPF globally on each node.
# Configure LSRA.
<LSRA> system-view
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] commit
[~LSRA-mpls] quit
Repeat this step for LSRB, LSRC, LSRD, LSRE, and LSRF. For configuration details, see Configuration Files in this section.
- Enable IGP TE on each node.
# Configure LSRA.
[~LSRA] isis 1
[~LSRA-isis-1] cost-style wide
[*LSRA-isis-1] traffic-eng level-2
[*LSRA-isis-1] commit
[~LSRA-isis-1] quit
Repeat this step for LSRB, LSRC, LSRD, LSRE, and LSRF. For configuration details, see Configuration Files in this section.
- Enable RSVP-TE on interfaces of each node.
# Configure LSRA.
[~LSRA] interface gigabitethernet 0/1/1
[~LSRA-GigabitEthernet0/1/1] mpls
[*LSRA-GigabitEthernet0/1/1] mpls te
[*LSRA-GigabitEthernet0/1/1] mpls rsvp-te
[*LSRA-GigabitEthernet0/1/1] commit
[~LSRA-GigabitEthernet0/1/1] quit
[~LSRA] interface gigabitethernet 0/1/2
[~LSRA-GigabitEthernet0/1/2] mpls
[*LSRA-GigabitEthernet0/1/2] mpls te
[*LSRA-GigabitEthernet0/1/2] mpls rsvp-te
[*LSRA-GigabitEthernet0/1/2] commit
[~LSRA-GigabitEthernet0/1/2] quit
Repeat this step for LSRB, LSRC, LSRD, LSRE, and LSRF. For configuration details, see Configuration Files in this section.
- Configure an RSVP-TE tunnel interface on LSRA (ingress).
# Configure LSRA.
[~LSRA] interface tunnel 1
[*LSRA-Tunnel1] ip address unnumbered interface loopback 1
[*LSRA-Tunnel1] tunnel-protocol mpls te
[*LSRA-Tunnel1] mpls te tunnel-id 100
[*LSRA-Tunnel1] destination 5.5.5.5
- Enable MPLS detour FRR on an RSVP-TE tunnel interface.
# Configure LSRA.
[*LSRA-Tunnel1] mpls te detour
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit
- Verify the configuration.
After completing the configurations, run the display mpls te tunnel command on LSRA to view detour LSP information.
[~LSRA] display mpls te tunnel
* means the LSP is detour LSP
-------------------------------------------------------------------------------
Ingress LsrId   Destination     LSPID  In/OutLabel      R  Tunnel-name
-------------------------------------------------------------------------------
1.1.1.1         5.5.5.5         25     -/32832          I  Tunnel1
1.1.1.1         5.5.5.5         25     *-/32831         I  Tunnel1
-------------------------------------------------------------------------------
R: Role, I: Ingress, T: Transit, E: Egress
Run the display mpls te tunnel path command on LSRA to view the primary CR-LSP and detour LSP information. The command output shows that a detour LSP has been established to provide node protection on LSRA, and another detour LSP has been established to provide link protection on LSRC.
[~LSRA] display mpls te tunnel path
Tunnel Interface Name : Tunnel1
Lsp ID : 1.1.1.1 :100 :25
Hop Information
 Hop 0   10.1.1.1   Local-Protection available | node
 Hop 1   10.1.1.2   Label 32832
 Hop 2   3.3.3.3    Label 32832
 Hop 3   10.1.3.1   Local-Protection available
 Hop 4   10.1.3.2   Label 3
 Hop 5   5.5.5.5    Label 3

Tunnel Interface Name : Tunnel1
Lsp ID : 1.1.1.1 :100 :25
Detour Lsp PLR ID : 10.1.2.1
Hop Information
 Hop 0   10.1.2.1
 Hop 1   10.1.2.2   Label 32831
 Hop 2   2.2.2.2    Label 32831
 Hop 3   10.1.6.1
 Hop 4   10.1.6.2   Label 32832
 Hop 5   4.4.4.4    Label 32832
 Hop 6   10.1.7.1
 Hop 7   10.1.7.2   Label 32831
 Hop 8   6.6.6.6    Label 32831
 Hop 9   10.1.5.2
 Hop 10  10.1.5.1   Label 3
 Hop 11  5.5.5.5    Label 3
Configuration Files
LSRA configuration file
# sysname LSRA # mpls lsr-id 1.1.1.1 # mpls mpls te mpls rsvp-te mpls te cspf # isis 1 is-level level-2 cost-style wide network-entity 10.0001.0010.0100.1001.00 traffic-eng level-2 # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.1.1 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/2 undo shutdown ip address 10.1.2.1 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 isis enable 1 # interface Tunnel1 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 5.5.5.5 mpls te record-route label mpls te detour mpls te tunnel-id 100 # return
LSRB configuration file
# sysname LSRB # mpls lsr-id 2.2.2.2 # mpls mpls te mpls rsvp-te mpls te cspf # isis 1 is-level level-2 cost-style wide network-entity 10.0001.0020.0200.2002.00 traffic-eng level-2 # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.2.2 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/2 undo shutdown ip address 10.1.6.1 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 isis enable 1 # return
LSRC configuration file
# sysname LSRC # mpls lsr-id 3.3.3.3 # mpls mpls te mpls rsvp-te mpls te cspf # isis 1 is-level level-2 cost-style wide network-entity 10.0001.0030.0300.3003.00 traffic-eng level-2 # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.1.2 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/2 undo shutdown ip address 10.1.4.1 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/3 undo shutdown ip address 10.1.3.1 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 isis enable 1 # return
LSRD configuration file
# sysname LSRD # mpls lsr-id 4.4.4.4 # mpls mpls te mpls rsvp-te mpls te cspf # isis 1 is-level level-2 cost-style wide network-entity 10.0001.0040.0400.4004.00 traffic-eng level-2 # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.6.2 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/2 undo shutdown ip address 10.1.4.2 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/3 undo shutdown ip address 10.1.7.1 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 isis enable 1 # return
LSRE configuration file
# sysname LSRE # mpls lsr-id 5.5.5.5 # mpls mpls te mpls rsvp-te mpls te cspf # isis 1 is-level level-2 cost-style wide network-entity 10.0001.0050.0500.5005.00 traffic-eng level-2 # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.3.2 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/2 undo shutdown ip address 10.1.5.1 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 5.5.5.5 255.255.255.255 isis enable 1 # return
LSRF configuration file
# sysname LSRF # mpls lsr-id 6.6.6.6 # mpls mpls te mpls rsvp-te mpls te cspf # isis 1 is-level level-2 cost-style wide network-entity 10.0001.0060.0600.6006.00 traffic-eng level-2 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.1.5.2 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.7.2 255.255.255.0 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 6.6.6.6 255.255.255.255 isis enable 1 # return
Example for Configuring a Hot-Standby CR-LSP
Networking Requirements
Figure 1-2251 illustrates an MPLS VPN network. A TE tunnel is established from PE1 to PE2. A hot-standby CR-LSP and a best-effort path are configured. The networking is as follows:
The primary CR-LSP is along the path PE1 -> P1 -> PE2.
The hot-standby CR-LSP is along the path PE1 -> P2 -> PE2.
If the primary CR-LSP fails, traffic switches to the backup CR-LSP. After the primary CR-LSP recovers, traffic switches back to it after a 15-second delay. If both the primary and backup CR-LSPs fail, traffic switches to the best-effort path. Explicit paths can be configured for the primary and backup CR-LSPs, whereas the best-effort path is generated automatically. In this example, the best-effort path is PE1 -> P2 -> P1 -> PE2. The calculated best-effort path varies depending on which node is faulty.
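The behavior described above maps to three commands on the tunnel interface of PE1, which the procedure below configures step by step. A minimal sketch, using the explicit-path name and WTR value from this example, is:
[~PE1] interface Tunnel1
[~PE1-Tunnel1] mpls te backup hot-standby mode revertive wtr 15
[*PE1-Tunnel1] mpls te path explicit-path backup secondary
[*PE1-Tunnel1] mpls te backup ordinary best-effort
[*PE1-Tunnel1] commit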
Configuration Roadmap
The configuration roadmap is as follows:
Assign an IP address to every interface and configure an IGP to implement connectivity.
Configure basic MPLS and MPLS TE functions.
Configure explicit paths on PE1 for the primary and hot-standby CR-LSPs.
Create a tunnel destined for PE2; specify explicit paths; enable hot standby; configure a best-effort path; set the switchback delay time to 15 seconds on PE1.
Data Preparation
To complete the configuration, you need the following data:
IGP type and data
MPLS LSR IDs
Tunnel interface number and bandwidth
Explicit paths for the primary and hot-standby CR-LSPs
Procedure
- Assign an IP address and its mask to every interface.
Assign an IP address and its mask to every interface and configure a loopback interface address as an LSR ID on every node. For configuration details, see Configuration Files in this section.
- Configure an IGP.
Configure OSPF or IS-IS on every node to implement connectivity between them. IS-IS is used in this example. For configuration details, see Configuration Files in this section.
- Configure basic MPLS functions.
Set an LSR ID in the system view, and enable MPLS in the system and interface views on every node. For configuration details, see Configuration Files in this section.
- Configure basic MPLS TE functions.
Enable MPLS TE and RSVP-TE in the MPLS and interface views on every node. For configuration details, see Configuration Files in this section.
- Configure IS-IS TE and CSPF.
Configure IS-IS TE on all nodes and enable CSPF on PE1. For configuration details, see Configuring an RSVP-TE Tunnel.
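This step is not shown inline in this example. Based on the configuration files in this section, a minimal sketch on PE1 is as follows (the other nodes are configured similarly, and CSPF is required only on the ingress PE1):
[~PE1] isis 1
[~PE1-isis-1] cost-style wide
[*PE1-isis-1] traffic-eng level-1-2
[*PE1-isis-1] commit
[~PE1-isis-1] quit
[~PE1] mpls
[~PE1-mpls] mpls te cspf
[*PE1-mpls] commit
[~PE1-mpls] quit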
- Configure explicit paths for the primary and hot-standby CR-LSPs.
# Configure an explicit path for the primary CR-LSP on PE1.
<PE1> system-view
[~PE1] explicit-path main
[*PE1-explicit-path-main] next hop 10.4.1.2
[*PE1-explicit-path-main] next hop 10.2.1.2
[*PE1-explicit-path-main] next hop 3.3.3.3
[*PE1-explicit-path-main] quit
# Configure an explicit path for the hot-standby CR-LSP on PE1.
[*PE1] explicit-path backup
[*PE1-explicit-path-backup] next hop 10.3.1.2
[*PE1-explicit-path-backup] next hop 10.5.1.2
[*PE1-explicit-path-backup] next hop 3.3.3.3
[*PE1-explicit-path-backup] commit
[~PE1-explicit-path-backup] quit
# After completing the configurations, run the display explicit-path command on PE1. Information about the explicit paths for the primary and hot-standby CR-LSPs is displayed.
[~PE1] display explicit-path main
Path Name : main Path Status : Enabled
1 10.4.1.2 Strict Include
2 10.2.1.2 Strict Include
3 3.3.3.3 Strict Include
[~PE1] display explicit-path backup
Path Name : backup Path Status : Enabled
1 10.3.1.2 Strict Include
2 10.5.1.2 Strict Include
3 3.3.3.3 Strict Include
- Configure tunnel interfaces.
# Create a tunnel interface on PE1 and specify the explicit path for the primary CR-LSP.
[~PE1] interface tunnel1
[*PE1-Tunnel1] ip address unnumbered interface loopback 1
[*PE1-Tunnel1] tunnel-protocol mpls te
[*PE1-Tunnel1] destination 3.3.3.3
[*PE1-Tunnel1] mpls te tunnel-id 502
[*PE1-Tunnel1] mpls te path explicit-path main
# Configure hot standby on the tunnel interface; set the switchback delay time to 15 seconds; specify an explicit path; configure a best-effort path.
[*PE1-Tunnel1] mpls te backup hot-standby mode revertive wtr 15
[*PE1-Tunnel1] mpls te path explicit-path backup secondary
[*PE1-Tunnel1] mpls te backup ordinary best-effort
[*PE1-Tunnel1] commit
[~PE1-Tunnel1] quit
After completing the configurations, run the display mpls te tunnel-interface tunnel1 command on PE1. Both the primary and hot-standby CR-LSPs have been established.
[~PE1] display mpls te tunnel-interface tunnel1
Tunnel Name : Tunnel1 Signalled Tunnel Name: - Tunnel State Desc : Primary CR-LSP Up and HotBackup CR-LSP Up Tunnel Attributes : Active LSP : Primary LSP Traffic Switch : - Session ID : 502 Ingress LSR ID : 4.4.4.4 Egress LSR ID: 3.3.3.3 Admin State : UP Oper State : UP Signaling Protocol : RSVP FTid : 161 Tie-Breaking Policy : None Metric Type : None Bfd Cap : None Reopt : Disabled Reopt Freq : - Inter-area Reopt : Disabled Auto BW : Disabled Threshold : 0 percent Current Collected BW: 0 kbps Auto BW Freq : 0 Min BW : 0 kbps Max BW : 0 kbps Offload : Disabled Offload Freq : - Low Value : - High Value : - Readjust Value : - Offload Explicit Path Name: - Tunnel Group : - Interfaces Protected: - Excluded IP Address : - Referred LSP Count : 0 Primary Tunnel : - Pri Tunn Sum : - Backup Tunnel : - Group Status : - Oam Status : - IPTN InLabel : - Tunnel BFD Status : - BackUp LSP Type : Hot-Standby BestEffort : Enabled Secondary HopLimit : 32 BestEffort HopLimit : - Secondary Explicit Path Name: backup Secondary Affinity Prop/Mask: 0x0/0x0 BestEffort Affinity Prop/Mask: 0x0/0x0 IsConfigLspConstraint: - Hot-Standby Revertive Mode: Revertive Hot-Standby Overlap-path: Disabled Hot-Standby Switch State: CLEAR Bit Error Detection: Disabled Bit Error Detection Switch Threshold: - Bit Error Detection Resume Threshold: - Ip-Prefix Name : - P2p-Template Name : - PCE Delegate : No LSP Control Status : Local control Path Verification : -- Entropy Label : None Associated Tunnel Group ID: - Associated Tunnel Group Type: - Auto BW Remain Time : 200 s Reopt Remain Time : 100 s Metric Inherit IGP : None Binding Sid : - Reverse Binding Sid : - Self-Ping : Disable Self-Ping Duration : 1800 sec FRR Attr Source : - Is FRR degrade down : - Primary LSP ID : 4.4.4.4:424 LSP State : UP LSP Type : Primary Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : main Hop Limit: 32 Record Route : Enabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Enabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : EXPLICIT Create Modify LSP Reason: - Self-Ping Status : - Backup LSP ID : 4.4.4.4:423 IsBestEffortPath : No LSP State : UP LSP Type : Hot-Standby Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : backup Hop Limit: 32 Record Route : Enabled Record Label 
: Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Enabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : EXPLICIT Create Modify LSP Reason: - Self-Ping Status : -
# Run the following command on PE1. Hot standby information is displayed.
[~PE1] display mpls te hot-standby state interface Tunnel1
----------------------------------------------------------------
          Verbose information about the Tunnel1 hot-standby state
----------------------------------------------------------------
  Tunnel Name           : Tunnel1
  Session ID            : 502
  Main LSP index        : 0xC1
  Hot-Standby LSP index : 0xE1
  HSB switch result     : main LSP
  HSB switch reason     : -
  WTR config time       : 15 s
  WTR remain time       : -
  Using overlapped path : no
  Fast switch status    : no
# Run the ping lsp te command. The hot-standby CR-LSP is reachable.
[~PE1] ping lsp te tunnel1 hot-standby
LSP PING FEC: RSVP IPV4 SESSION QUERY Tunnel1 : 100 data bytes, press CTRL_C to break
Reply from 3.3.3.3: bytes=100 Sequence=1 time = 4 ms
Reply from 3.3.3.3: bytes=100 Sequence=2 time = 3 ms
Reply from 3.3.3.3: bytes=100 Sequence=3 time = 3 ms
Reply from 3.3.3.3: bytes=100 Sequence=4 time = 3 ms
Reply from 3.3.3.3: bytes=100 Sequence=5 time = 6 ms
--- FEC: RSVP IPV4 SESSION QUERY Tunnel1 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 3/3/6 ms
# Run the tracert lsp te command on PE1. The path for the hot-standby CR-LSP is reachable.
[~PE1] tracert lsp te tunnel1 hot-standby
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to break.
TTL   Replier            Time     Type       Downstream
0     Ingress                                10.3.1.2/[13313 ]
1     10.3.1.2           90 ms    Transit    10.5.1.2/[3 ]
2     3.3.3.3            130 ms   Egress
- Verify the configuration.
Connect port 1 and port 2 on a tester to PE1 and PE2, respectively, and set correct label values. Inject MPLS traffic from port 1 to port 2. After the cable is removed from GE 0/1/8 on PE1 or GE 0/1/8 on P1, traffic is restored within milliseconds. Run the display mpls te hot-standby state interface tunnel1 command on PE1. The command output shows that traffic has switched to the hot-standby CR-LSP.
[~PE1] display mpls te hot-standby state interface tunnel1
----------------------------------------------------------------
          Verbose information about the Tunnel1 hot-standby state
----------------------------------------------------------------
  Tunnel Name           : Tunnel1
  Session ID            : 502
  Main LSP index        : 0x0
  Hot-Standby LSP index : 0xE1
  HSB switch result     : hot-standby LSP
  HSB switch reason     : signal fail
  WTR config time       : 10 s
  WTR remain time       : -
  Using overlapped path : no
  Fast switch status    : no
Reinsert the cable into GE 0/1/8 and wait 15 seconds. Traffic then switches back to the primary CR-LSP.
If the cable is removed from GE 0/1/8 on PE1 (or GE 0/1/8 on P1) and from GE 0/1/8 on PE2 (or GE 0/1/8 on P2), both the primary and hot-standby CR-LSPs fail. The tunnel interface goes Down and then Up again, and a best-effort path is established to take over traffic.
[~PE1] display mpls te tunnel-interface tunnel1
Tunnel Name : Tunnel1 Signalled Tunnel Name: - Tunnel State Desc : Backup CR-LSP In use and Primary CR-LSP setting Up Tunnel Attributes : Active LSP : BestEffort LSP Traffic Switch : - Session ID : 502 Ingress LSR ID : 4.4.4.4 Egress LSR ID: 3.3.3.3 Admin State : UP Oper State : UP Signaling Protocol : RSVP FTid : 161 Tie-Breaking Policy : None Metric Type : None Bfd Cap : None Reopt : Disabled Reopt Freq : - Inter-area Reopt : Disabled Auto BW : Disabled Threshold : 0 percent Current Collected BW: 0 kbps Auto BW Freq : 0 Min BW : 0 kbps Max BW : 0 kbps Offload : Disabled Offload Freq : 0 sec Low Value : 0 kbps High Value : 0 kbps Readjust Value : 0 kbps Offload Explicit Path Name: - Tunnel Group : - Interfaces Protected: - Excluded IP Address : - Referred LSP Count : 0 Primary Tunnel : - Pri Tunn Sum : - Backup Tunnel : - Group Status : - Oam Status : - IPTN InLabel : - Tunnel BFD Status : - BackUp LSP Type : BestEffort BestEffort : Enabled Secondary HopLimit : 32 BestEffort HopLimit : - Secondary Explicit Path Name: backup Secondary Affinity Prop/Mask: 0x0/0x0 BestEffort Affinity Prop/Mask: 0x0/0x0 IsConfigLspConstraint: - Hot-Standby Revertive Mode: Revertive Hot-Standby Overlap-path: Disabled Hot-Standby Switch State: CLEAR Bit Error Detection: Disabled Bit Error Detection Switch Threshold: - Bit Error Detection Resume Threshold: - Ip-Prefix Name : - P2p-Template Name : - PCE Delegate : No LSP Control Status : Local control Path Verification : -- Entropy Label : None Associated Tunnel Group ID: - Associated Tunnel Group Type: - Auto BW Remain Time : 200 s Reopt Remain Time : 100 s Metric Inherit IGP : None Binding Sid : - Reverse Binding Sid : - Self-Ping : Disable Self-Ping Duration : 1800 sec FRR Attr Source : - Is FRR degrade down : No Primary LSP ID : 4.4.4.4:436 LSP State : DOWN LSP Type : Primary Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : main Hop Limit: 32 Record Route : Enabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Enabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : EXPLICIT Create Modify LSP Reason: - Self-Ping Status : - Backup LSP ID : 4.4.4.4:440 IsBestEffortPath : No LSP State : DOWN LSP Type : Hot-Standby Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : backup Hop Limit: 32 
Record Route : Enabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Enabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : EXPLICIT Create Modify LSP Reason: - Self-Ping Status : - Backup LSP ID : 4.4.4.4:439 IsBestEffortPath : Yes LSP State : UP LSP Type : BestEffort Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 0 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 0 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : - Hop Limit: - Record Route : Enabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Enabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : CSPF Create Modify LSP Reason: - Self-Ping Status : -
[~PE1] display mpls te tunnel path
Tunnel Interface Name : Tunnel1
Lsp ID : 4.4.4.4 :502 :32776
Hop Information
 Hop 0   10.3.1.1
 Hop 1   10.3.1.2
 Hop 2   2.2.2.2
 Hop 3   10.1.1.2
 Hop 4   10.1.1.1
 Hop 5   1.1.1.1
 Hop 6   10.2.1.1
 Hop 7   10.2.1.2
 Hop 8   3.3.3.3
Configuration Files
PE1 configuration file
#
sysname PE1
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
explicit-path backup
next hop 10.3.1.2
next hop 10.5.1.2
next hop 3.3.3.3
#
explicit-path main
next hop 10.4.1.2
next hop 10.2.1.2
next hop 3.3.3.3
#
isis 1
cost-style wide
traffic-eng level-1-2
network-entity 10.0000.0000.0004.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.3.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 10.4.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 502
mpls te record-route
mpls te backup hot-standby mode revertive wtr 15
mpls te backup ordinary best-effort
mpls te path explicit-path main
mpls te path explicit-path backup secondary
#
return
P1 configuration file
#
sysname P1
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
traffic-eng level-1-2
network-entity 10.0000.0000.0001.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.1.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 10.4.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/16
undo shutdown
ip address 10.2.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
isis enable 1
ip address 1.1.1.1 255.255.255.255
#
return
P2 configuration file
#
sysname P2
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
traffic-eng level-1-2
network-entity 10.0000.0000.0002.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.1.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 10.5.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/16
undo shutdown
ip address 10.3.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
return
PE2 configuration file
#
sysname PE2
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
traffic-eng level-1-2
network-entity 10.0000.0000.0003.00
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.2.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet0/1/8
undo shutdown
ip address 10.5.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
Example for Configuring a Tunnel Protection Group Consisting of Bidirectional Co-routed CR-LSPs
A tunnel protection group provides end-to-end protection for MPLS TE traffic if a network fault occurs. This section provides an example for configuring a tunnel protection group consisting of static bidirectional co-routed CR-LSPs.
Context
A tunnel protection group consists of static bidirectional co-routed CR-LSPs. If the working tunnel fails, forward traffic and reverse traffic are both switched to the protection tunnel, which helps improve network reliability.
On the MPLS network shown in Figure 1-2252, a working tunnel is established over the path LSRA -> LSRB -> LSRC, and a protection tunnel is established over the path LSRA -> LSRD -> LSRC. To ensure that MPLS TE traffic is not interrupted if a fault occurs, configure static bidirectional co-routed CR-LSPs for both working and protection tunnels and combine them into a tunnel protection group.
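As a minimal sketch (the full procedure follows), the protection group itself is created with a single command on the working tunnel interface of each endpoint, referencing the protection tunnel by its tunnel ID:
[~LSRA] interface Tunnel 10
[*LSRA-Tunnel10] mpls te protection tunnel 101 mode revertive wtr 0
[*LSRA-Tunnel10] commit
[~LSRA-Tunnel10] quit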
Configuration Roadmap
The configuration roadmap is as follows:
Assign an IP address to each interface and configure a routing protocol.
Configure basic MPLS functions and enable MPLS TE.
Configure the ingress, transit nodes, and egress for each static bidirectional co-routed CR-LSP.
Configure MPLS TE tunnel interfaces for the working and protection tunnels and bind a specific static bidirectional co-routed CR-LSP to each tunnel interface.
Configure a tunnel protection group.
Configure a detection mechanism to monitor the configured tunnel protection group. MPLS-TP OAM is used in this example.
Data Preparation
To complete the configuration, you need the following data:
Tunnel interface names, tunnel interface IP addresses, destination addresses, tunnel IDs, and tunnel signaling protocol (CR-Static) on LSRA and LSRC
Next-hop address and outgoing label on the ingress
Inbound interface name, next-hop address, and outgoing label on the transit node
Inbound interface name on the egress
Procedure
- Assign an IP address to each interface and configure a routing protocol.
# Assign an IP address and a mask to each interface and configure static routes so that all LSRs can interconnect with each other.
For configuration details, see Configuration Files in this section.
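Because this example uses static routes rather than an IGP, each node needs host routes to the LSR IDs of the other nodes. The following minimal sketch shows the static routes on LSRA, matching the LSRA configuration file in this section:
[~LSRA] ip route-static 2.2.2.2 255.255.255.255 2.1.1.2
[*LSRA] ip route-static 3.3.3.3 255.255.255.255 2.1.1.2
[*LSRA] ip route-static 3.3.3.3 255.255.255.255 4.1.1.2
[*LSRA] ip route-static 4.4.4.4 255.255.255.255 4.1.1.2
[*LSRA] commit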
- Configure basic MPLS functions and enable MPLS TE.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] mpls
[*LSRA-GigabitEthernet0/1/0] mpls te
[*LSRA-GigabitEthernet0/1/0] quit
[*LSRA] interface gigabitethernet 0/1/1
[*LSRA-GigabitEthernet0/1/1] mpls
[*LSRA-GigabitEthernet0/1/1] mpls te
[*LSRA-GigabitEthernet0/1/1] commit
[~LSRA-GigabitEthernet0/1/1] quit
Repeat this step for LSRB, LSRC, and LSRD. For configuration details, see Configuration Files in this section.
- Configure the ingress, transit nodes, and egress for each static bidirectional co-routed CR-LSP.
# Configure LSRA as the ingress on both the working and protection static bidirectional co-routed CR-LSPs.
[~LSRA] bidirectional static-cr-lsp ingress Tunnel10
[*LSRA-bi-static-ingress-Tunnel10] forward nexthop 2.1.1.2 out-label 20
[*LSRA-bi-static-ingress-Tunnel10] backward in-label 20
[*LSRA-bi-static-ingress-Tunnel10] quit
[*LSRA] bidirectional static-cr-lsp ingress Tunnel11
[*LSRA-bi-static-ingress-Tunnel11] forward nexthop 4.1.1.2 out-label 21
[*LSRA-bi-static-ingress-Tunnel11] backward in-label 21
[*LSRA-bi-static-ingress-Tunnel11] commit
[~LSRA-bi-static-ingress-Tunnel11] quit
# Configure LSRB as a transit node on the working static bidirectional co-routed CR-LSP.
[~LSRB] bidirectional static-cr-lsp transit lsp1
[*LSRB-bi-static-transit-lsp1] forward in-label 20 nexthop 3.2.1.2 out-label 40
[*LSRB-bi-static-transit-lsp1] backward in-label 16 nexthop 2.1.1.1 out-label 20
[*LSRB-bi-static-transit-lsp1] commit
[~LSRB-bi-static-transit-lsp1] quit
# Configure LSRD as a transit node on the protection static bidirectional co-routed CR-LSP.
[~LSRD] bidirectional static-cr-lsp transit lsp2
[*LSRD-bi-static-transit-lsp2] forward in-label 21 nexthop 3.4.1.2 out-label 41
[*LSRD-bi-static-transit-lsp2] backward in-label 17 nexthop 4.1.1.1 out-label 21
[*LSRD-bi-static-transit-lsp2] commit
[~LSRD-bi-static-transit-lsp2] quit
# Configure LSRC as the egress on both the working and protection static bidirectional co-routed CR-LSPs.
[~LSRC] bidirectional static-cr-lsp egress lsp1
[*LSRC-bi-static-egress-lsp1] forward in-label 40 lsrid 1.1.1.1 tunnel-id 100
[*LSRC-bi-static-egress-lsp1] backward nexthop 3.2.1.1 out-label 16
[*LSRC-bi-static-egress-lsp1] quit
[*LSRC] bidirectional static-cr-lsp egress lsp2
[*LSRC-bi-static-egress-lsp2] forward in-label 41 lsrid 1.1.1.1 tunnel-id 101
[*LSRC-bi-static-egress-lsp2] backward nexthop 3.4.1.1 out-label 17
[*LSRC-bi-static-egress-lsp2] commit
[~LSRC-bi-static-egress-lsp2] quit
- Configure MPLS TE tunnel interfaces for the working and protection tunnels and bind a specific static bidirectional co-routed CR-LSP to each tunnel interface.
# On LSRA, configure MPLS TE tunnel interfaces named Tunnel 10 and Tunnel 11.
[~LSRA] interface Tunnel 10
[*LSRA-Tunnel10] ip address unnumbered interface loopback 1
[*LSRA-Tunnel10] tunnel-protocol mpls te
[*LSRA-Tunnel10] destination 3.3.3.3
[*LSRA-Tunnel10] mpls te tunnel-id 100
[*LSRA-Tunnel10] mpls te signal-protocol cr-static
[*LSRA-Tunnel10] mpls te bidirectional
[*LSRA-Tunnel10] quit
[*LSRA] interface Tunnel 11
[*LSRA-Tunnel11] ip address unnumbered interface loopback 1
[*LSRA-Tunnel11] tunnel-protocol mpls te
[*LSRA-Tunnel11] destination 3.3.3.3
[*LSRA-Tunnel11] mpls te tunnel-id 101
[*LSRA-Tunnel11] mpls te signal-protocol cr-static
[*LSRA-Tunnel11] mpls te bidirectional
[*LSRA-Tunnel11] commit
[~LSRA-Tunnel11] quit
# On LSRC, configure MPLS TE tunnel interfaces named Tunnel 20 and Tunnel 21.
[~LSRC] interface Tunnel 20
[*LSRC-Tunnel20] ip address unnumbered interface loopback 1
[*LSRC-Tunnel20] tunnel-protocol mpls te
[*LSRC-Tunnel20] destination 1.1.1.1
[*LSRC-Tunnel20] mpls te tunnel-id 200
[*LSRC-Tunnel20] mpls te signal-protocol cr-static
[*LSRC-Tunnel20] mpls te passive-tunnel
[*LSRC-Tunnel20] mpls te binding bidirectional static-cr-lsp egress lsp1
[*LSRC-Tunnel20] quit
[*LSRC] interface Tunnel 21
[*LSRC-Tunnel21] ip address unnumbered interface loopback 1
[*LSRC-Tunnel21] tunnel-protocol mpls te
[*LSRC-Tunnel21] destination 1.1.1.1
[*LSRC-Tunnel21] mpls te tunnel-id 201
[*LSRC-Tunnel21] mpls te signal-protocol cr-static
[*LSRC-Tunnel21] mpls te passive-tunnel
[*LSRC-Tunnel21] mpls te binding bidirectional static-cr-lsp egress lsp2
[*LSRC-Tunnel21] commit
[~LSRC-Tunnel21] quit
- Configure an MPLS TE tunnel protection group.
# On LSRA, configure a tunnel protection group that consists of a working tunnel named Tunnel 10 and its protection tunnel named Tunnel 11.
[~LSRA] interface Tunnel 10
[*LSRA-Tunnel10] mpls te protection tunnel 101 mode revertive wtr 0
[*LSRA-Tunnel10] commit
[~LSRA-Tunnel10] quit
# On LSRC, configure a tunnel protection group that consists of a working tunnel named Tunnel 20 and its protection tunnel named Tunnel 21.
[~LSRC] interface Tunnel 20
[*LSRC-Tunnel20] mpls te protection tunnel 201 mode revertive wtr 0
[*LSRC-Tunnel20] commit
[~LSRC-Tunnel20] quit
- Configure a detection mechanism to monitor the configured tunnel protection group. MPLS-TP OAM is used in this example.
# On LSRA, configure MPLS-TP OAM on Tunnel 10.
[~LSRA] mpls-tp meg abc
[~LSRA-mpls-tp-meg-abc] me te interface Tunnel10 mep-id 1 remote-mep-id 2
[*LSRA-mpls-tp-meg-abc] commit
[~LSRA-mpls-tp-meg-abc] quit
# On LSRC, configure MPLS-TP OAM on Tunnel 20.
[~LSRC] mpls-tp meg abc
[~LSRC-mpls-tp-meg-abc] me te interface Tunnel20 mep-id 2 remote-mep-id 1
[*LSRC-mpls-tp-meg-abc] commit
[~LSRC-mpls-tp-meg-abc] quit
- Verify the configuration.
After completing the configuration, run the display mpls te protection tunnel all verbose command on LSRA. The command output shows that the protection group has been established and that the working tunnel is in use.
# Check the configuration results on LSRA.
[~LSRA] display mpls te protection tunnel all verbose
----------------------------------------------------------------
             Verbose information about the No."1" protection-group
----------------------------------------------------------------
  Work-tunnel id                           : 100
  Protect-tunnel id                        : 101
  Work-tunnel name                         : Tunnel10
  Protect-tunnel name                      : Tunnel11
  Work-tunnel reverse-lsp                  : -
  Protect-tunnel reverse-lsp               : -
  Bridge type                              : 1:1
  Switch type                              : bidirectional
  Switch result                            : work-tunnel
  Tunnel using Best-Effort                 : none
  Tunnel using Ordinary                    : none
  Work-tunnel frr in use                   : none
  Work-tunnel defect state                 : non-defect
  Protect-tunnel defect state              : non-defect
  Work-tunnel forward-lsp defect state     : non-defect
  Protect-tunnel forward-lsp defect state  : non-defect
  Work-tunnel reverse-lsp defect state     : non-defect
  Protect-tunnel reverse-lsp defect state  : non-defect
  HoldOff config time                      : 0ms
  HoldOff remain time                      : -
  WTR config time                          : 0s
  WTR remain time                          : -
  Mode                                     : revertive
  Using same path                          : -
  Local state                              : no request
  Far end request                          : no request
Configuration Files
LSRA configuration file
# sysname LSRA # mpls lsr-id 1.1.1.1 # mpls mpls te # bidirectional static-cr-lsp ingress Tunnel10 forward nexthop 2.1.1.2 out-label 20 backward in-label 20 # bidirectional static-cr-lsp ingress Tunnel11 forward nexthop 4.1.1.2 out-label 21 backward in-label 21 # interface GigabitEthernet0/1/0 undo shutdown ip address 2.1.1.1 255.255.255.0 mpls mpls te # interface GigabitEthernet0/1/1 undo shutdown ip address 4.1.1.1 255.255.255.0 mpls mpls te # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 # interface Tunnel10 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.3 mpls te signal-protocol cr-static mpls te tunnel-id 100 mpls te bidirectional mpls te protection tunnel 101 mode revertive wtr 0 # interface Tunnel11 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.3 mpls te signal-protocol cr-static mpls te tunnel-id 101 mpls te bidirectional # ip route-static 2.2.2.2 255.255.255.255 2.1.1.2 ip route-static 3.3.3.3 255.255.255.255 2.1.1.2 ip route-static 3.3.3.3 255.255.255.255 4.1.1.2 ip route-static 4.4.4.4 255.255.255.255 4.1.1.2 # mpls-tp meg abc me te interface Tunnel10 mep-id 1 remote-mep-id 2 # return
LSRB configuration file
# sysname LSRB # mpls lsr-id 2.2.2.2 # mpls mpls te # bidirectional static-cr-lsp transit lsp1 forward in-label 20 nexthop 3.2.1.2 out-label 40 backward in-label 16 nexthop 2.1.1.1 out-label 20 # interface GigabitEthernet0/1/0 undo shutdown ip address 2.1.1.2 255.255.255.0 mpls mpls te # interface GigabitEthernet0/1/1 undo shutdown ip address 3.2.1.1 255.255.255.0 mpls mpls te # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 # ip route-static 1.1.1.1 255.255.255.255 2.1.1.1 ip route-static 3.3.3.3 255.255.255.255 3.2.1.2 # return
LSRC configuration file
# sysname LSRC # mpls lsr-id 3.3.3.3 # mpls mpls te # bidirectional static-cr-lsp egress lsp1 forward in-label 40 lsrid 1.1.1.1 tunnel-id 100 backward nexthop 3.2.1.1 out-label 16 # bidirectional static-cr-lsp egress lsp2 forward in-label 41 lsrid 1.1.1.1 tunnel-id 101 backward nexthop 3.4.1.1 out-label 17 # interface GigabitEthernet0/1/0 undo shutdown ip address 3.2.1.2 255.255.255.0 mpls mpls te # interface GigabitEthernet0/1/1 undo shutdown ip address 3.4.1.2 255.255.255.0 mpls mpls te # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 # interface Tunnel20 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.1 mpls te signal-protocol cr-static mpls te tunnel-id 200 mpls te passive-tunnel mpls te binding bidirectional static-cr-lsp egress lsp1 mpls te protection tunnel 201 mode revertive wtr 0 # interface Tunnel21 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.1 mpls te signal-protocol cr-static mpls te tunnel-id 201 mpls te passive-tunnel mpls te binding bidirectional static-cr-lsp egress lsp2 # ip route-static 1.1.1.1 255.255.255.255 3.2.1.1 ip route-static 1.1.1.1 255.255.255.255 3.4.1.1 ip route-static 2.2.2.2 255.255.255.255 3.2.1.1 ip route-static 4.4.4.4 255.255.255.255 3.4.1.1 # mpls-tp meg abc me te interface Tunnel20 mep-id 2 remote-mep-id 1 # return
LSRD configuration file
# sysname LSRD # mpls lsr-id 4.4.4.4 # mpls mpls te # bidirectional static-cr-lsp transit lsp2 forward in-label 21 nexthop 3.4.1.2 out-label 41 backward in-label 17 nexthop 4.1.1.1 out-label 21 # interface GigabitEthernet0/1/0 undo shutdown ip address 4.1.1.2 255.255.255.0 mpls mpls te # interface GigabitEthernet0/1/1 undo shutdown ip address 3.4.1.1 255.255.255.0 mpls mpls te # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 # ip route-static 1.1.1.1 255.255.255.255 4.1.1.1 ip route-static 3.3.3.3 255.255.255.255 3.4.1.2 # return
Example for Configuring Isolated LSP Computation
This section provides an example for configuring isolated label switched path (LSP) computation.
Networking Requirements
Isolated primary and hot-standby LSPs are necessary to improve LSP reliability on IP radio access networks (IP RANs) that use Multiprotocol Label Switching (MPLS) Traffic Engineering (TE). The constrained shortest path first (CSPF) algorithm alone does not meet this reliability requirement because it may compute two LSPs that intersect at aggregation nodes. Specifying explicit paths for LSPs can improve reliability, but this method does not adapt to topology changes: each time a node is added to or deleted from the IP RAN, operators must configure new explicit paths, which is time-consuming and laborious. Isolated LSP computation is another method of improving reliability. After this function is configured, the device uses both the disjoint and CSPF algorithms to compute isolated primary and hot-standby LSPs.
Figure 1-2253 illustrates an IP RAN that uses a Resource Reservation Protocol - Traffic Engineering (RSVP-TE) tunnel. Devices on this network use the Open Shortest Path First (OSPF) protocol for communication. The numeral on each link represents the link TE metric. An RSVP-TE tunnel needs to be established between LSRA and LSRF. The constraint-based routed label switched path (CR-LSP) hot standby function needs to be enabled.
Two isolated LSPs exist in this topology: LSRA -> LSRC -> LSRE -> LSRF and LSRA -> LSRB -> LSRD -> LSRF. However, if the disjoint algorithm is not enabled, CSPF computes LSRA -> LSRC -> LSRD -> LSRF as the primary LSP and cannot compute an isolated hot-standby LSP. To improve LSP reliability, configure isolated LSP computation.
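Enabling isolated LSP computation requires only hot standby and the disjoint algorithm on the tunnel interface of the ingress; the disjoint result can then be previewed with the display command used in the verification step. A minimal sketch on LSRA, using the tunnel parameters of this example, is:
[~LSRA] interface Tunnel1
[~LSRA-Tunnel1] mpls te backup hot-standby
[*LSRA-Tunnel1] mpls te cspf disjoint
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit
[~LSRA] display mpls te cspf destination 6.6.6.6 computation-mode disjoint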
Device Name | Interface Name | IP Address and Mask | Device Name | Interface Name | IP Address and Mask
---|---|---|---|---|---
LSRA | Loopback1 | 1.1.1.1/32 | LSRB | Loopback1 | 2.2.2.2/32
 | GE 0/1/0 | 10.1.1.1/24 | | GE 0/1/0 | 10.1.2.2/24
 | GE 0/1/1 | 10.1.2.1/24 | | GE 0/1/1 | 10.1.6.1/24
LSRC | Loopback1 | 3.3.3.3/32 | LSRD | Loopback1 | 4.4.4.4/32
 | GE 0/1/0 | 10.1.1.2/24 | | GE 0/1/0 | 10.1.6.2/24
 | GE 0/1/1 | 10.1.3.1/24 | | GE 0/1/1 | 10.1.3.2/24
 | GE 0/1/2 | 10.1.4.1/24 | | GE 0/1/2 | 10.1.7.1/24
LSRE | Loopback1 | 5.5.5.5/32 | LSRF | Loopback1 | 6.6.6.6/32
 | GE 0/1/0 | 10.1.4.2/24 | | GE 0/1/0 | 10.1.7.2/24
 | GE 0/1/1 | 10.1.5.1/24 | | GE 0/1/1 | 10.1.5.2/24
Configuration Roadmap
The configuration roadmap is as follows:
Assign addresses to all physical and loopback interfaces listed in Table 1-946.
Globally enable OSPF on each device so that OSPF advertises the route to each network segment to which each physical interface is connected and the host route to each loopback interface address. Enable OSPF TE in the area where the devices reside.
Set MPLS label switching router (LSR) IDs for all devices and globally enable MPLS, TE, RSVP-TE, and CSPF.
Enable MPLS, TE, and RSVP-TE on the outbound interfaces of all links along the TE tunnel. Set a TE metric for each link according to Figure 1-2253.
Create a tunnel interface on LSRA and specify the IP address, tunnel protocol, destination address, tunnel ID, and signaling protocol RSVP-TE for the tunnel interface.
Enable the CR-LSP hot standby function and the disjoint algorithm on the tunnel interface.
Data Preparation
To complete the configuration, you need the following data:
IP address for each interface (see Table 1-946.)
OSPF process ID (1) and area ID (0.0.0.0)
TE metric for each link (see Figure 1-2253.)
Loopback interface address for each MPLS LSR ID
Tunnel interface number (Tunnel1), tunnel ID (1), loopback interface address to be borrowed, destination address (6.6.6.6), and signaling protocol (RSVP-TE)
Procedure
- Assign an IP address to each interface.
Assign an IP address to each interface and create a loopback interface on each device, according to Table 1-946. For detailed configurations, see Configuration Files in this section.
- Enable OSPF on each device.
Enable basic OSPF functions and MPLS TE on each device.
# Configure LSRA.
<LSRA> system-view
[~LSRA] ospf 1
[*LSRA-ospf-1] opaque-capability enable
[*LSRA-ospf-1] area 0.0.0.0
[*LSRA-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*LSRA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*LSRA-ospf-1-area-0.0.0.0] network 10.1.2.0 0.0.0.255
[*LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[*LSRA-ospf-1-area-0.0.0.0] commit
[~LSRA-ospf-1-area-0.0.0.0] quit
[~LSRA-ospf-1] quit
Repeat this step for LSRB, LSRC, LSRD, LSRE, and LSRF. For configuration details, see Configuration Files in this section.
- Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
Enable MPLS, MPLS TE, RSVP-TE, and CSPF on each device. Enable MPLS, TE, and RSVP-TE on the outbound interface of each link. Set a TE metric for each link.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 0/1/0
[*LSRA-GigabitEthernet0/1/0] mpls
[*LSRA-GigabitEthernet0/1/0] mpls te
[*LSRA-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRA-GigabitEthernet0/1/0] mpls te metric 1
[*LSRA-GigabitEthernet0/1/0] quit
[*LSRA] interface gigabitethernet 0/1/1
[*LSRA-GigabitEthernet0/1/1] mpls
[*LSRA-GigabitEthernet0/1/1] mpls te
[*LSRA-GigabitEthernet0/1/1] mpls rsvp-te
[*LSRA-GigabitEthernet0/1/1] mpls te metric 10
[*LSRA-GigabitEthernet0/1/1] quit
[*LSRA] commit
Repeat this step for LSRB, LSRC, LSRD, LSRE, and LSRF. For configuration details, see Configuration Files in this section.
- Configure an MPLS TE tunnel interface.
# Configure LSRA.
[~LSRA] interface tunnel1
[*LSRA-Tunnel1] ip address unnumbered interface LoopBack1
[*LSRA-Tunnel1] tunnel-protocol mpls te
[*LSRA-Tunnel1] destination 6.6.6.6
[*LSRA-Tunnel1] mpls te tunnel-id 1
[*LSRA-Tunnel1] mpls te signal-protocol rsvp-te
[*LSRA-Tunnel1] commit
- Configure isolated LSP computation.
Enable the CR-LSP hot standby function and the disjoint algorithm on the tunnel interface.
# Configure LSRA.
[~LSRA-Tunnel1] mpls te backup hot-standby
[*LSRA-Tunnel1] mpls te cspf disjoint
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit
- Verify the configuration.
# Run the display mpls te cspf destination 6.6.6.6 computation-mode disjoint command on LSRA. The command output shows that the primary LSP is LSRA -> LSRC -> LSRE -> LSRF and the hot-standby LSP is LSRA -> LSRB -> LSRD -> LSRF. The two LSPs do not intersect.
[~LSRA] display mpls te cspf destination 6.6.6.6 computation-mode disjoint
Main path for the given constraints is:
 1.1.1.1             Include   LSR-ID
 10.1.1.1            Include
 10.1.1.2            Include
 3.3.3.3             Include   LSR-ID
 10.1.4.1            Include
 10.1.4.2            Include
 5.5.5.5             Include   LSR-ID
 10.1.5.1            Include
 10.1.5.2            Include
 6.6.6.6             Include   LSR-ID
The total metrics of the calculated path is : 16
Hot-standby path for the given constraints is:
 1.1.1.1             Include   LSR-ID
 10.1.2.1            Include
 10.1.2.2            Include
 2.2.2.2             Include   LSR-ID
 10.1.6.1            Include
 10.1.6.2            Include
 4.4.4.4             Include   LSR-ID
 10.1.7.1            Include
 10.1.7.2            Include
 6.6.6.6             Include   LSR-ID
Complete disjoint path computed and the total metrics of the calculated path is : 21
# Run the display mpls te tunnel-interface Tunnel1 and display mpls te tunnel path Tunnel1 commands on LSRA to view information about the primary and hot-standby LSPs.
[~LSRA] display mpls te tunnel-interface Tunnel1
Tunnel Name : Tunnel1 Signalled Tunnel Name: - Tunnel State Desc : CR-LSP is Up Tunnel Attributes : Active LSP : Primary LSP Traffic Switch : - Session ID : 1 Ingress LSR ID : 1.1.1.1 Egress LSR ID: 6.6.6.6 Admin State : UP Oper State : UP Signaling Protocol : RSVP FTid : 1 Tie-Breaking Policy : None Metric Type : None Bfd Cap : None Reopt : Disabled Reopt Freq : - Inter-area Reopt : Disabled Auto BW : Disabled Threshold : 0 percent Current Collected BW: 0 kbps Auto BW Freq : 0 Min BW : 0 kbps Max BW : 0 kbps Offload : Disabled Offload Freq : - Low Value : - High Value : - Readjust Value : - Offload Explicit Path Name: Tunnel Group : - Interfaces Protected: - Excluded IP Address : - Referred LSP Count : 0 Primary Tunnel : - Pri Tunn Sum : - Backup Tunnel : - Group Status : Up Oam Status : - IPTN InLabel : - Tunnel BFD Status : - BackUp LSP Type : Hot-Standby BestEffort : Enabled Secondary HopLimit : - BestEffort HopLimit : - Secondary Explicit Path Name: - Secondary Affinity Prop/Mask: 0x0/0x0 BestEffort Affinity Prop/Mask: 0x0/0x0 IsConfigLspConstraint: - Hot-Standby Revertive Mode: Revertive Hot-Standby Overlap-path: Disabled Hot-Standby Switch State: CLEAR Bit Error Detection: Disabled Bit Error Detection Switch Threshold: - Bit Error Detection Resume Threshold: - Ip-Prefix Name : - P2p-Template Name : - PCE Delegate : No LSP Control Status : Local control Path Verification : -- Entropy Label : None Auto BW Remain Time : 200 s Reopt Remain Time : 100 s Metric Inherit IGP : None Binding Sid : - Reverse Binding Sid : - Self-Ping : Disable Self-Ping Duration : 1800 sec FRR Attr Source : - Is FRR degrade down : No Primary LSP ID : 1.1.1.1:19 LSP State : UP LSP Type : Primary Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 10000 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : - Hop Limit: - Record Route : Disabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Enabled Reroute Flag : Disabled Pce Flag : Normal Path Setup Type : CSPF Create Modify LSP Reason: - Self-Ping Status : - Backup LSP ID : 1.1.1.9:46945 IsBestEffortPath : No LSP State : UP LSP Type : Hot-Standby Setup Priority : 7 Hold Priority: 7 IncludeAll : 0x0 IncludeAny : 0x0 ExcludeAny : 0x0 Affinity Prop/Mask : 0x0/0x0 Resv Style : SE Configured Bandwidth Information: CT0 Bandwidth(Kbit/sec): 0 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Actual Bandwidth Information: CT0 Bandwidth(Kbit/sec): 0 CT1 Bandwidth(Kbit/sec): 0 CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0 CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0 CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0 Explicit Path Name : - Hop Limit: - Record Route : Enabled Record Label : Disabled Route Pinning : Disabled FRR Flag : Disabled IdleTime Remain : - BFD Status : - Soft Preemption : Enabled 
Reroute Flag : Enabled Pce Flag : Normal Path Setup Type : CSPF Create Modify LSP Reason: - Self-Ping Status : -
[~LSRA] display mpls te tunnel path Tunnel1
Tunnel Interface Name : Tunnel1
Lsp ID : 1.1.1.1 :1 :2
Hop Information
 Hop 0   10.1.1.1
 Hop 1   10.1.1.2
 Hop 2   3.3.3.3
 Hop 3   10.1.4.1
 Hop 4   10.1.4.2
 Hop 5   5.5.5.5
 Hop 6   10.1.5.1
 Hop 7   10.1.5.2
 Hop 8   6.6.6.6

Tunnel Interface Name : Tunnel1
Lsp ID : 1.1.1.1 :1 :3
Hop Information
 Hop 0   10.1.2.1
 Hop 1   10.1.2.2
 Hop 2   2.2.2.2
 Hop 3   10.1.6.1
 Hop 4   10.1.6.2
 Hop 5   4.4.4.4
 Hop 6   10.1.7.1
 Hop 7   10.1.7.2
 Hop 8   6.6.6.6
The command outputs show that the computed primary and hot-standby LSPs are the same as the actual primary and hot-standby LSPs, indicating that the device has computed two isolated LSPs.
Configuration Files
LSRA configuration file
# sysname LSRA # mpls lsr-id 1.1.1.1 # mpls mpls te mpls rsvp-te mpls te cspf # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.1 255.255.255.0 mpls mpls te mpls te metric 1 mpls rsvp-te # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.2.1 255.255.255.0 mpls mpls te mpls te metric 10 mpls rsvp-te # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 # interface Tunnel1 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 6.6.6.6 mpls te record-route mpls te backup hot-standby mpls te tunnel-id 1 mpls te cspf disjoint # ospf 1 opaque-capability enable area 0.0.0.0 network 1.1.1.1 0.0.0.0 network 10.1.1.0 0.0.0.255 network 10.1.2.0 0.0.0.255 mpls-te enable # return
LSRB configuration file
# sysname LSRB # mpls lsr-id 2.2.2.2 # mpls mpls te mpls rsvp-te mpls te cspf # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.2.2 255.255.255.0 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.6.1 255.255.255.0 mpls mpls te mpls te metric 10 mpls rsvp-te # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 # ospf 1 opaque-capability enable area 0.0.0.0 network 2.2.2.2 0.0.0.0 network 10.1.2.0 0.0.0.255 network 10.1.6.0 0.0.0.255 mpls-te enable # return
LSRC configuration file
# sysname LSRC # mpls lsr-id 3.3.3.3 # mpls mpls te mpls rsvp-te mpls te cspf # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.2 255.255.255.0 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.3.1 255.255.255.0 mpls mpls te mpls te metric 1 mpls rsvp-te # interface GigabitEthernet0/1/2 undo shutdown ip address 10.1.4.1 255.255.255.0 mpls mpls te mpls te metric 5 mpls rsvp-te # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 # ospf 1 opaque-capability enable area 0.0.0.0 network 3.3.3.3 0.0.0.0 network 10.1.1.0 0.0.0.255 network 10.1.3.0 0.0.0.255 network 10.1.4.0 0.0.0.255 mpls-te enable # return
LSRD configuration file
# sysname LSRD # mpls lsr-id 4.4.4.4 # mpls mpls te mpls rsvp-te mpls te cspf # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.6.2 255.255.255.0 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.3.2 255.255.255.0 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/2 undo shutdown ip address 10.1.7.1 255.255.255.0 mpls mpls te mpls te metric 1 mpls rsvp-te # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 # ospf 1 opaque-capability enable area 0.0.0.0 network 4.4.4.4 0.0.0.0 network 10.1.3.0 0.0.0.255 network 10.1.6.0 0.0.0.255 network 10.1.7.0 0.0.0.255 mpls-te enable # return
LSRE configuration file
# sysname LSRE # mpls lsr-id 5.5.5.5 # mpls mpls te mpls rsvp-te mpls te cspf # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.4.2 255.255.255.0 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.5.1 255.255.255.0 mpls mpls te mpls te metric 10 mpls rsvp-te # interface LoopBack1 ip address 5.5.5.5 255.255.255.255 # ospf 1 opaque-capability enable area 0.0.0.0 network 5.5.5.5 0.0.0.0 network 10.1.4.0 0.0.0.255 network 10.1.5.0 0.0.0.255 mpls-te enable # return
LSRF configuration file
# sysname LSRF # mpls lsr-id 6.6.6.6 # mpls mpls te mpls rsvp-te mpls te cspf # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.7.2 255.255.255.0 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.5.2 255.255.255.0 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 6.6.6.6 255.255.255.255 # ospf 1 opaque-capability enable area 0.0.0.0 network 6.6.6.6 0.0.0.0 network 10.1.5.0 0.0.0.255 network 10.1.7.0 0.0.0.255 mpls-te enable # return
Example for Configuring Static BFD for CR-LSP
Static BFD for CR-LSP enables a device to switch traffic to the backup CR-LSP if the primary CR-LSP fails. After the primary CR-LSP recovers, traffic can switch back from the backup CR-LSP to the primary CR-LSP.
Networking Requirements
The primary CR-LSP is PE1 → P1 → PE2.
The backup CR-LSP is PE1 → P2 → PE2.
Two static BFD sessions are established to monitor the primary and backup CR-LSPs. After the configuration, the following objectives are achieved:
If the primary CR-LSP fails, traffic switches to the backup CR-LSP.
If the primary CR-LSP recovers and the backup CR-LSP fails within the 15-second switchback (WTR) time, traffic switches back to the primary CR-LSP.
Configuration Roadmap
The configuration roadmap is as follows:
Configure CR-LSP hot standby.
- Configure reverse CR-LSPs for a BFD session.
A reverse CR-LSP must be established for each of the primary and hot-standby CR-LSPs.
On PE1, establish two BFD sessions and bind one to the primary CR-LSP and the other to the hot-standby CR-LSP; on PE2, establish two BFD sessions and bind them to the primary and hot-standby CR-LSPs of the reverse tunnel (PE2 → PE1).
Data Preparation
To complete the configuration, you need the following data:
Name of the BFD session
Local and remote discriminators of BFD sessions
Minimum intervals at which BFD packets are sent and received
Other data as described in Example for Configuring a Hot-standby CR-LSP in HUAWEI NetEngine 8000 F1A series Router Configuration Guide - MPLS
Procedure
- Configure CR-LSP hot standby.
For configuration details, see Example for Configuring a Hot-standby CR-LSP in HUAWEI NetEngine 8000 F1A series Router Configuration Guide - MPLS.
- Configure reverse CR-LSPs.
The reverse CR-LSP configuration on PE2 is similar to the forward CR-LSP configuration on PE1. For configuration details, see Configuration Files in this section.
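The key commands of the reverse tunnel on PE2, extracted from the PE2 configuration file at the end of this section, are shown below as a reference sketch. The hot-standby backup on the reverse tunnel ensures that a reverse CR-LSP exists for each of the forward primary and hot-standby CR-LSPs.
interface Tunnel2
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 4.4.4.4
 mpls te record-route
 mpls te backup ordinary best-effort
 mpls te backup hot-standby
 mpls te tunnel-id 502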
- Configure BFD for CR-LSP.
# Establish BFD sessions between PE1 and PE2 to monitor the primary and backup CR-LSPs. Bind the BFD sessions on PE1 to the primary and backup CR-LSPs, and bind the BFD sessions on PE2 to the corresponding reverse CR-LSPs. Set the minimum intervals at which BFD packets are sent and received to 100 milliseconds and the local BFD detection multiplier to 3. The process-pst command enables the BFD sessions on PE1 to report detected faults so that traffic can be switched rapidly.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[*PE1] bfd
[*PE1-bfd] quit
[*PE1] bfd mainlsptope2 bind mpls-te interface tunnel 1 te-lsp
[*PE1-bfd-lsp-session-mainlsptope2] discriminator local 413
[*PE1-bfd-lsp-session-mainlsptope2] discriminator remote 314
[*PE1-bfd-lsp-session-mainlsptope2] min-tx-interval 100
[*PE1-bfd-lsp-session-mainlsptope2] min-rx-interval 100
[*PE1-bfd-lsp-session-mainlsptope2] process-pst
[*PE1-bfd-lsp-session-mainlsptope2] quit
[*PE1] bfd backuplsptope2 bind mpls-te interface tunnel 1 te-lsp backup
[*PE1-bfd-lsp-session-backuplsptope2] discriminator local 423
[*PE1-bfd-lsp-session-backuplsptope2] discriminator remote 324
[*PE1-bfd-lsp-session-backuplsptope2] min-tx-interval 100
[*PE1-bfd-lsp-session-backuplsptope2] min-rx-interval 100
[*PE1-bfd-lsp-session-backuplsptope2] process-pst
[*PE1-bfd-lsp-session-backuplsptope2] commit
[~PE1-bfd-lsp-session-backuplsptope2] quit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[*PE2] bfd
[*PE2-bfd] quit
[*PE2] bfd mainlsptope2 bind mpls-te interface Tunnel2 te-lsp
[*PE2-bfd-lsp-session-mainlsptope2] discriminator local 314
[*PE2-bfd-lsp-session-mainlsptope2] discriminator remote 413
[*PE2-bfd-lsp-session-mainlsptope2] min-tx-interval 100
[*PE2-bfd-lsp-session-mainlsptope2] min-rx-interval 100
[*PE2-bfd-lsp-session-mainlsptope2] quit
[*PE2] bfd backuplsptope2 bind mpls-te interface Tunnel2 te-lsp backup
[*PE2-bfd-lsp-session-backuplsptope2] discriminator local 324
[*PE2-bfd-lsp-session-backuplsptope2] discriminator remote 423
[*PE2-bfd-lsp-session-backuplsptope2] min-tx-interval 100
[*PE2-bfd-lsp-session-backuplsptope2] min-rx-interval 100
[*PE2-bfd-lsp-session-backuplsptope2] commit
[*PE2-bfd-lsp-session-backuplsptope2] quit
# After completing the configuration, run the display bfd session discriminator local-discriminator-value command on PE1 and PE2. The status of BFD sessions is Up.
The following example uses the command output on PE1.
[~PE1] display bfd session discriminator 413
(w): State in WTR
(*): State is invalid
--------------------------------------------------------------------------------
Local  Remote  PeerIpAddr      State  Type       InterfaceName
--------------------------------------------------------------------------------
413    314     3.3.3.3         Up     S_TE_LSP   Tunnel1
--------------------------------------------------------------------------------
[~PE1] display bfd session discriminator 423
(w): State in WTR
(*): State is invalid
--------------------------------------------------------------------------------
Local  Remote  PeerIpAddr      State  Type       InterfaceName
--------------------------------------------------------------------------------
423    324     3.3.3.3         Up     S_TE_LSP   Tunnel1
--------------------------------------------------------------------------------
- Verify the configuration.
Connect port 1 and port 2 on a tester to PE1 and PE2, respectively. Set correct label values and record the label settings of the MPLS packets. Inject MPLS traffic destined for port 2 into port 1. After the cable connected to GE 0/1/8 on PE1 or GE 0/1/8 on P1 is removed, the fault is rectified within milliseconds.
If the cable is reinserted into GE 0/1/8 and the cable connected to GE 0/1/0 on PE1 is then removed within 15 seconds, the fault is also rectified within milliseconds.
Configuration Files
PE1 configuration file
# sysname PE1 # bfd # mpls lsr-id 4.4.4.4 # mpls mpls te mpls rsvp-te mpls te cspf # explicit-path backup next hop 10.3.1.2 next hop 10.5.1.2 next hop 3.3.3.3 # explicit-path main next hop 10.4.1.2 next hop 10.2.1.2 next hop 3.3.3.3 # isis 1 cost-style wide network-entity 10.0000.0000.0004.00 traffic-eng level-1-2 # interface GigabitEthernet0/1/0 ip address 10.3.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/8 ip address 10.4.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 isis enable 1 # interface Tunnel 1 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.3 mpls te tunnel-id 100 mpls te record-route mpls te path explicit-path main mpls te path explicit-path backup secondary mpls te backup hot-standby wtr 15 mpls te backup ordinary best-effort # bfd mainlsptope2 bind mpls-te interface Tunnel1 te-lsp discriminator local 413 discriminator remote 314 min-tx-interval 100 min-rx-interval 100 process-pst # bfd backuplsptope2 bind mpls-te interface Tunnel1 te-lsp backup discriminator local 423 discriminator remote 324 min-tx-interval 100 min-rx-interval 100 process-pst # return
P1 configuration file
# sysname P1 # mpls lsr-id 1.1.1.1 # mpls mpls te mpls rsvp-te # isis 1 cost-style wide network-entity 10.0000.0000.0001.00 traffic-eng level-1-2 # interface GigabitEthernet0/1/0 ip address 10.1.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/8 ip address 10.4.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/16 ip address 10.2.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 isis enable 1 # return
P2 configuration file
# sysname P2 # mpls lsr-id 2.2.2.2 # mpls mpls te mpls rsvp-te # isis 1 cost-style wide network-entity 10.0000.0000.0002.00 traffic-eng level-1-2 # interface GigabitEthernet0/1/0 ip address 10.1.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/8 ip address 10.5.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/16 ip address 10.3.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 isis enable 1 # return
PE2 configuration file
# sysname PE2 # bfd # mpls lsr-id 3.3.3.3 # mpls mpls te mpls rsvp-te mpls te cspf # isis 1 cost-style wide network-entity 10.0000.0000.0003.00 traffic-eng level-1-2 # interface GigabitEthernet0/1/0 ip address 10.2.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/8 ip address 10.5.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 isis enable 1 # interface Tunnel2 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 4.4.4.4 mpls te record-route mpls te backup ordinary best-effort mpls te backup hot-standby mpls te tunnel-id 502 # bfd mainlsptope2 bind mpls-te interface Tunnel2 te-lsp discriminator local 314 discriminator remote 413 min-tx-interval 100 min-rx-interval 100 process-pst # bfd backuplsptope2 bind mpls-te interface Tunnel2 te-lsp backup discriminator local 324 discriminator remote 423 min-tx-interval 100 min-rx-interval 100 process-pst # return
Example for Configuring Dynamic BFD for CR-LSP
This section provides an example for configuring dynamic BFD for CR-LSP on a tunnel for which hot standby and a best-effort LSP are configured.
Networking Requirements
Figure 1-2255 shows the dynamic BFD for CR-LSP networking. A TE tunnel between PE1 and PE2 is established. Hot standby and a best-effort LSP are configured for the TE tunnel. If the primary CR-LSP fails, traffic switches to the backup CR-LSP. After the primary CR-LSP recovers, traffic switches back to the primary CR-LSP after a 15-second delay. If both the primary and backup CR-LSPs fail, traffic switches to the best-effort path.
Dynamic BFD for CR-LSP is required to monitor the primary and backup CR-LSPs. After the configuration, the following objectives should be achieved:
If the primary CR-LSP fails, traffic switches to the backup CR-LSP at the millisecond level.
If the backup CR-LSP fails within 15 seconds after the primary CR-LSP recovers, traffic switches back to the primary CR-LSP.
Interfaces 1 through 3 in this example represent GE 0/1/0, GE 0/1/8, and GE 0/1/16, respectively.
Dynamic BFD configuration is simpler than static BFD configuration. In addition, dynamic BFD reduces the number of BFD sessions and consumes fewer network resources because only a single BFD session needs to be configured on a tunnel interface.
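As an illustration of this difference, the following sketch contrasts the key commands of the two approaches, using only commands that appear in this example and the previous one; it is a summary, not an additional configuration step.
Static BFD for CR-LSP (one session per CR-LSP, with discriminators configured manually on both ends):
bfd mainlsptope2 bind mpls-te interface Tunnel1 te-lsp
 discriminator local 413
 discriminator remote 314
 min-tx-interval 100
 min-rx-interval 100
 process-pst
Dynamic BFD for CR-LSP (a single set of commands on the ingress tunnel interface, plus mpls-passive in the BFD view on the egress; sessions are created and negotiated automatically):
interface Tunnel10
 mpls te bfd enable
 mpls te bfd min-tx-interval 100 min-rx-interval 100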
Configuration Roadmap
The configuration roadmap is as follows:
Configure CR-LSP hot standby according to Example for Configuring a Hot-standby CR-LSP.
Enable BFD on the ingress of the tunnel. Configure MPLS TE BFD. Set the minimum intervals at which BFD packets are sent and received, and the local BFD detection multiplier.
Enable the capability of passively creating BFD sessions on the egress.
Data Preparation
To complete the configuration, you need the following data:
Minimum intervals at which BFD packets are sent and received on the ingress
Local BFD detection multiplier
For other data, see Example for Configuring a Hot-standby CR-LSP.
Procedure
- Configure CR-LSP hot standby.
Configure the primary CR-LSP, hot-standby CR-LSP, and best-effort LSP based on Example for Configuring a Hot-standby CR-LSP.
- Enable BFD on the ingress of the tunnel and configure MPLS TE BFD.
# Enable MPLS TE BFD on the tunnel interface of PE1. Set the minimum intervals at which BFD packets are sent and received to 100 milliseconds and the local BFD detection multiplier to 3.
<PE1> system-view
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] interface Tunnel 10
[*PE1-Tunnel10] mpls te bfd enable
[*PE1-Tunnel10] mpls te bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3
[*PE1-Tunnel10] commit
- Enable the capability of passively creating BFD sessions on the egress of the tunnel.
<PE2> system-view
[~PE2] bfd
[*PE2-bfd] mpls-passive
[*PE2-bfd] commit
[~PE2-bfd] quit
# Run the display bfd session mpls-te interface Tunnel command on PE1 and PE2. The status of BFD sessions is Up.
[~PE1] display bfd session mpls-te interface Tunnel 10 te-lsp
(w): State in WTR
(*): State is invalid
--------------------------------------------------------------------------------
Local  Remote  PeerIpAddr      State  Type       InterfaceName
--------------------------------------------------------------------------------
16385  16385   3.3.3.3         Up     D_TE_LSP   Tunnel10
--------------------------------------------------------------------------------
Total UP/DOWN Session Number : 1/0
- Verify the configuration.
Connect port 1 and port 2 on a tester to PE1 and PE2, respectively. Set correct label values. Inject traffic destined for port 2 into port 1. After the cable is removed from GE 0/1/8 on PE1 or P1, the fault is rectified within milliseconds.
If the cable is reinserted into GE 0/1/8 and the cable is removed from GE 0/1/0 on PE1 within 15 seconds, the fault is rectified within milliseconds.
Configuration Files
PE1 configuration file
# sysname PE1 # bfd # mpls lsr-id 4.4.4.4 # mpls mpls te mpls rsvp-te mpls te cspf # explicit-path backup next hop 10.3.1.2 next hop 10.5.1.2 next hop 3.3.3.3 # explicit-path main next hop 10.4.1.2 next hop 10.2.1.2 next hop 3.3.3.3 # isis 1 cost-style wide network-entity 10.0000.0000.0004.00 traffic-eng level-1-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.3.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/8 undo shutdown ip address 10.4.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 isis enable 1 # interface Tunnel10 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.3 mpls te record-route mpls te backup ordinary best-effort mpls te backup hot-standby mpls te tunnel-id 502 mpls te path explicit-path main mpls te path explicit-path backup secondary mpls te bfd enable mpls te bfd min-tx-interval 100 min-rx-interval 100 # return
P1 configuration file
# sysname P1 # mpls lsr-id 1.1.1.1 # mpls mpls te mpls rsvp-te # isis 1 cost-style wide network-entity 10.0000.0000.0001.00 traffic-eng level-1-2 # interface GigabitEthernet0/1/8 undo shutdown ip address 10.4.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/16 undo shutdown ip address 10.2.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 isis enable 1 # return
P2 configuration file
# sysname P2 # mpls lsr-id 2.2.2.2 # mpls mpls te mpls rsvp-te # isis 1 cost-style wide network-entity 10.0000.0000.0002.00 traffic-eng level-1-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/8 undo shutdown ip address 10.5.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/16 undo shutdown ip address 10.3.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 isis enable 1 # return
PE2 configuration file
# sysname PE2 # bfd mpls-passive # mpls lsr-id 3.3.3.3 # mpls mpls te mpls rsvp-te # isis 1 cost-style wide network-entity 10.0000.0000.0003.00 traffic-eng level-1-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.2.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface GigabitEthernet0/1/8 undo shutdown ip address 10.5.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 isis enable 1 # return
Example for Configuring Static BFD for TE
After static BFD for TE is configured, the VPN can rapidly detect tunnel faults and perform a traffic switchover.
Networking Requirements
Figure 1-2256 illustrates an MPLS network. Layer 2 devices (switches) are deployed between PE1 and PE2. PE1 is configured with VPN FRR and the MPLS TE tunnel. The primary path of VPN FRR is PE1 → Switch → PE2; the backup path of VPN FRR is PE1 → PE3. In a normal situation, VPN traffic is transmitted over the primary path. If the primary path fails, VPN traffic is switched to the backup path. BFD for TE is required to monitor the TE tunnel over the primary path and enable VPN to rapidly detect tunnel faults. Traffic rapidly switches between the primary and backup paths, and fault recovery is sped up.
Interfaces 1 and 2 in this example represent GE 0/1/0 and GE 0/1/8, respectively.
For simplicity, the IP addresses of the interfaces connecting the PEs and the CEs are not shown in the diagram.
Configuration Roadmap
The configuration roadmap is as follows:
Configure an MPLS network and establish bidirectional TE tunnels between PE1 and PE2, and between PE1 and PE3.
Configure VPN FRR on PE1.
Enable global BFD on PE1, PE2, and PE3.
Establish a BFD session on PE1 to monitor the TE tunnel over the primary path. A minimal sketch of this binding follows the roadmap.
Establish a BFD session on PE2 and PE3, and specify the TE tunnel as the BFD reverse tunnel.
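Unlike the BFD for CR-LSP examples, the session here is bound to the TE tunnel itself (without the te-lsp keyword), and process-pst is configured so that a detected tunnel fault can trigger the VPN traffic switchover. A minimal sketch of the PE1-side session, using the values from this example:
bfd pe1tope2 bind mpls-te interface Tunnel2
 discriminator local 12
 discriminator remote 21
 min-tx-interval 100
 min-rx-interval 100
 process-pst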
Data Preparation
To complete the configuration, you need the following data:
An IGP and its parameters
BGP AS number and interface names used by BGP sessions
MPLS LSR ID
Tunnel interface number and explicit paths
VPN instance name, RD, and route target (RT)
Name of the tunnel policy
Name of a BFD session
Local and remote discriminators of BFD sessions
Procedure
- Assign an IP address and a mask to each interface.
Assign an IP address to each interface according to Figure 1-2256, create loopback interfaces on routers, and configure the IP addresses of the loopback interfaces as MPLS LSR IDs. For configuration details, see Configuration Files in this section.
- Configure an IGP.
Configure OSPF or IS-IS on each router to ensure interworking between PE1 and PE2, and between PE1 and PE3. OSPF is used in the example. For configuration details, see Configuration Files in this section.
- Configure basic MPLS functions.
On each router, configure an LSR ID and enable MPLS in the system and interface views. For configuration details, see Configuration Files in this section.
- Configure basic MPLS TE functions.
Enable MPLS TE and MPLS RSVP-TE in the MPLS and interface views on each LSR. For configuration details, see Configuration Files in this section.
- Enable OSPF TE and configure the CSPF.
Enable OSPF TE on each router and configure CSPF on PE1. For configuration details, see Configuration Files in this section.
- Configure tunnel interfaces.
Specify explicit paths between PE1 and PE2 and between PE1 and PE3. For PE1, two explicit paths must be specified.
# Configure the explicit paths between PE1 and PE2 and between PE1 and PE3.
[~PE1] explicit-path tope2
[*PE1-explicit-path-tope2] next hop 10.2.1.2
[*PE1-explicit-path-tope2] next hop 3.3.3.3
[*PE1-explicit-path-tope2] quit
[*PE1] explicit-path tope3
[*PE1-explicit-path-tope3] next hop 10.1.1.2
[*PE1-explicit-path-tope3] next hop 2.2.2.2
[*PE1-explicit-path-tope3] commit
[*PE1-explicit-path-tope3] quit
# Configure an explicit path between PE2 and PE1.
[~PE2] explicit-path tope1
[*PE2-explicit-path-tope1] next hop 10.2.1.1
[*PE2-explicit-path-tope1] next hop 1.1.1.1
[*PE2-explicit-path-tope1] commit
[*PE2-explicit-path-tope1] quit
# Configure an explicit path between PE3 and PE1.
[~PE3] explicit-path tope1
[*PE3-explicit-path-tope1] next hop 10.1.1.1
[*PE3-explicit-path-tope1] next hop 1.1.1.1
[*PE3-explicit-path-tope1] commit
[*PE3-explicit-path-tope1] quit
Create tunnel interfaces and specify explicit paths on PE1, PE2, and PE3, and bind the tunnels to the specified VPN. For PE1, two tunnel interfaces must be created.
# Configure PE1.
[~PE1] interface tunnel 2
[*PE1-Tunnel2] ip address unnumbered interface loopback 1
[*PE1-Tunnel2] tunnel-protocol mpls te
[*PE1-Tunnel2] destination 3.3.3.3
[*PE1-Tunnel2] mpls te tunnel-id 2
[*PE1-Tunnel2] mpls te path explicit-path tope2
[*PE1-Tunnel2] mpls te reserved-for-binding
[*PE1-Tunnel2] quit
[*PE1] interface tunnel 1
[*PE1-Tunnel1] ip address unnumbered interface loopback 1
[*PE1-Tunnel1] tunnel-protocol mpls te
[*PE1-Tunnel1] destination 2.2.2.2
[*PE1-Tunnel1] mpls te tunnel-id 1
[*PE1-Tunnel1] mpls te path explicit-path tope3
[*PE1-Tunnel1] mpls te reserved-for-binding
[*PE1-Tunnel1] commit
[~PE1-Tunnel1] quit
# Configure PE2.
[~PE2] interface tunnel 2
[*PE2-Tunnel2] ip address unnumbered interface loopback 1
[*PE2-Tunnel2] tunnel-protocol mpls te
[*PE2-Tunnel2] destination 1.1.1.1
[*PE2-Tunnel2] mpls te tunnel-id 3
[*PE2-Tunnel2] mpls te path explicit-path tope1
[*PE2-Tunnel2] mpls te reserved-for-binding
[*PE2-Tunnel2] commit
[~PE2-Tunnel2] quit
# Configure PE3.
[~PE3] interface tunnel 1
[*PE3-Tunnel1] ip address unnumbered interface loopback 1
[*PE3-Tunnel1] tunnel-protocol mpls te
[*PE3-Tunnel1] destination 1.1.1.1
[*PE3-Tunnel1] mpls te tunnel-id 4
[*PE3-Tunnel1] mpls te path explicit-path tope1
[*PE3-Tunnel1] mpls te reserved-for-binding
[*PE3-Tunnel1] commit
[~PE3-Tunnel1] quit
After completing the preceding configuration, run the display mpls te tunnel-interface tunnel interface-number command on the PEs. The command output shows that the status of tunnel 1 and tunnel 2 on PE1, tunnel 2 on PE2, and tunnel 1 on PE3 is Up.
- Configure VPN FRR.
# Configure a VPN instance on PE1, PE2, and PE3. Set the VPN instance name to vpn1, RDs to 100:1, 100:2, and 100:3 respectively, and all RTs to 100:1. Configure the CEs to access the PEs. For configuration details, see Configuration Files in this section.
# Establish MP-IBGP peer relationships between PE1 and PE2, and between PE1 and PE3. The BGP AS number of PE1, PE2, and PE3 is 100. Loopback1 on each PE is used as the interface for establishing the BGP sessions. For configuration details, see Configuration Files in this section.
# Configure tunnel policies for PE1, PE2, and PE3 and apply the policies to the VPN instances.
# Configure PE1.
[~PE1] tunnel-policy policy1
[*PE1-tunnel-policy-policy1] tunnel binding destination 3.3.3.3 te tunnel 2
[*PE1-tunnel-policy-policy1] tunnel binding destination 2.2.2.2 te tunnel 1
[*PE1-tunnel-policy-policy1] quit
[*PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] tnl-policy policy1
[*PE1-vpn-instance-vpn1] quit
# Configure PE2.
[~PE2] tunnel-policy policy1
[*PE2-tunnel-policy-policy1] tunnel binding destination 1.1.1.1 te tunnel 2
[*PE2-tunnel-policy-policy1] quit
[*PE2] ip vpn-instance vpn1
[*PE2-vpn-instance-vpn1] tnl-policy policy1
[*PE2-vpn-instance-vpn1] commit
[~PE2-vpn-instance-vpn1] quit
# Configure PE3.
[~PE3] tunnel-policy policy1
[*PE3-tunnel-policy-policy1] tunnel binding destination 1.1.1.1 te tunnel 1
[*PE3-tunnel-policy-policy1] quit
[*PE3] ip vpn-instance vpn1
[*PE3-vpn-instance-vpn1] tnl-policy policy1
[*PE3-vpn-instance-vpn1] commit
[~PE3-vpn-instance-vpn1] quit
# Configure VPN FRR on PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] auto-frr
[*PE1-bgp-vpn1] commit
[~PE1-bgp-vpn1] quit
[~PE1-bgp] quit
After the configuration is complete, the CEs can communicate, and traffic flows through PE1, the switch, and PE2. If the cable to any interface connecting PE1 to PE2 is removed, the switch fails, or PE2 fails, VPN traffic is switched to the backup path between PE1 and PE3. The fault recovery time is close to the IGP convergence time.
- Configure BFD for TE.
# Configure a BFD session on PE1 to detect the TE tunnel of the primary path. Set the minimum intervals at which BFD packets are sent and received.
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] bfd pe1tope2 bind mpls-te interface tunnel2
[*PE1-bfd-lsp-session-pe1tope2] discriminator local 12
[*PE1-bfd-lsp-session-pe1tope2] discriminator remote 21
[*PE1-bfd-lsp-session-pe1tope2] min-tx-interval 100
[*PE1-bfd-lsp-session-pe1tope2] min-rx-interval 100
[*PE1-bfd-lsp-session-pe1tope2] process-pst
[*PE1-bfd-lsp-session-pe1tope2] commit
# Establish a BFD session on PE2 and specify the TE tunnel as the reverse BFD tunnel. Set the minimum intervals at which BFD packets are sent and received.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] bfd pe2tope1 bind mpls-te interface tunnel2
[*PE2-bfd-lsp-session-pe2tope1] discriminator local 21
[*PE2-bfd-lsp-session-pe2tope1] discriminator remote 12
[*PE2-bfd-lsp-session-pe2tope1] min-tx-interval 100
[*PE2-bfd-lsp-session-pe2tope1] min-rx-interval 100
[*PE2-bfd-lsp-session-pe2tope1] commit
# After completing the configuration, run the display bfd session { all | discriminator discr-value | mpls-te interface interface-type interface-number } [ verbose ] command on PE1 and PE2. The command output shows that the BFD session is Up.
- Verify the configuration.
Connect port 1 and port 2 on a tester to CE1 and CE2, respectively, and inject traffic destined for port 2 into port 1. The test shows that a fault can be rectified within milliseconds.
Configuration Files
The configuration files of CE1, CE2, and the switch, as well as the configurations for connecting the PEs to the CEs, are not provided here.
PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpn1
route-distinguisher 100:1
tnl-policy policy1
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
#
bfd
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path tope2
next hop 10.2.1.2
next hop 3.3.3.3
#
explicit-path tope3
next hop 10.1.1.2
next hop 2.2.2.2
#
interface gigabitethernet0/1/8
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.1.1.1 255.255.255.252
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te tunnel-id 1
mpls te path explicit-path tope3
mpls te reserved-for-binding
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 2
mpls te path explicit-path tope2
mpls te reserved-for-binding
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpn1
import-route direct
auto-frr
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 10.1.1.0 0.0.0.3
network 10.2.1.0 0.0.0.255
network 1.1.1.1 0.0.0.0
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 3.3.3.3 te Tunnel2
tunnel binding destination 2.2.2.2 te Tunnel1
#
bfd pe1tope2 bind mpls-te interface Tunnel2
discriminator local 12
discriminator remote 21
min-tx-interval 100
min-rx-interval 100
process-pst
#
return
PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpn1
route-distinguisher 100:2
tnl-policy policy1
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
#
bfd
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path tope1
next hop 10.2.1.1
next hop 1.1.1.1
#
interface gigabitethernet0/1/8
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 3
mpls te path explicit-path tope1
mpls te reserved-for-binding
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
#
ipv4-family unicast
peer 1.1.1.1 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.1 enable
#
ipv4-family vpn-instance vpn1
import-route direct
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 10.2.1.0 0.0.0.255
network 3.3.3.3 0.0.0.0
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.1 te Tunnel2
#
bfd pe2tope1 bind mpls-te interface Tunnel2
discriminator local 21
discriminator remote 12
min-tx-interval 100
min-rx-interval 100
#
return
PE3 configuration file
#
sysname PE3
#
ip vpn-instance vpn1
route-distinguisher 100:3
tnl-policy policy1
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path tope1
next hop 10.1.1.1
next hop 1.1.1.1
#
interface GigabitEthernet0/1/0
undo shutdown
ip address 10.1.1.2 255.255.255.252
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 4
mpls te path explicit-path tope1
mpls te reserved-for-binding
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
#
ipv4-family unicast
peer 1.1.1.1 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.1 enable
#
ipv4-family vpn-instance vpn1
import-route direct
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 10.1.1.0 0.0.0.3
network 2.2.2.2 0.0.0.0
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.1 te Tunnel1
#
return
Example for Configuring BFD for RSVP
This section provides an example for configuring BFD for RSVP so that nodes can detect link failures and perform TE FRR switching on a network where a Layer 2 device is deployed between two RSVP nodes.
Networking Requirements
On the MPLS network shown in Figure 1-2257, a Layer 2 device (switch) is deployed on a link between P1 and P2. A primary MPLS TE tunnel between PE1 and PE2 is established over a path PE1 -> P1 -> switch -> P2 -> PE2. A TE FRR bypass tunnel between P1 and PE2 is established over the path P1 -> P3 -> PE2. P1 functions as the point of local repair (PLR), and PE2 functions as the merge point (MP).
If the link between the switch and P2 fails, P1 keeps sending RSVP messages (including Hello messages) destined for P2 to the switch and detects the fault only after it fails to receive replies to the RSVP Hello messages sent to P2.
The timeout period of an RSVP neighbor relationship is three times the interval between Hello message transmissions. Only after the timeout period elapses does P1 declare its neighbor Down, which is several seconds slower than fault detection on a link without a Layer 2 device. This detection latency causes a large number of packets to be dropped. To minimize traffic loss, BFD can be configured to rapidly detect a fault in the link between P2 and the switch. After a BFD session detects the fault, it reports the fault to trigger TE FRR switching.
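The following sketch compares the two detection mechanisms using the values configured in this example. The 3-second Hello interval is an assumption based on a typical default and may differ on your device; the BFD commands are the ones configured on GE 0/1/8 later in this procedure.
RSVP Hello detection: assumed 3-second Hello interval x 3 = about 9 seconds before the neighbor is declared Down
BFD detection: 100 ms minimum receive interval x detection multiplier 3 = 300 ms
interface gigabitethernet 0/1/8
 mpls rsvp-te bfd enable
 mpls rsvp-te bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3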
Configuration Roadmap
The configuration roadmap is as follows:
Configure an IP address for each interface and enable IGP on each LSR so that LSRs can communicate. Enable IGP GR to support RSVP GR.
Configure the MPLS network and basic MPLS TE functions.
Configure explicit paths for the primary and bypass tunnels.
Create the primary tunnel interface and enable TE FRR on PE1. Configure the bypass tunnel on P1.
Configure BFD for RSVP on P1 and P2.
Data Preparation
To complete the configuration, you need the following data:
IGP protocol and parameters
MPLS LSR IDs
Bandwidth attributes of the outbound interfaces of links along the tunnel
Primary tunnel interface number and explicit path
Bypass tunnel interface number and explicit path
Physical interfaces to be protected by the bypass tunnel
Minimum intervals at which BFD packets are sent and received
Local BFD detection multiplier
Procedure
- Assign an IP address to each interface.
Assign an IP address to each interface according to Figure 1-2257, create loopback interfaces on LSRs, and configure the loopback interface addresses as MPLS LSR IDs. For configuration details, see Configuration Files in this section.
- Configure the switch.
Configure the switch so that P1 and P2 can communicate. For configuration details, see Configuration Files in this section.
- Configure an IGP.
Configure OSPF or IS-IS on each LSR so that LSRs can communicate. In this example, IS-IS is used. For configuration details, see Configuration Files in this section.
- Configure basic MPLS functions.
Configure the LSR ID and enable MPLS in the system and interface views on each LSR. For configuration details, see Configuration Files in this section.
- Configure basic MPLS TE functions.
Enable MPLS TE and MPLS RSVP-TE in the MPLS and interface views on each LSR. For configuration details, see Configuration Files in this section.
- Configure IS-IS TE and CSPF.
Enable IS-IS TE on each node and configure CSPF on PE1 and PE2. For configuration details, see Configuration Files in this section.
- Configure the primary tunnel.
# Specify an explicit path for the primary tunnel on PE1.
<PE1> system-view
[~PE1] explicit-path tope2
[*PE1-explicit-path-tope2] next hop 10.1.1.2
[*PE1-explicit-path-tope2] next hop 10.2.1.2
[*PE1-explicit-path-tope2] next hop 10.4.1.2
[*PE1-explicit-path-tope2] next hop 5.5.5.5
[*PE1-explicit-path-tope2] commit
[~PE1-explicit-path-tope2] quit
# Create a tunnel interface on PE1, specify an explicit path, and enable TE FRR.
[~PE1] interface Tunnel 10
[*PE1-Tunnel10] ip address unnumbered interface loopback 1
[*PE1-Tunnel10] tunnel-protocol mpls te
[*PE1-Tunnel10] destination 5.5.5.5
[*PE1-Tunnel10] mpls te tunnel-id 100
[*PE1-Tunnel10] mpls te path explicit-path tope2
[*PE1-Tunnel10] mpls te fast-reroute
[*PE1-Tunnel10] commit
[~PE1-Tunnel10] quit
# Run the display mpls te tunnel-interface tunnel command on PE1. The status of Tunnel 10 on PE1 is Up.
- Configure the bypass tunnel.
# Specify the explicit path for the bypass tunnel on P1.
<P1> system-view
[~P1] explicit-path tope2
[*P1-explicit-path-tope2] next hop 10.3.1.2
[*P1-explicit-path-tope2] next hop 10.5.1.2
[*P1-explicit-path-tope2] next hop 5.5.5.5
[*P1-explicit-path-tope2] commit
[~P1-explicit-path-tope2] quit
# Configure a bypass tunnel interface and specify an explicit path for the bypass tunnel on P1. Specify the physical interface to be protected by the bypass tunnel.
[~P1] interface Tunnel 30
[*P1-Tunnel30] ip address unnumbered interface loopback 1
[*P1-Tunnel30] tunnel-protocol mpls te
[*P1-Tunnel30] destination 5.5.5.5
[*P1-Tunnel30] mpls te tunnel-id 300
[*P1-Tunnel30] mpls te path explicit-path tope2
[*P1-Tunnel30] mpls te bypass-tunnel
[*P1-Tunnel30] mpls te protected-interface gigabitethernet 0/1/8
[*P1-Tunnel30] commit
[~P1-Tunnel30] quit
- Configure BFD for RSVP.
# Enable BFD for RSVP on GE 0/1/8 on P1 and P2. Set the minimum intervals at which BFD packets are sent and received, and the local BFD detection multiplier.
# Configure P1.
[~P1] bfd
[*P1-bfd] quit
[*P1] interface gigabitethernet 0/1/8
[*P1-GigabitEthernet0/1/8] mpls rsvp-te bfd enable
[*P1-GigabitEthernet0/1/8] mpls rsvp-te bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3
[*P1-GigabitEthernet0/1/8] commit
[~P1-GigabitEthernet0/1/8] quit
# Configure P2.
[~P2] bfd
[*P2-bfd] quit
[*P2] interface gigabitethernet 0/1/8
[*P2-GigabitEthernet0/1/8] mpls rsvp-te bfd enable
[*P2-GigabitEthernet0/1/8] mpls rsvp-te bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3
[*P2-GigabitEthernet0/1/8] commit
[~P2-GigabitEthernet0/1/8] quit
- Verify the configuration.
# After the configuration is complete, run the display mpls rsvp-te bfd session { all | interface interface-type interface-number | peer ip-address } [ verbose ] command on P1 and P2. The command output shows that the BFD session status is Up. The following example uses the command output on P1.
<P1> display mpls rsvp-te bfd session all
Total Nbrs/Rsvp triggered sessions : 3/1
-------------------------------------------------------------------------------
Local    Remote   Local        Peer         Interface    Session
Discr    Discr    Addr         Addr         Name         State
-------------------------------------------------------------------------------
16385    16385    10.2.1.1     10.2.1.2     GE0/1/8      UP
Configuration Files
The switch configuration file is not provided here.
PE1 configuration file
# sysname PE1 # mpls lsr-id 1.1.1.1 # mpls mpls te mpls rsvp-te mpls rsvp-te hello mpls te cspf # explicit-path tope2 next hop 10.1.1.2 next hop 10.2.1.2 next hop 10.4.1.2 next hop 5.5.5.5 # isis 1 is-level level-2 cost-style wide network-entity 86.4501.0010.0100.1001.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te mpls rsvp-te hello # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 isis enable 1 # interface Tunnel10 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 5.5.5.5 mpls te record-route label mpls te fast-reroute mpls te tunnel-id 100 mpls te path explicit-path tope2 # return
P1 configuration file
# sysname P1 # bfd # mpls lsr-id 2.2.2.2 # mpls mpls te mpls rsvp-te mpls rsvp-te bfd all-interfaces enable mpls rsvp-te bfd all-interfaces min-tx-interval 100 min-rx-interval 100 detect-multiplier 4 mpls rsvp-te hello mpls te cspf # explicit-path tope2 next hop 10.3.1.2 next hop 10.5.1.2 next hop 5.5.5.5 # isis 1 is-level level-2 cost-style wide network-entity 86.4501.0020.0200.2002.00 traffic-eng level-2 # interface GigabitEthernet0/1/16 undo shutdown # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te mpls rsvp-te hello # interface GigabitEthernet0/1/8 undo shutdown ip address 10.2.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te mpls rsvp-te bfd enable mpls rsvp-te bfd min-tx-interval 100 min-rx-interval 100 mpls rsvp-te hello # interface GigabitEthernet0/1/16 undo shutdown ip address 10.3.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te mpls rsvp-te hello # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 isis enable 1 # interface Tunnel30 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 5.5.5.5 mpls te record-route mpls te tunnel-id 300 mpls te path explicit-path tope2 mpls te bypass-tunnel mpls te protected-interface Gigabitethernet 0/1/8 # return
P2 configuration file
# sysname P2 # bfd # mpls lsr-id 3.3.3.3 # mpls mpls te mpls rsvp-te mpls rsvp-te hello mpls te cspf # isis 1 is-level level-2 cost-style wide network-entity 86.4501.0030.0300.3003.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.4.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te mpls rsvp-te hello # interface GigabitEthernet0/1/8 undo shutdown ip address 10.2.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te mpls rsvp-te bfd enable mpls rsvp-te bfd min-tx-interval 100 min-rx-interval 100 mpls rsvp-te hello # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 isis enable 1 # return
P3 configuration file
# sysname P3 # mpls lsr-id 4.4.4.4 # mpls mpls te mpls rsvp-te mpls rsvp-te hello mpls te cspf # isis 1 is-level level-2 cost-style wide network-entity 86.4501.0040.0400.4004.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.3.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te mpls rsvp-te hello # interface GigabitEthernet0/1/8 undo shutdown ip address 10.5.1.1 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te mpls rsvp-te hello # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 isis enable 1 # return
PE2 configuration file
# sysname PE2 # mpls lsr-id 5.5.5.5 # mpls mpls te mpls rsvp-te mpls rsvp-te hello mpls te cspf # isis 1 is-level level-2 cost-style wide network-entity 86.4501.0050.0500.5005.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.4.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te mpls rsvp-te hello # interface GigabitEthernet0/1/8 undo shutdown ip address 10.5.1.2 255.255.255.252 isis enable 1 mpls mpls te mpls rsvp-te mpls rsvp-te hello # interface LoopBack1 ip address 5.5.5.5 255.255.255.255 isis enable 1 # return
Example for Configuring a P2MP TE Tunnel
This section provides an example for configuring a P2MP TE tunnel on an IP/MPLS backbone network.
Networking Requirements
The multicast bearer technology currently used on IP/MPLS backbone networks relies on IP unicast technology. Like IP unicast, it cannot provide sufficient bandwidth, QoS guarantees, reliability, or real-time performance for multicast services such as IPTV and massively multiplayer online role-playing games (MMORPGs). A P2MP TE tunnel solves this problem: it can be configured on a live IP/MPLS backbone network and supports P2MP TE FRR, meeting multicast service requirements.
A P2MP TE tunnel is established on the network shown in Figure 1-2258. LSRA is the tunnel ingress. LSRC, LSRE, and LSRF are leaf nodes, and the tunnel bandwidth is 1,000 kbit/s.
Configuration Roadmap
The configuration roadmap is as follows:
Assign an IP address to each interface and configure a loopback interface address as an LSR ID on each node.
Configure Intermediate System to Intermediate System (IS-IS) to advertise the route to each network segment to which each interface is connected and the host route to each loopback interface address that is an LSR ID.
Enable MPLS, MPLS TE, P2MP TE, and MPLS RSVP globally on each node and constraint shortest path first (CSPF) on the ingress to enable all nodes to have MPLS forwarding capabilities.
Enable the IS-IS TE capability to ensure that MPLS TE can advertise information about link status.
Enable the MPLS TE capability on the interfaces of each node and configure link attributes for the interfaces so that the interfaces can send RSVP signaling packets.
Configure explicit paths and a leaf list on the ingress LSRA to specify the leaf nodes on the P2MP TE tunnel.
Configure a P2MP TE tunnel interface on LSRA to ensure that the ingress establishes a P2MP TE tunnel based on all the configuration information on the interface. The commands that make the tunnel P2MP are summarized in a sketch after this roadmap.
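The commands that distinguish a P2MP TE tunnel from a point-to-point tunnel in this example are collected in the following sketch; they are configured in full in the procedure below. The mpls te p2mp-te command is enabled in the MPLS view on every node, whereas the leaf list and the p2mp-mode tunnel interface are configured only on the ingress LSRA (only one leaf destination is shown here).
mpls
 mpls te p2mp-te
mpls te leaf-list iptv1
 destination 3.3.3.3
  path explicit-path tolsrc
interface Tunnel10
 mpls te p2mp-mode
 mpls te leaf-list iptv1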
Data Preparation
To complete the configuration, you need the following data:
IP addresses of all interfaces shown in Figure 1-2258
IS-IS (used as an IGP protocol), IS-IS process ID (1), IS-IS system ID of each node (obtained by translating the IP address of loopback1 of each node), and IS-IS level (Level-2)
MPLS LSR ID of each node using the corresponding loopback interface address
Maximum reservable bandwidth (10000 kbit/s) of the outbound interface along the path and BC0 bandwidth (10000 kbit/s)
Name of the explicit path to each leaf node (tolsrc, tolsre, and tolsrf), name of the leaf list (iptv1), and address of each leaf node (the MPLS LSR ID of each leaf node)
Tunnel interface number (Tunnel 10), tunnel ID (100), loopback interface address used as the IP address of the tunnel interface, and tunnel bandwidth (1000 kbit/s)
Procedure
- Assign an IP address to each interface.
Assign an IP address to each interface according to Figure 1-2258 and create a loopback interface on each node. For configuration details, see Configuration Files in this section.
- Configure IS-IS to advertise the route to each network segment to which each interface is connected and to advertise the host route to each LSR ID.
Configure IS-IS on each node to implement network layer connectivity. For configuration details, see Configuration Files in this section.
- Enable MPLS, MPLS TE, P2MP TE, and MPLS RSVP globally on each node and CSPF on the ingress.
# Configure LSRA.
<LSRA> system-view
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls te p2mp-te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] commit
[~LSRA-mpls] quit
# Configure LSRB.
<LSRB> system-view
[~LSRB] mpls lsr-id 2.2.2.2
[*LSRB] mpls
[*LSRB-mpls] mpls te
[*LSRB-mpls] mpls te p2mp-te
[*LSRB-mpls] mpls rsvp-te
[*LSRB-mpls] commit
[~LSRB-mpls] quit
# Configure LSRC.
<LSRC> system-view
[~LSRC] mpls lsr-id 3.3.3.3
[*LSRC] mpls
[*LSRC-mpls] mpls te
[*LSRC-mpls] mpls te p2mp-te
[*LSRC-mpls] mpls rsvp-te
[*LSRC-mpls] commit
[~LSRC-mpls] quit
# Configure LSRD.
<LSRD> system-view
[~LSRD] mpls lsr-id 4.4.4.4
[*LSRD] mpls
[*LSRD-mpls] mpls te
[*LSRD-mpls] mpls te p2mp-te
[*LSRD-mpls] mpls rsvp-te
[*LSRD-mpls] commit
[~LSRD-mpls] quit
# Configure LSRE.
<LSRE> system-view
[~LSRE] mpls lsr-id 5.5.5.5
[*LSRE] mpls
[*LSRE-mpls] mpls te
[*LSRE-mpls] mpls te p2mp-te
[*LSRE-mpls] mpls rsvp-te
[*LSRE-mpls] commit
[~LSRE-mpls] quit
# Configure LSRF.
<LSRF> system-view
[~LSRF] mpls lsr-id 6.6.6.6
[*LSRF] mpls
[*LSRF-mpls] mpls te
[*LSRF-mpls] mpls te p2mp-te
[*LSRF-mpls] mpls rsvp-te
[*LSRF-mpls] commit
[~LSRF-mpls] quit
- Enable IS-IS TE on each node.
# Configure LSRA.
[~LSRA] isis 1
[~LSRA-isis-1] cost-style wide
[*LSRA-isis-1] traffic-eng level-2
[*LSRA-isis-1] commit
[~LSRA-isis-1] quit
# Configure LSRB.
[~LSRB] isis 1
[~LSRB-isis-1] cost-style wide
[*LSRB-isis-1] traffic-eng level-2
[*LSRB-isis-1] commit
[~LSRB-isis-1] quit
# Configure LSRC.
[~LSRC] isis 1
[~LSRC-isis-1] cost-style wide
[*LSRC-isis-1] traffic-eng level-2
[*LSRC-isis-1] commit
[~LSRC-isis-1] quit
# Configure LSRD.
[~LSRD] isis 1
[~LSRD-isis-1] cost-style wide
[*LSRD-isis-1] traffic-eng level-2
[*LSRD-isis-1] commit
[~LSRD-isis-1] quit
# Configure LSRE.
[~LSRE] isis 1
[~LSRE-isis-1] cost-style wide
[*LSRE-isis-1] traffic-eng level-2
[*LSRE-isis-1] commit
[~LSRE-isis-1] quit
# Configure LSRF.
[~LSRF] isis 1
[~LSRF-isis-1] cost-style wide
[*LSRF-isis-1] traffic-eng level-2
[*LSRF-isis-1] commit
[~LSRF-isis-1] quit
- Enable the MPLS TE capability on the interface of each node, and configure link attributes for the interfaces.
# Configure LSRA.
<LSRA> system-view
[~LSRA] interface gigabitethernet 0/1/1
[~LSRA-GigabitEthernet0/1/1] mpls
[*LSRA-GigabitEthernet0/1/1] mpls te
[*LSRA-GigabitEthernet0/1/1] mpls rsvp-te
[*LSRA-GigabitEthernet0/1/1] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRA-GigabitEthernet0/1/1] mpls te bandwidth bc0 10000
[*LSRA-GigabitEthernet0/1/1] commit
[~LSRA-GigabitEthernet0/1/1] quit
# Configure LSRB.
<LSRB> system-view
[~LSRB] interface gigabitethernet 0/1/0
[~LSRB-GigabitEthernet0/1/0] mpls
[*LSRB-GigabitEthernet0/1/0] mpls te
[*LSRB-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRB-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRB-GigabitEthernet0/1/0] mpls te bandwidth bc0 10000
[*LSRB-GigabitEthernet0/1/0] quit
[*LSRB] interface gigabitethernet 0/1/2
[*LSRB-GigabitEthernet0/1/2] mpls
[*LSRB-GigabitEthernet0/1/2] mpls te
[*LSRB-GigabitEthernet0/1/2] mpls rsvp-te
[*LSRB-GigabitEthernet0/1/2] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRB-GigabitEthernet0/1/2] mpls te bandwidth bc0 10000
[*LSRB-GigabitEthernet0/1/2] quit
[*LSRB] interface gigabitethernet 0/1/1
[*LSRB-GigabitEthernet0/1/1] mpls
[*LSRB-GigabitEthernet0/1/1] mpls te
[*LSRB-GigabitEthernet0/1/1] mpls rsvp-te
[*LSRB-GigabitEthernet0/1/1] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRB-GigabitEthernet0/1/1] mpls te bandwidth bc0 10000
[*LSRB-GigabitEthernet0/1/1] commit
[~LSRB-GigabitEthernet0/1/1] quit
# Configure LSRC.
<LSRC> system-view
[~LSRC] interface gigabitethernet 0/1/2
[~LSRC-GigabitEthernet0/1/2] mpls
[*LSRC-GigabitEthernet0/1/2] mpls te
[*LSRC-GigabitEthernet0/1/2] mpls rsvp-te
[*LSRC-GigabitEthernet0/1/2] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRC-GigabitEthernet0/1/2] mpls te bandwidth bc0 10000
[*LSRC-GigabitEthernet0/1/2] commit
[~LSRC-GigabitEthernet0/1/2] quit
# Configure LSRD.
<LSRD> system-view
[~LSRD] interface gigabitethernet 0/1/0
[~LSRD-GigabitEthernet0/1/0] mpls
[*LSRD-GigabitEthernet0/1/0] mpls te
[*LSRD-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRD-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRD-GigabitEthernet0/1/0] mpls te bandwidth bc0 10000
[*LSRD-GigabitEthernet0/1/0] quit
[*LSRD] interface gigabitethernet 0/1/2
[*LSRD-GigabitEthernet0/1/2] mpls
[*LSRD-GigabitEthernet0/1/2] mpls te
[*LSRD-GigabitEthernet0/1/2] mpls rsvp-te
[*LSRD-GigabitEthernet0/1/2] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRD-GigabitEthernet0/1/2] mpls te bandwidth bc0 10000
[*LSRD-GigabitEthernet0/1/2] quit
[*LSRD] interface gigabitethernet 0/1/1
[*LSRD-GigabitEthernet0/1/1] mpls
[*LSRD-GigabitEthernet0/1/1] mpls te
[*LSRD-GigabitEthernet0/1/1] mpls rsvp-te
[*LSRD-GigabitEthernet0/1/1] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRD-GigabitEthernet0/1/1] mpls te bandwidth bc0 10000
[*LSRD-GigabitEthernet0/1/1] commit
[~LSRD-GigabitEthernet0/1/1] quit
# Configure LSRE.
<LSRE> system-view
[~LSRE] interface gigabitethernet 0/1/0
[~LSRE-GigabitEthernet0/1/0] mpls
[*LSRE-GigabitEthernet0/1/0] mpls te
[*LSRE-GigabitEthernet0/1/0] mpls rsvp-te
[*LSRE-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRE-GigabitEthernet0/1/0] mpls te bandwidth bc0 10000
[*LSRE-GigabitEthernet0/1/0] commit
[~LSRE-GigabitEthernet0/1/0] quit
# Configure LSRF.
<LSRF> system-view
[~LSRF] interface gigabitethernet 0/1/1
[~LSRF-GigabitEthernet0/1/1] mpls
[*LSRF-GigabitEthernet0/1/1] mpls te
[*LSRF-GigabitEthernet0/1/1] mpls rsvp-te
[*LSRF-GigabitEthernet0/1/1] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRF-GigabitEthernet0/1/1] mpls te bandwidth bc0 10000
[*LSRF-GigabitEthernet0/1/1] commit
[~LSRF-GigabitEthernet0/1/1] quit
- Configure explicit paths and a leaf list on the ingress LSRA.
# Configure explicit paths on LSRA to LSRC, LSRE, and LSRF.
[~LSRA] explicit-path tolsrc
[*LSRA-explicit-path-tolsrc] next hop 10.1.1.2
[*LSRA-explicit-path-tolsrc] next hop 10.3.1.2
[*LSRA-explicit-path-tolsrc] quit
[*LSRA] explicit-path tolsrf
[*LSRA-explicit-path-tolsrf] next hop 10.1.1.2
[*LSRA-explicit-path-tolsrf] next hop 10.2.1.2
[*LSRA-explicit-path-tolsrf] next hop 10.5.1.2
[*LSRA-explicit-path-tolsrf] commit
[*LSRA-explicit-path-tolsrf] quit
[*LSRA] explicit-path tolsre
[*LSRA-explicit-path-tolsre] next hop 10.1.1.2
[*LSRA-explicit-path-tolsre] next hop 10.2.1.2
[*LSRA-explicit-path-tolsre] next hop 10.4.1.2
[*LSRA-explicit-path-tolsre] commit
[~LSRA-explicit-path-tolsre] quit
# Configure a leaf list iptv1 on LSRA and add leaf node addresses to the leaf list.
[~LSRA] mpls te leaf-list iptv1
[*LSRA-mpls-te-leaf-list-iptv1] destination 3.3.3.3
[*LSRA-mpls-te-leaf-list-iptv1-destination-3.3.3.3] path explicit-path tolsrc
[*LSRA-mpls-te-leaf-list-iptv1-destination-3.3.3.3] quit
[*LSRA-mpls-te-leaf-list-iptv1] destination 5.5.5.5
[*LSRA-mpls-te-leaf-list-iptv1-destination-5.5.5.5] path explicit-path tolsre
[*LSRA-mpls-te-leaf-list-iptv1-destination-5.5.5.5] quit
[*LSRA-mpls-te-leaf-list-iptv1] destination 6.6.6.6
[*LSRA-mpls-te-leaf-list-iptv1-destination-6.6.6.6] path explicit-path tolsrf
[*LSRA-mpls-te-leaf-list-iptv1-destination-6.6.6.6] commit
[~LSRA-mpls-te-leaf-list-iptv1-destination-6.6.6.6] quit
- Configure the P2MP TE tunnel interface on the ingress LSRA.
# Configure LSRA.
[~LSRA] interface Tunnel 10
[*LSRA-Tunnel10] ip address unnumbered interface loopback 1
[*LSRA-Tunnel10] tunnel-protocol mpls te
[*LSRA-Tunnel10] mpls te p2mp-mode
[*LSRA-Tunnel10] mpls te tunnel-id 100
[*LSRA-Tunnel10] mpls te leaf-list iptv1
[*LSRA-Tunnel10] mpls te bandwidth ct0 1000
[*LSRA-Tunnel10] commit
[~LSRA-Tunnel10] quit
The P2MP TE tunnel configuration is complete after this step is performed.
- Verify the configuration.
After completing the configuration, run the display mpls te p2mp tunnel-interface Tunnel10 command on LSRA. The command output shows that the status of Tunnel 10 on LSRA is UP and the status of all sub-LSPs is UP.
[~LSRA] display mpls te p2mp tunnel-interface Tunnel10
------------------------------------------------------------------------------
Tunnel10
------------------------------------------------------------------------------
Tunnel State      : UP
Session ID        : 100
Ingress LSR ID    : 1.1.1.1
P2MP ID           : 0x1010101
Admin State       : UP
Oper State        : UP
Primary LSP State : UP
------------------------------------------------------------------------------
Main LSP State    : UP
LSP ID            : 8
------------------------------------------------------------------------------
S2L Dest Addr     : 3.3.3.3        State : UP
S2L Dest Addr     : 5.5.5.5        State : UP
S2L Dest Addr     : 6.6.6.6        State : UP
Configuration Files
LSRA configuration file
# sysname LSRA # mpls lsr-id 1.1.1.1 # mpls mpls te mpls te p2mp-te mpls rsvp-te mpls te cspf # explicit-path tolsrc next hop 10.1.1.2 next hop 10.3.1.2 # explicit-path tolsrf next hop 10.1.1.2 next hop 10.2.1.2 next hop 10.5.1.2 # explicit-path tolsre next hop 10.1.1.2 next hop 10.2.1.2 next hop 10.4.1.2 # mpls te leaf-list iptv1 # destination 3.3.3.3 path explicit-path tolsrc # destination 5.5.5.5 path explicit-path tolsre # destination 6.6.6.6 path explicit-path tolsrf # isis 1 is-level level-2 cost-style wide network-entity 00.0005.0000.0000.0001.00 traffic-eng level-2 # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.1.1 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 isis enable 1 # interface Tunnel10 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te mpls te p2mp-mode mpls te bandwidth ct0 1000 mpls te leaf-list iptv1 mpls te tunnel-id 100 # return
LSRB configuration file
# sysname LSRB # mpls lsr-id 2.2.2.2 # mpls mpls te mpls te p2mp-te mpls rsvp-te # isis 1 is-level level-2 cost-style wide network-entity 00.0005.0000.0000.0002.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.2.1.1 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface GigabitEthernet0/1/2 undo shutdown ip address 10.3.1.1 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.1.2 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 isis enable 1 # return
LSRC configuration file
# sysname LSRC # mpls lsr-id 3.3.3.3 # mpls mpls te mpls te p2mp-te mpls rsvp-te # isis 1 is-level level-2 cost-style wide network-entity 00.0005.0000.0000.0003.00 traffic-eng level-2 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.3.1.2 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 isis enable 1 # return
LSRD configuration file
# sysname LSRD # mpls lsr-id 4.4.4.4 # mpls mpls te mpls te p2mp-te mpls rsvp-te # isis 1 is-level level-2 cost-style wide network-entity 00.0005.0000.0000.0004.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.2.1.2 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface GigabitEthernet0/1/2 undo shutdown ip address 10.4.1.1 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface GigabitEthernet0/1/1 undo shutdown ip address 10.5.1.1 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 isis enable 1 # return
LSRE configuration file
# sysname LSRE # mpls lsr-id 5.5.5.5 # mpls mpls te mpls te p2mp-te mpls rsvp-te # isis 1 is-level level-2 cost-style wide network-entity 00.0005.0000.0000.0005.00 traffic-eng level-2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.4.1.2 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack1 ip address 5.5.5.5 255.255.255.255 isis enable 1 # return
LSRF configuration file
# sysname LSRF # mpls lsr-id 6.6.6.6 # mpls mpls te mpls te p2mp-te mpls rsvp-te # isis 1 is-level level-2 cost-style wide network-entity 00.0005.0000.0000.0006.00 traffic-eng level-2 # interface GigabitEthernet0/1/1 undo shutdown ip address 10.5.1.2 255.255.255.0 isis enable 1 mpls mpls te mpls te bandwidth max-reservable-bandwidth 10000 mpls te bandwidth bc0 10000 mpls rsvp-te # interface LoopBack1 ip address 6.6.6.6 255.255.255.255 isis enable 1 # return
Example for Configuring the IETF DS-TE Mode (RDM)
This section provides an example for configuring the IETF DS-TE mode.
Networking Requirements
In Figure 1-2259, PEs and the P node run IS-IS to implement connectivity between one another. The P node does not support MPLS LDP. PE1 and PE2 access both VPN-A and VPN-B. An LDP LSP originates from PE3 and is terminated at PE4 through a path PE1 > P > PE2.
VPN-A transmits AF2 and AF1 traffic. VPN-B transmits AF2, AF1, and BE traffic. The LDP LSP transmits BE traffic. QoS requirements of each type of traffic are as follows.
Data Flow | Bandwidth
---|---
VPN-A AF2 traffic | 100 Mbit/s
VPN-A AF1 traffic | 50 Mbit/s
VPN-B AF2 traffic | 100 Mbit/s
VPN-B AF1 traffic | 50 Mbit/s
VPN-B BE traffic | 50 Mbit/s
LDP LSP BE traffic | 50 Mbit/s
DS-TE tunnels are established between PE1 and PE2 to carry the preceding types of traffic and meet their QoS requirements. The bandwidth constraints model is set to RDM, which allows CTi to preempt the bandwidth of a lower-priority CTj (0 <= i < j <= 7) so that the bandwidth of higher-priority CTs is guaranteed.
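Assuming the standard RDM definition, each BCi constrains the total bandwidth of CTi through CT7, so the constraints nest. Applied to the traffic in this example, the link bandwidth values must satisfy the following:
- AF2 (CT2) bandwidth ≤ BC2
- AF2 + AF1 (CT2 + CT1) bandwidth ≤ BC1
- AF2 + AF1 + BE (CT2 + CT1 + CT0) bandwidth ≤ BC0 ≤ maximum reservable link bandwidth
This nesting is what the bandwidth calculation in the configuration roadmap below is based on.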
Configuration Roadmap
The configuration roadmap is as follows:
- Since each tunnel can be configured with a single CT, establish a tunnel for LDP LSPs to carry CT0. Establish two tunnels in VPN-A, with each of them carrying a different CT, namely CT1 and CT2. Establish three tunnels in VPN-B, with each of them carrying a different CT, namely CT0, CT1, and CT2.
- Configure CT0, CT1, and CT2 to carry BE, AF1, and AF2 flows, respectively.
- Since the tunnels pass through the same path, configure the BCi link bandwidth value to be greater than or equal to the sum of CTi through CT7 bandwidth values of all TE tunnels, and configure the maximum link reservable bandwidth to be greater than or equal to the BC0 bandwidth value. Therefore, BC2 bandwidth ≥ Total AF2 bandwidth = 200 Mbit/s; BC1 bandwidth ≥ (BC2 bandwidth + Total AF1 bandwidth) = 300 Mbit/s; reservable link bandwidth ≥ BC0 bandwidth ≥ (BC1 bandwidth + Total BE bandwidth) = 400 Mbit/s.
- Use a CT template to configure TE tunnels because the same type of service in different tunnels has the same bandwidth requirement.
Data Preparation
To complete the configuration, you need the following data:
- LSR IDs of PEs and the P
- Number of each MPLS TE tunnel interface
- TE-class mapping table
- Maximum reservable bandwidth value and each BC bandwidth value of each link
- VPN instance names, route distinguishers, VPN targets, and tunnel policy names of VPN-A and VPN-B
Procedure
- Assign an IP address to each interface on the PEs and the P, and configure IS-IS to implement connectivity among these nodes.
For configuration details, see Configuration Files in this section.
After the configuration, IS-IS neighbor relationships can be established between PE1, the P, and PE2. Run the display ip routing-table command. The command output shows that the PEs have learned the routes to each other's Loopback 1 interfaces.
- Configure an LSR ID and enable MPLS on each node; enable MPLS TE and RSVP-TE on PE1, PE2, and the P; enable MPLS LDP on each PE.
# Configure PE3.
<PE3> system-view
[~PE3] mpls lsr-id 4.4.4.9 [*PE3] mpls [*PE3] commit [~PE3-mpls] quit [*PE3] mpls ldp [*PE3] commit [~PE3-mpls-ldp] quit [~PE3] interface gigabitethernet 0/1/0 [*PE3-GigabitEthernet0/1/0] mpls [*PE3-GigabitEthernet0/1/0] mpls ldp [*PE3-GigabitEthernet0/1/0] commit [~PE3-GigabitEthernet0/1/0] quit
# Configure PE1.
<PE1> system-view
[~PE1] mpls lsr-id 1.1.1.9 [*PE1] mpls [*PE1-mpls] mpls te [*PE1-mpls] mpls rsvp-te [*PE1-mpls] commit [~PE1-mpls] quit [*PE1] mpls ldp [*PE1-mpls-ldp] commit [~PE1-mpls-ldp] quit [~PE1] interface gigabitethernet 0/1/16 [*PE1-GigabitEthernet0/1/16] mpls [*PE1-GigabitEthernet0/1/16] mpls te [*PE1-GigabitEthernet0/1/16] mpls rsvp-te [*PE1-GigabitEthernet0/1/16] commit [~PE1-GigabitEthernet0/1/16] quit [~PE1] interface gigabitethernet 0/1/24 [*PE1-GigabitEthernet0/1/24] mpls [*PE1-GigabitEthernet0/1/24] mpls ldp [*PE1-GigabitEthernet0/1/24] commit [~PE1-GigabitEthernet0/1/24] quit
# Configure the P.
<P> system-view
[~P] mpls lsr-id 2.2.2.9 [*P] mpls [*P-mpls] mpls te [*P-mpls] mpls rsvp-te [*P-mpls] commit [~P-mpls] quit [~P] interface gigabitethernet 0/1/0 [*P-GigabitEthernet0/1/0] mpls [*P-GigabitEthernet0/1/0] mpls te [*P-GigabitEthernet0/1/0] mpls rsvp-te [*P-GigabitEthernet0/1/0] commit [~P-GigabitEthernet0/1/0] quit [~P] interface gigabitethernet 0/1/8 [*P-GigabitEthernet0/1/8] mpls [*P-GigabitEthernet0/1/8] mpls te [*P-GigabitEthernet0/1/8] mpls rsvp-te [*P-GigabitEthernet0/1/8] commit [~P-GigabitEthernet0/1/8] quit
# Configure PE2.
<PE2> system-view
[~PE2] mpls lsr-id 3.3.3.9 [*PE2] mpls [*PE2-mpls] mpls te [*PE2-mpls] mpls rsvp-te [*PE2-mpls] commit [~PE2-mpls] quit [*PE2] mpls ldp [*PE2-mpls] commit [~PE2-mpls] quit [~PE2] interface gigabitethernet 0/1/16 [*PE2-GigabitEthernet0/1/16] mpls [*PE2-GigabitEthernet0/1/16] mpls te [*PE2-GigabitEthernet0/1/16] mpls rsvp-te [*PE2-GigabitEthernet0/1/16] commit [~PE2-GigabitEthernet0/1/16] quit [~PE2] interface gigabitethernet 0/1/24 [*PE2-GigabitEthernet0/1/24] mpls [*PE2-GigabitEthernet0/1/24] mpls ldp [*PE2-GigabitEthernet0/1/24] commit [~PE2-GigabitEthernet0/1/24] quit
# Configure PE4.
<PE4> system-view
[~PE4] mpls lsr-id 5.5.5.9 [*PE4] mpls [*PE4-mpls] commit [~PE4-mpls] quit [*PE4] mpls ldp [*PE4-mpls] commit [~PE4-mpls-ldp] quit [~PE4] interface gigabitethernet 0/1/0 [*PE4-GigabitEthernet0/1/0] mpls [*PE4-GigabitEthernet0/1/0] mpls ldp [*PE4-GigabitEthernet0/1/0] commit [~PE4-GigabitEthernet0/1/0] quit
After completing the configuration, run the display mpls rsvp-te interface command on PE1, PE2, or the P to view RSVP interface information. Run the display mpls ldp lsp command on PE1, PE2, PE3, or PE4. The command output shows that LDP LSPs have been established between PE3 and PE1 and between PE2 and PE4.
- Configure IS-IS TE and enable CSPF on PE1, PE2, and the P.
# Enable IS-IS TE on all nodes and enable CSPF on the ingress of the TE tunnel.
# Configure PE1.
[~PE1] isis 1 [~PE1-isis-1] cost-style wide [*PE1-isis-1] traffic-eng level-1-2 [*PE1-isis-1] commit [~PE1-isis-1] quit [~PE1] mpls [~PE1-mpls] mpls te cspf [*PE1-mpls] commit
# Configure the P.
[~P] isis 1 [~P-isis-1] cost-style wide [*P-isis-1] traffic-eng level-1-2 [*P-isis-1] commit [~P-isis-1] quit
# Configure PE2.
[~PE2] isis 1 [~PE2-isis-1] cost-style wide [*PE2-isis-1] traffic-eng level-1-2 [*PE2-isis-1] commit [~PE2-isis-1] quit [~PE2] mpls [~PE2-mpls] mpls te cspf [*PE2-mpls] commit [~PE2-mpls] quit
After completing the configuration, run the display isis lsdb command on a PE or the P. The command output shows the IS-IS link state information.
- Configure a DS-TE mode and a BCM on PE1, PE2, and the P.
# Configure PE1.
[~PE1] mpls [~PE1-mpls] mpls te ds-te mode ietf [*PE1-mpls] mpls te ds-te bcm rdm [*PE1-mpls] commit [~PE1-mpls] quit
# Configure the P.
[~P] mpls [~P-mpls] mpls te ds-te mode ietf [*P-mpls] mpls te ds-te bcm rdm [*P-mpls] commit [~P-mpls] quit
# Configure PE2.
[~PE2] mpls [~PE2-mpls] mpls te ds-te mode ietf [*PE2-mpls] mpls te ds-te bcm rdm [*PE2-mpls] commit [~PE2-mpls] quit
After completing the configuration, run the display mpls te ds-te summary command on a PE or the P to view DS-TE configurations. The following example uses the command output on PE1.
[~PE1] display mpls te ds-te summary DS-TE IETF Supported :YES DS-TE MODE :IETF Bandwidth Constraint Model :RDM TEClass Mapping (configured): TE-Class ID Class Type Priority TE-Class 0 0 0 TE-Class 1 1 0 TE-Class 2 2 0 TE-Class 3 3 0 TE-Class 4 0 7 TE-Class 5 1 7 TE-Class 6 2 7 TE-Class 7 3 7
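As a brief recap of IETF DS-TE behavior (not specific to this example): each TE-class in the output is a (CT, priority) pair, and a CR-LSP can be established only if its CT and setup/holding priorities match an entry in the TE-class mapping table. The table shown above is the default mapping; a later step in this example configures a mapping tailored to the BE, AF1, and AF2 flows.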
- Configure link bandwidth values on PE1, PE2, and the P.
# Configure PE1.
[~PE1] interface gigabitethernet 0/1/16 [~PE1-GigabitEthernet0/1/16] mpls te bandwidth max-reservable-bandwidth 400000 [*PE1-GigabitEthernet0/1/16] mpls te bandwidth bc0 400000 bc1 300000 bc2 200000 [*PE1-GigabitEthernet0/1/16] commit [~PE1-GigabitEthernet0/1/16] quit
# Configure the P.
[~P] interface gigabitethernet 0/1/0 [~P-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 400000 [*P-GigabitEthernet0/1/0] mpls te bandwidth bc0 400000 bc1 300000 bc2 200000 [*P-GigabitEthernet0/1/0] commit [~P-GigabitEthernet0/1/0] quit [~P] interface gigabitethernet 0/1/8 [~P-GigabitEthernet0/1/8] mpls te bandwidth max-reservable-bandwidth 400000 [*P-GigabitEthernet0/1/8] mpls te bandwidth bc0 400000 bc1 300000 bc2 200000 [~P-GigabitEthernet0/1/8] quit
# Configure PE2.
[~PE2] interface gigabitethernet 0/1/16 [~PE2-GigabitEthernet0/1/16] mpls te bandwidth max-reservable-bandwidth 400000 [*PE2-GigabitEthernet0/1/16] mpls te bandwidth bc0 400000 bc1 300000 bc2 200000 [~PE2-GigabitEthernet0/1/16] quit
After completing the configuration, run the display mpls te link-administration bandwidth-allocation interface gigabitethernet command on a PE to view BC bandwidth allocation information. The following example uses the command output on PE1.
[~PE1] display mpls te link-administration bandwidth-allocation interface gigabitethernet 0/1/16 Link ID: GigabitEthernet0/1/16 Bandwidth Constraint Model : Russian Dolls Model (RDM) Physical Link Bandwidth(Kbits/sec) : - Maximum Link Reservable Bandwidth(Kbit/sec): 400000 Reservable Bandwidth BC0(Kbit/sec) : 400000 Reservable Bandwidth BC1(Kbit/sec) : 300000 Reservable Bandwidth BC2(Kbit/sec) : 200000 Reservable Bandwidth BC3(Kbit/sec) : 0 Reservable Bandwidth BC4(Kbit/sec) : 0 Reservable Bandwidth BC5(Kbit/sec) : 0 Reservable Bandwidth BC6(Kbit/sec) : 0 Reservable Bandwidth BC7(Kbit/sec) : 0 Downstream Bandwidth (Kbit/sec) : 0 IPUpdown Link Status : UP PhysicalUpdown Link Status : UP ---------------------------------------------------------------------- TE-CLASS CT PRIORITY BW RESERVED BW AVAILABLE DOWNSTREAM (Kbit/sec) (Kbit/sec) RSVPLSPNODE COUNT ---------------------------------------------------------------------- 0 0 0 0 400000 0 1 1 0 0 300000 0 2 2 0 0 200000 0 3 0 7 0 400000 0 4 1 7 0 300000 0 5 2 7 0 200000 0 6 - - - - - 7 - - - - - 8 - - - - - 9 - - - - - 10 - - - - - 11 - - - - - 12 - - - - - 13 - - - - - 14 - - - - - 15 - - - - - ----------------------------------------------------------------------
- Configure a TE-class mapping table on a PE.
# Configure PE1.
[~PE1] te-class-mapping [~PE1-te-class-mapping] te-class0 class-type ct0 priority 0 description For-BE [*PE1-te-class-mapping] te-class1 class-type ct1 priority 0 description For-AF1 [*PE1-te-class-mapping] te-class2 class-type ct2 priority 0 description For-AF2 [*PE1-te-class-mapping] commit [~PE1-te-class-mapping] quit
# Configure PE2.
[~PE2] te-class-mapping [~PE2-te-class-mapping] te-class0 class-type ct0 priority 0 description For-BE [*PE2-te-class-mapping] te-class1 class-type ct1 priority 0 description For-AF1 [*PE2-te-class-mapping] te-class2 class-type ct2 priority 0 description For-AF2 [*PE2-te-class-mapping] commit [~PE2-te-class-mapping] quit
After completing the configuration, run the display mpls te ds-te te-class-mapping command on a PE to view TE-class mapping table information. The following example uses the command output on PE1.
[~PE1] display mpls te ds-te te-class-mapping TE-Class ID Class Type Priority Description TE-Class0 0 0 For-BE TE-Class1 1 0 For-AF1 TE-Class2 2 0 For-AF2 TE-Class3 - - - TE-Class4 - - - TE-Class5 - - - TE-Class6 - - - TE-Class7 - - -
- Configure an explicit path on PE1 and PE2.
# Configure PE1.
[~PE1] explicit-path path1 [*PE1-explicit-path-path1] next hop 10.10.1.2 [*PE1-explicit-path-path1] next hop 10.11.1.2 [*PE1-explicit-path-path1] next hop 3.3.3.9 [*PE1-explicit-path-path1] commit [~PE1-explicit-path-path1] quit
# Configure PE2.
[~PE2] explicit-path path1 [*PE2-explicit-path-path1] next hop 10.11.1.1 [*PE2-explicit-path-path1] next hop 10.10.1.1 [*PE2-explicit-path-path1] next hop 1.1.1.9 [*PE2-explicit-path-path1] commit [~PE2-explicit-path-path1] quit
After completing the configuration, run the display explicit-path command on a PE to view explicit path information. The following example uses the command output on PE1.
[~PE1] display explicit-path path1 Path Name : path1 Path Status : Enabled 1 10.10.1.2 Strict Include 2 10.11.1.2 Strict Include 3 3.3.3.9 Strict Include
- Configure a tunnel interface on PE1 and PE2.
# Configure PE1.
[~PE1] interface tunnel10 [*PE1-Tunnel10] description For VPN-A & Non-VPN [*PE1-Tunnel10] ip address unnumbered interface loopback 1 [*PE1-Tunnel10] tunnel-protocol mpls te [*PE1-Tunnel10] destination 3.3.3.9 [*PE1-Tunnel10] mpls te tunnel-id 10 [*PE1-Tunnel10] mpls te signal-protocol rsvp-te [*PE1-Tunnel10] mpls te path explicit-path path1 [*PE1-Tunnel10] mpls te priority 0 0 [*PE1-Tunnel10] mpls te bandwidth ct0 50000 [*PE1-Tunnel10] mpls te reserved-for-binding [*PE1-Tunnel10] commit [~PE1-Tunnel10] quit [~PE1] interface tunnel11 [*PE1-Tunnel11] description For VPN-A & Non-VPN [*PE1-Tunnel11] ip address unnumbered interface loopback 1 [*PE1-Tunnel11] tunnel-protocol mpls te [*PE1-Tunnel11] destination 3.3.3.9 [*PE1-Tunnel11] mpls te tunnel-id 11 [*PE1-Tunnel11] mpls te signal-protocol rsvp-te [*PE1-Tunnel11] mpls te path explicit-path path1 [*PE1-Tunnel11] mpls te priority 0 0 [*PE1-Tunnel11] mpls te bandwidth ct1 50000 [*PE1-Tunnel11] mpls te reserved-for-binding [*PE1-Tunnel11] commit [~PE1-Tunnel11] quit [~PE1] interface tunnel12 [*PE1-Tunnel12] description For VPN-A & Non-VPN [*PE1-Tunnel12] ip address unnumbered interface loopback 1 [*PE1-Tunnel12] tunnel-protocol mpls te [*PE1-Tunnel12] destination 3.3.3.9 [*PE1-Tunnel12] mpls te tunnel-id 12 [*PE1-Tunnel12] mpls te signal-protocol rsvp-te [*PE1-Tunnel12] mpls te path explicit-path path1 [*PE1-Tunnel12] mpls te priority 0 0 [*PE1-Tunnel12] mpls te bandwidth ct2 100000 [*PE1-Tunnel12] mpls te reserved-for-binding [*PE1-Tunnel12] commit [~PE1-Tunnel12] quit [~PE1] interface tunnel20 [*PE1-Tunnel20] description For VPN-B [*PE1-Tunnel20] ip address unnumbered interface loopback 1 [*PE1-Tunnel20] tunnel-protocol mpls te [*PE1-Tunnel20] destination 3.3.3.9 [*PE1-Tunnel20] mpls te tunnel-id 20 [*PE1-Tunnel20] mpls te signal-protocol rsvp-te [*PE1-Tunnel20] mpls te path explicit-path path1 [*PE1-Tunnel20] mpls te priority 0 0 [*PE1-Tunnel20] mpls te bandwidth ct0 50000 [*PE1-Tunnel20] mpls te reserved-for-binding [*PE1-Tunnel20] commit [~PE1-Tunnel20] quit [~PE1] interface tunnel21 [*PE1-Tunnel21] description For VPN-B [*PE1-Tunnel21] ip address unnumbered interface loopback 1 [*PE1-Tunnel21] tunnel-protocol mpls te [*PE1-Tunnel21] destination 3.3.3.9 [*PE1-Tunnel21] mpls te tunnel-id 21 [*PE1-Tunnel21] mpls te signal-protocol rsvp-te [*PE1-Tunnel21] mpls te path explicit-path path1 [*PE1-Tunnel21] mpls te priority 0 0 [*PE1-Tunnel21] mpls te bandwidth ct1 50000 [*PE1-Tunnel21] mpls te reserved-for-binding [*PE1-Tunnel21] commit [~PE1-Tunnel21] quit [~PE1] interface tunnel22 [*PE1-Tunnel22] description For VPN-B [*PE1-Tunnel22] ip address unnumbered interface loopback 1 [*PE1-Tunnel22] tunnel-protocol mpls te [*PE1-Tunnel22] destination 3.3.3.9 [*PE1-Tunnel22] mpls te tunnel-id 22 [*PE1-Tunnel22] mpls te signal-protocol rsvp-te [*PE1-Tunnel22] mpls te path explicit-path path1 [*PE1-Tunnel22] mpls te priority 0 0 [*PE1-Tunnel22] mpls te bandwidth ct2 100000 [*PE1-Tunnel22] mpls te reserved-for-binding [*PE1-Tunnel22] commit [~PE1-Tunnel22] quit
# Configure PE2.
[~PE2] interface tunnel10 [*PE2-Tunnel10] description For VPN-A & Non-VPN [*PE2-Tunnel10] ip address unnumbered interface loopback 1 [*PE2-Tunnel10] tunnel-protocol mpls te [*PE2-Tunnel10] destination 1.1.1.9 [*PE2-Tunnel10] mpls te tunnel-id 10 [*PE2-Tunnel10] mpls te signal-protocol rsvp-te [*PE2-Tunnel10] mpls te path explicit-path path1 [*PE2-Tunnel10] mpls te priority 0 0 [*PE2-Tunnel10] mpls te bandwidth ct0 50000 [*PE2-Tunnel10] mpls te reserved-for-binding [*PE2-Tunnel10] commit [~PE2-Tunnel10] quit [~PE2] interface tunnel11 [*PE2-Tunnel11] description For VPN-A & Non-VPN [*PE2-Tunnel11] ip address unnumbered interface loopback 1 [*PE2-Tunnel11] tunnel-protocol mpls te [*PE2-Tunnel11] destination 1.1.1.9 [*PE2-Tunnel11] mpls te tunnel-id 11 [*PE2-Tunnel11] mpls te signal-protocol rsvp-te [*PE2-Tunnel11] mpls te path explicit-path path1 [*PE2-Tunnel11] mpls te priority 0 0 [*PE2-Tunnel11] mpls te bandwidth ct1 50000 [*PE2-Tunnel11] mpls te reserved-for-binding [*PE2-Tunnel11] commit [~PE2-Tunnel11] quit [~PE2] interface tunnel12 [*PE2-Tunnel12] description For VPN-A & Non-VPN [*PE2-Tunnel12] ip address unnumbered interface loopback 1 [*PE2-Tunnel12] tunnel-protocol mpls te [*PE2-Tunnel12] destination 1.1.1.9 [*PE2-Tunnel12] mpls te tunnel-id 12 [*PE2-Tunnel12] mpls te signal-protocol rsvp-te [*PE2-Tunnel12] mpls te path explicit-path path1 [*PE2-Tunnel12] mpls te priority 0 0 [*PE2-Tunnel12] mpls te bandwidth ct2 100000 [*PE2-Tunnel12] mpls te reserved-for-binding [*PE2-Tunnel12] commit [~PE2-Tunnel12] quit [~PE2] interface tunnel20 [*PE2-Tunnel20] description For VPN-B [*PE2-Tunnel20] ip address unnumbered interface loopback 1 [*PE2-Tunnel20] tunnel-protocol mpls te [*PE2-Tunnel20] destination 1.1.1.9 [*PE2-Tunnel20] mpls te tunnel-id 20 [*PE2-Tunnel20] mpls te signal-protocol rsvp-te [*PE2-Tunnel20] mpls te path explicit-path path1 [*PE2-Tunnel20] mpls te priority 0 0 [*PE2-Tunnel20] mpls te bandwidth ct0 50000 [*PE2-Tunnel20] mpls te reserved-for-binding [*PE2-Tunnel20] commit [~PE2-Tunnel20] quit [~PE2] interface tunnel21 [*PE2-Tunnel21] description For VPN-B [*PE2-Tunnel21] ip address unnumbered interface loopback 1 [*PE2-Tunnel21] tunnel-protocol mpls te [*PE2-Tunnel21] destination 1.1.1.9 [*PE2-Tunnel21] mpls te tunnel-id 21 [*PE2-Tunnel21] mpls te signal-protocol rsvp-te [*PE2-Tunnel21] mpls te path explicit-path path1 [*PE2-Tunnel21] mpls te priority 0 0 [*PE2-Tunnel21] mpls te bandwidth ct1 50000 [*PE2-Tunnel21] mpls te reserved-for-binding [*PE2-Tunnel21] commit [~PE2-Tunnel21] quit [~PE2] interface tunnel22 [*PE2-Tunnel22] description For VPN-B [*PE2-Tunnel22] ip address unnumbered interface loopback 1 [*PE2-Tunnel22] tunnel-protocol mpls te [*PE2-Tunnel22] destination 1.1.1.9 [*PE2-Tunnel22] mpls te tunnel-id 2 [*PE2-Tunnel22] mpls te signal-protocol rsvp-te [*PE2-Tunnel22] mpls te path explicit-path path1 [*PE2-Tunnel22] mpls te priority 0 0 [*PE2-Tunnel22] mpls te bandwidth ct2 100000 [*PE2-Tunnel22] mpls te reserved-for-binding [*PE2-Tunnel22] commit [~PE2-Tunnel22] quit
After completing the configuration, run the display interface tunnel interface-number command on a PE to check whether the tunnel interface state is UP. The following example shows the command output for Tunnel10 on PE1.
[~PE1] display interface tunnel10 Tunnel1 current state : UP(ifindex: 27) Line protocol current state : UP Description: For VPN-A & Non-VPN Route Port,The Maximum Transmit Unit is 1500 Internet Address is unnumbered, using address of LoopBack0(1.1.1.9/32) Encapsulation is TUNNEL, loopback not set Tunnel destination 3.3.3.9 Tunnel up/down statistics 0 Tunnel ct0 bandwidth is 0 Kbit/sec Tunnel protocol/transport MPLS/MPLS, ILM is disabled primary tunnel id is 0x0, secondary tunnel id is 0x0 Current system time: 2017-07-19 06:46:59 0 seconds output rate 0 bits/sec, 0 packets/sec 0 seconds output rate 0 bits/sec, 0 packets/sec 0 packets output, 0 bytes 0 output error 0 output drop Last 300 seconds input utility rate: -- Last 300 seconds output utility rate: --
Run the display mpls te te-class-tunnel command on a PE to view information about a TE tunnel associated with a TE-class. The following example uses the command output on PE1.
[~PE1] display mpls te te-class-tunnel ct0 priority 0 ---------------------------------------------------------- No. CT priority status tunnel name ---------------------------------------------------------- 1 0 0 Valid Tunnel10 2 0 0 Valid Tunnel20
- Establish an MP-IBGP peer relationship between the PEs, and establish EBGP peer relationships between PEs and CEs.
# Configure PE1.
[~PE1] bgp 100 [*PE1-bgp] peer 3.3.3.9 as-number 100 [*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1 [*PE1-bgp] ipv4-family vpnv4 [*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable [*PE1-bgp-af-vpnv4] commit [~PE1-bgp-af-vpnv4] quit [~PE1-bgp] ipv4-family vpn-instance vpna [*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410 [*PE1-bgp-vpna] import-route direct [*PE1-bgp-vpna] commit [~PE1-bgp-vpna] quit [~PE1-bgp] ipv4-family vpn-instance vpnb [*PE1-bgp-vpnb] peer 10.2.1.1 as-number 65420 [*PE1-bgp-vpnb] import-route direct [*PE1-bgp-vpnb] commit [~PE1-bgp-vpnb] quit
The procedure for configuring PE2 is similar to that for PE1. For configuration details, see Configuration Files in this section.
# Configure CE1.
[~CE1] bgp 65410 [*CE1-bgp] peer 10.1.1.2 as-number 100 [*CE1-bgp] import-route direct [*CE1-bgp] commit
After completing the configuration, run the display bgp vpnv4 all peer command on each PE. The command output shows that the BGP peer relationships are in the Established state.
[~PE1] display bgp vpnv4 all peer BGP local router ID : 1.1.1.9 Local AS number : 100 Total number of peers : 3 Peers in established state : 3 Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv 3.3.3.9 4 100 12 18 0 00:09:38 Established 0 Peer of vpn instance: VPN-Instance vpna, Router ID 1.1.1.9: 10.1.1.1 4 65410 25 25 0 00:17:57 Established 1 VPN-Instance vpnb, Router ID 1.1.1.9: 10.2.1.1 4 65420 21 22 0 00:17:10 Established 1
- Configure tunnel policies on PEs.
# Configure PE1.
[~PE1] tunnel-policy policya [*PE1-tunnel-policy-policya] tunnel binding destination 3.3.3.9 te tunnel 10 tunnel 11 tunnel 12 [*PE1-tunnel-policy-policya] commit [~PE1-tunnel-policy-policya] quit [~PE1] tunnel-policy policyb [*PE1-tunnel-policy-policyb] tunnel binding destination 3.3.3.9 te tunnel 20 tunnel 21 tunnel 22 [*PE1-tunnel-policy-policyb] commit [~PE1-tunnel-policy-policyb] quit
# Configure PE2.
[~PE2] tunnel-policy policya [*PE2-tunnel-policy-policya] tunnel binding destination 1.1.1.9 te tunnel 10 tunnel 11 tunnel 12 [*PE2-tunnel-policy-policya] commit [~PE2-tunnel-policy-policya] quit [~PE2] tunnel-policy policyb [*PE2-tunnel-policy-policyb] tunnel binding destination 1.1.1.9 te tunnel 20 tunnel 21 tunnel 22 [*PE2-tunnel-policy-policyb] commit [~PE2-tunnel-policy-policyb] quit
- Configure VPN instances on each PE and configure CEs to access the PEs.
# Configure PE1.
[~PE1] ip vpn-instance vpna [*PE1-vpn-instance-vpna] ipv4-family [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1 [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both [*PE1-vpn-instance-vpna-af-ipv4] tnl-policy policya [*PE1-vpn-instance-vpna-af-ipv4] commit [~PE1-vpn-instance-vpna-af-ipv4] quit [~PE1-vpn-instance-vpna] quit [~PE1] ip vpn-instance vpnb [*PE1-vpn-instance-vpna] ipv4-family [*PE1-vpn-instance-vpnb-af-ipv4] route-distinguisher 100:2 [*PE1-vpn-instance-vpnb-af-ipv4] vpn-target 222:2 both [*PE1-vpn-instance-vpnb-af-ipv4] tnl-policy policyb [*PE1-vpn-instance-vpnb-af-ipv4] commit [~PE1-vpn-instance-vpnb-af-ipv4] quit [~PE1-vpn-instance-vpnb] quit [~PE1] interface gigabitethernet 0/1/0 [*PE1-GigabitEthernet0/1/0] ip binding vpn-instance vpna [*PE1-GigabitEthernet0/1/0] ip address 10.1.1.2 24 [*PE1-GigabitEthernet0/1/0] commit [~PE1-GigabitEthernet0/1/0] quit [*PE1] interface gigabitethernet 0/1/8 [*PE1-GigabitEthernet0/1/8] ip binding vpn-instance vpnb [*PE1-GigabitEthernet0/1/8] ip address 10.2.1.2 24 [*PE1-GigabitEthernet0/1/8] commit [~PE1-GigabitEthernet0/1/8] quit
# Configure PE2.
[~PE2] ip vpn-instance vpna [*PE2-vpn-instance-vpna] ipv4-family [*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1 [*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both [*PE2-vpn-instance-vpna-af-ipv4] tnl-policy policya [*PE2-vpn-instance-vpna-af-ipv4] commit [~PE2-vpn-instance-vpna-af-ipv4] quit [~PE2-vpn-instance-vpna] quit [~PE2] ip vpn-instance vpnb [*PE2-vpn-instance-vpnb] ipv4-family [*PE2-vpn-instance-vpnb-af-ipv4] route-distinguisher 200:2 [*PE2-vpn-instance-vpnb-af-ipv4] vpn-target 222:2 both [*PE2-vpn-instance-vpnb-af-ipv4] tnl-policy policyb [*PE2-vpn-instance-vpnb-af-ipv4] commit [~PE2-vpn-instance-vpnb-af-ipv4] quit [~PE2-vpn-instance-vpnb] quit [~PE2] interface gigabitethernet 0/1/0 [*PE2-GigabitEthernet0/1/0] ip binding vpn-instance vpna [*PE2-GigabitEthernet0/1/0] ip address 10.3.1.2 24 [*PE2-GigabitEthernet0/1/0] commit [~PE2-GigabitEthernet0/1/0] quit [~PE2] interface gigabitethernet 0/1/8 [*PE2-GigabitEthernet0/1/8] ip binding vpn-instance vpnb [*PE2-GigabitEthernet0/1/8] ip address 10.4.1.2 24 [*PE2-GigabitEthernet0/1/8] commit [~PE2-GigabitEthernet0/1/8] quit
# Assign IP addresses to the interfaces on CEs. For configuration details, see Configuration Files in this section.
After completing the configuration, run the display ip vpn-instance verbose command on a PE to view the configurations of VPN instances. Each PE can successfully ping its connected CEs.
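For example, a ping bound to the corresponding VPN instance can serve as a quick connectivity check. The following command is a sketch based on the addresses used in this example (CE1 at 10.1.1.1 in VPN instance vpna on PE1); it is not part of the documented procedure:
<PE1> ping -vpn-instance vpna 10.1.1.1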
Configuration Files
PE1 configuration file
# sysname PE1 # ip vpn-instance vpna ipv4-family route-distinguisher 100:1 tnl-policy policya apply-label per-instance vpn-target 111:1 export-extcommunity vpn-target 111:1 import-extcommunity # ip vpn-instance vpnb ipv4-family route-distinguisher 100:2 tnl-policy policyb apply-label per-instance vpn-target 222:2 export-extcommunity vpn-target 222:2 import-extcommunity #
mpls lsr-id 1.1.1.9 # mpls mpls te mpls te ds-te mode ietf mpls rsvp-te # mpls ldp # mpls ldp remote-peer pe1tope2 remote-ip 3.3.3.9 # explicit-path path1 next hop 10.10.1.2 next hop 10.11.1.2 next hop 3.3.3.9 # te-class-mapping te-class0 class-type ct0 priority 0 description For-BE te-class1 class-type ct1 priority 0 description For-AF1 te-class2 class-type ct2 priority 0 description For-AF2 # interface Tunnel10 description For VPN-A & Non-VPN ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.9 mpls te tunnel-id 10 mpls te priority 0 mpls te bandwidth ct0 50000 mpls te reserved-for-binding mpls te path explicit-path path1 mpls te igp advertise mpls te igp metric absolute 1 # interface Tunnel11 description For VPN-A & Non-VPN ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.9 mpls te tunnel-id 11 mpls te priority 0 mpls te bandwidth ct1 50000 mpls te reserved-for-binding mpls te path explicit-path path1 mpls te igp advertise mpls te igp metric absolute 1 # interface Tunnel12 description For VPN-A & Non-VPN ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.9 mpls te tunnel-id 12 mpls te priority 0 mpls te bandwidth ct2 100000 mpls te reserved-for-binding mpls te path explicit-path path1 mpls te igp advertise mpls te igp metric absolute 1 # interface Tunnel20 description For VPN-B ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.9 mpls te tunnel-id 20 mpls te priority 0 mpls te bandwidth ct0 50000 mpls te reserved-for-binding mpls te path explicit-path path1 # interface Tunnel21 description For VPN-B ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.9 mpls te tunnel-id 21 mpls te priority 0 mpls te bandwidth ct1 50000 mpls te reserved-for-binding mpls te path explicit-path path1 # interface Tunnel22 description For VPN-B ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 3.3.3.9 mpls te tunnel-id 22 mpls te priority 0 mpls te bandwidth ct2 100000 mpls te reserved-for-binding mpls te path explicit-path path1 # bgp 100 peer 3.3.3.9 as-number 100 peer 3.3.3.9 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 3.3.3.9 enable # ipv4-family vpnv4 policy vpn-target peer 3.3.3.9 enable # ipv4-family vpn-instance vpna peer 10.1.1.1 as-number 65410 import-route direct # ipv4-family vpn-instance vpnb peer 10.2.1.1 as-number 65420 import-route direct # isis 1 is-level level-1 cost-style wide traffic-eng level-1 # tunnel-policy policya tunnel binding destination 3.3.3.9 te Tunnel10 Tunnel11 Tunnel12 # tunnel-policy policyb tunnel binding destination 3.3.3.9 te Tunnel20 Tunnel21 Tunnel22 # return
P configuration file
# sysname P # mpls lsr-id 2.2.2.9 # mpls mpls te mpls te ds-te mode ietf mpls rsvp-te # interface GigabitEthernet0/1/0 undo shutdown ip address 10.10.1.2 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 400000 mpls te bandwidth bc0 400000 bc1 300000 bc2 200000 mpls rsvp-te # interface GigabitEthernet0/1/8 undo shutdown ip address 10.11.1.1 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 400000 mpls te bandwidth bc0 400000 bc1 300000 bc2 200000 mpls rsvp-te # interface LoopBack1 ip address 2.2.2.9 255.255.255.255 # isis 1 is-level level-1 cost-style wide traffic-eng level-1 # return
PE2 configuration file
# sysname PE2 # ip vpn-instance vpna ipv4-family route-distinguisher 200:1 tnl-policy policya apply-label per-instance vpn-target 111:1 export-extcommunity vpn-target 111:1 import-extcommunity # ip vpn-instance vpnb ipv4-family route-distinguisher 200:2 tnl-policy policyb apply-label per-instance vpn-target 222:2 export-extcommunity vpn-target 222:2 import-extcommunity #
mpls lsr-id 3.3.3.9 # mpls mpls te mpls te ds-te mode ietf mpls te rsvp-te # mpls ldp # mpls ldp remote-peer pe2tope1 remote-ip 1.1.1.9 # explicit-path path1 next hop 10.10.1.1 next hop 10.11.1.1 next hop 1.1.1.9 # te-class-mapping te-class0 class-type ct0 priority 0 description For-BE te-class1 class-type ct1 priority 0 description For-AF1 te-class2 class-type ct2 priority 0 description For-AF2 # interface LoopBack1 ip address 3.3.3.9 255.255.255.255 # interface Tunnel10 description For VPN-A & Non-VPN ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.9 mpls te tunnel-id 10 mpls te priority 0 mpls te bandwidth ct0 50000 mpls te reserved-for-binding mpls te path explicit-path path1 mpls te igp advertise mpls te igp metric absolute 1 # interface Tunnel11 description For VPN-A & Non-VPN ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.9 mpls te tunnel-id 11 mpls te priority 0 mpls te bandwidth ct1 50000 mpls te reserved-for-binding mpls te path explicit-path path1 mpls te igp advertise mpls te igp metric absolute 1 # interface Tunnel12 description For VPN-A & Non-VPN ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.9 mpls te tunnel-id 12 mpls te priority 0 mpls te bandwidth ct2 100000 mpls te reserved-for-binding mpls te path explicit-path path1 mpls te igp advertise mpls te igp metric absolute 1 # interface Tunnel20 description For VPN-B ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.9 mpls te tunnel-id 20 mpls te priority 0 mpls te bandwidth ct0 50000 mpls te reserved-for-binding mpls te path explicit-path path1 # interface Tunnel21 description For VPN-B ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.9 mpls te tunnel-id 21 mpls te priority 0 mpls te bandwidth ct1 50000 mpls te reserved-for-binding mpls te path explicit-path path1 # interface Tunnel22 description For VPN-B ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.9 mpls te tunnel-id 22 mpls te priority 0 mpls te bandwidth ct2 100000 mpls te reserved-for-binding mpls te path explicit-path path1 # bgp 100 peer 1.1.1.9 as-number 100 peer 1.1.1.9 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.9 enable # ipv4-family vpnv4 policy vpn-target peer 1.1.1.9 enable # ipv4-family vpn-instance vpna peer 10.3.1.1 as-number 65430 import-route direct # ipv4-family vpn-instance vpnb peer 10.4.1.1 as-number 65440 import-route direct # isis 1 is-level level-1 cost-style wide traffic-eng level-1 # tunnel-policy policya tunnel binding destination 1.1.1.9 te Tunnel10 Tunnel11 Tunnel12 # tunnel-policy policyb tunnel binding destination 1.1.1.9 te Tunnel20 Tunnel21 Tunnel22 # return
PE3 configuration file
# sysname PE3 # mpls lsr-id 4.4.4.9 # mpls # mpls ldp # interface GigabitEthernet0/1/0 undo shutdown ip address 10.5.1.2 255.255.255.0 mpls mpls ldp # interface LoopBack1 ip address 4.4.4.9 255.255.255.255 # isis 1 is-level level-1 cost-style wide traffic-eng level-1 # return
PE4 configuration file
# sysname PE4 # mpls lsr-id 5.5.5.9 # mpls # mpls ldp # interface GigabitEthernet0/1/0 undo shutdown ip address 10.6.1.2 255.255.255.0 mpls mpls ldp # interface LoopBack1 ip address 5.5.5.9 255.255.255.255 # isis 1 is-level level-1 cost-style wide traffic-eng level-1 # return
CE1 configuration file
# sysname CE1 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.1 255.255.255.0 # bgp 65410 peer 10.1.1.2 as-number 100 # ipv4-family unicast undo synchronization import-route direct peer 10.1.1.2 enable # return
CE2 configuration file
# sysname CE2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.2.1.1 255.255.255.0 # bgp 65420 peer 10.2.1.2 as-number 100 # ipv4-family unicast undo synchronization import-route direct peer 10.2.1.2 enable # return
CE3 configuration file
# sysname CE3 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.3.1.1 255.255.255.0 # bgp 65430 peer 10.3.1.2 as-number 100 # ipv4-family unicast undo synchronization import-route direct peer 10.3.1.2 enable # return
CE4 configuration file
# sysname CE4 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.4.1.1 255.255.255.0 # bgp 65440 peer 10.4.1.2 as-number 100 # ipv4-family unicast undo synchronization import-route direct peer 10.4.1.2 enable # return
Example for Configuring CBTS in an L3VPN over TE Scenario
Networking Requirements
In Figure 1-2260, CE1 and CE2 belong to the same L3VPN and access the public network through PE1 and PE2, respectively. Various types of services are transmitted between CE1 and CE2. If a large amount of common service traffic is transmitted, the transmission efficiency of important services deteriorates. To prevent this problem, configure the CBTS function, which allows traffic of a specific service class to be transmitted along a specified tunnel.
In this example, tunnel 1 and tunnel 2 on PE1 transmit important services, and tunnel 3 transmits other packets.
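Conceptually, CBTS pairs two pieces of configuration: a traffic behavior marks matching packets with a service class, and each TE tunnel declares the service class it carries. The following minimal sketch simply places side by side the commands that are configured step by step later in this example (behavior1, Tunnel1, and the AF1 service class are this example's own names and values):
traffic behavior behavior1
 service-class af1 color green
#
interface Tunnel1
 mpls te service-class af1
Packets marked AF1 by behavior1 are then carried over Tunnel1, while Tunnel3, configured with mpls te service-class default, carries the remaining traffic.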
If the CBTS function is configured, do not configure the following functions at the same time:
- Mixed load balancing
- Dynamic load balancing
Configuration Roadmap
The configuration roadmap is as follows:
- Assign an IP address and its mask to every interface and configure a loopback interface address as an LSR ID on every node.
- Enable IS-IS globally, configure a network entity title (NET), specify the cost type, and enable IS-IS TE on each involved node. Enable IS-IS on interfaces, including loopback interfaces.
- Set MPLS label switching router (LSR) IDs for all devices and globally enable MPLS, MPLS TE, RSVP-TE, and CSPF.
- Enable MPLS, MPLS TE, and RSVP-TE on each interface.
- Configure the maximum reservable bandwidth and BC bandwidth for a link on the outbound interface of each node.
- Configure a tunnel interface on the ingress and configure the IP address, tunnel protocol, destination IP address, and tunnel bandwidth.
- Configure multi-field classification on PE1.
- Configure a VPN instance and apply a tunnel policy on PE1.
Data Preparation
To complete the configuration, you need the following data:
- IS-IS area ID, originating system ID, and IS-IS level of each node
- Maximum available link bandwidth and maximum reservable link bandwidth on each node
- Tunnel interface number, IP address, destination IP address, tunnel ID, and tunnel bandwidth on the tunnel interface
- Traffic classifier name, traffic behavior name, and traffic policy name
Procedure
- Assign an IP address and a mask to each interface.
Assign the IP address and mask for each interface according to Figure 1-2260. For configuration details, see Configuration Files in this section.
- Configure IS-IS to advertise routes.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] network-entity 00.0005.0000.0000.0001.00
[*PE1-isis-1] is-level level-2
[*PE1-isis-1] quit
[*PE1] interface gigabitethernet 0/1/0
[*PE1-GigabitEthernet0/1/0] isis enable 1
[*PE1-GigabitEthernet0/1/0] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] commit
[~PE1-LoopBack1] quit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] network-entity 00.0005.0000.0000.0002.00
[*P1-isis-1] is-level level-2
[*P1-isis-1] quit
[*P1] interface gigabitethernet 0/1/0
[*P1-GigabitEthernet0/1/0] isis enable 1
[*P1-GigabitEthernet0/1/0] quit
[*P1] interface gigabitethernet 0/1/8
[*P1-GigabitEthernet0/1/8] isis enable 1
[*P1-GigabitEthernet0/1/8] quit
[*P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] commit
[~P1-LoopBack1] quit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] network-entity 00.0005.0000.0000.0003.00
[*P2-isis-1] is-level level-2
[*P2-isis-1] quit
[*P2] interface gigabitethernet 0/1/0
[*P2-GigabitEthernet0/1/0] isis enable 1
[*P2-GigabitEthernet0/1/0] quit
[*P2] interface gigabitethernet 0/1/8
[*P2-GigabitEthernet0/1/8] isis enable 1
[*P2-GigabitEthernet0/1/8] quit
[*P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] commit
[~P2-LoopBack1] quit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] network-entity 00.0005.0000.0000.0004.00
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 0/1/0
[*PE2-GigabitEthernet0/1/0] isis enable 1
[*PE2-GigabitEthernet0/1/0] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
After completing the preceding configurations, run the display ip routing-table command on each node. The command output shows that the nodes have learned routes from each other. The following example uses the command output on PE1.
[~PE1] display ip routing-table
Route Flags: R - relay, D - download to fib ------------------------------------------------------------------------------ Routing Table : _public_ Destinations : 13 Routes : 13 Destination/Mask Proto Pre Cost Flags NextHop Interface 1.1.1.9/32 Direct 0 0 D 127.0.0.1 LoopBack0 2.2.2.9/32 ISIS 15 10 D 10.1.1.2 GigabitEthernet0/1/0 3.3.3.9/32 ISIS 15 20 D 10.1.1.2 GigabitEthernet0/1/0 4.4.4.9/32 ISIS 15 30 D 10.1.1.2 GigabitEthernet0/1/0 10.1.1.0/24 Direct 0 0 D 10.1.1.1 GigabitEthernet0/1/0 10.1.1.1/32 Direct 0 0 D 127.0.0.1 GigabitEthernet0/1/0 10.1.1.255/32 Direct 0 0 D 127.0.0.1 GigabitEthernet0/1/0 10.2.1.0/24 ISIS 15 20 D 10.1.1.2 GigabitEthernet0/1/0 10.3.1.0/24 ISIS 15 30 D 10.1.1.2 GigabitEthernet0/1/0 127.0.0.0/8 Direct 0 0 D 127.0.0.1 InLoopBack0 127.0.0.1/32 Direct 0 0 D 127.0.0.1 InLoopBack0 127.255.255.255/32 Direct 0 0 D 127.0.0.1 InLoopBack0 255.255.255.255/32 Direct 0 0 D 127.0.0.1 InLoopBack0
- Configure an EBGP peer relationship between each pair of a PE and a CE and an MP-IBGP peer relationship between two PEs.
For configuration details, see Configuration Files in this section.
- Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Enable MPLS, MPLS TE, and RSVP-TE globally and in the interface view on each node, and enable CSPF in the MPLS view of the ingress of a tunnel to be created.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] mpls te
[*PE1-mpls] mpls rsvp-te
[*PE1-mpls] mpls te cspf
[*PE1-mpls] quit
[*PE1] interface gigabitethernet 0/1/0
[*PE1-GigabitEthernet0/1/0] mpls
[*PE1-GigabitEthernet0/1/0] mpls te
[*PE1-GigabitEthernet0/1/0] mpls rsvp-te
[*PE1-GigabitEthernet0/1/0] commit
[~PE1-GigabitEthernet0/1/0] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] mpls te
[*P1-mpls] mpls rsvp-te
[*P1-mpls] quit
[*P1] interface gigabitethernet 0/1/0
[*P1-GigabitEthernet0/1/0] mpls
[*P1-GigabitEthernet0/1/0] mpls te
[*P1-GigabitEthernet0/1/0] mpls rsvp-te
[*P1-GigabitEthernet0/1/0] quit
[*P1] interface gigabitethernet 0/1/8
[*P1-GigabitEthernet0/1/8] mpls
[*P1-GigabitEthernet0/1/8] mpls te
[*P1-GigabitEthernet0/1/8] mpls rsvp-te
[*P1-GigabitEthernet0/1/8] commit
[~P1-GigabitEthernet0/1/8] quit
# Configure P2.
[~P2] mpls lsr-id 3.3.3.9
[*P2] mpls
[*P2-mpls] mpls te
[*P2-mpls] mpls rsvp-te
[*P2-mpls] quit
[*P2] interface gigabitethernet 0/1/0
[*P2-GigabitEthernet0/1/0] mpls
[*P2-GigabitEthernet0/1/0] mpls te
[*P2-GigabitEthernet0/1/0] mpls rsvp-te
[*P2-GigabitEthernet0/1/0] quit
[*P2] interface gigabitethernet 0/1/8
[*P2-GigabitEthernet0/1/8] mpls
[*P2-GigabitEthernet0/1/8] mpls te
[*P2-GigabitEthernet0/1/8] mpls rsvp-te
[*P2-GigabitEthernet0/1/8] commit
[~P2-GigabitEthernet0/1/8] quit
# Configure PE2.
[~PE2] mpls lsr-id 4.4.4.9
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] mpls rsvp-te
[*PE2-mpls] quit
[*PE2] interface gigabitethernet 0/1/0
[*PE2-GigabitEthernet0/1/0] mpls
[*PE2-GigabitEthernet0/1/0] mpls te
[*PE2-GigabitEthernet0/1/0] mpls rsvp-te
[*PE2-GigabitEthernet0/1/0] commit
[~PE2-GigabitEthernet0/1/0] quit
- Configure IS-IS TE.
# Configure PE1.
[~PE1] isis 1
[~PE1-isis-1] cost-style wide
[*PE1-isis-1] traffic-eng level-2
[*PE1-isis-1] commit
[~PE1-isis-1] quit
# Configure P1.
[~P1] isis 1
[~P1-isis-1] cost-style wide
[*P1-isis-1] traffic-eng level-2
[*P1-isis-1] commit
[~P1-isis-1] quit
# Configure P2.
[~P2] isis 1
[~P2-isis-1] cost-style wide
[*P2-isis-1] traffic-eng level-2
[*P2-isis-1] commit
[~P2-isis-1] quit
# Configure PE2.
[~PE2] isis 1
[~PE2-isis-1] cost-style wide
[*PE2-isis-1] traffic-eng level-2
[*PE2-isis-1] commit
[~PE2-isis-1] quit
- Set MPLS TE bandwidth attributes for links.
# Configure the maximum reservable bandwidth and BC0 bandwidth for the link on the outbound interface of each device along the tunnel. Note that all physical outbound interfaces in the PE1->PE2 and PE2->PE1 directions need to be configured.
# Configure PE1.
[~PE1] interface gigabitethernet 0/1/0
[~PE1-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*PE1-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*PE1-GigabitEthernet0/1/0] commit
[~PE1-GigabitEthernet0/1/0] quit
# Configure P1.
[~P1] interface gigabitethernet 0/1/0
[~P1-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*P1-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*P1-GigabitEthernet0/1/0] quit
[*P1] interface gigabitethernet 0/1/8
[*P1-GigabitEthernet0/1/8] mpls te bandwidth max-reservable-bandwidth 100000
[*P1-GigabitEthernet0/1/8] mpls te bandwidth bc0 100000
[*P1-GigabitEthernet0/1/8] commit
[~P1-GigabitEthernet0/1/8] quit
# Configure P2.
[~P2] interface gigabitethernet 0/1/0
[~P2-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*P2-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*P2-GigabitEthernet0/1/0] quit
[*P2] interface gigabitethernet 0/1/8
[*P2-GigabitEthernet0/1/8] mpls te bandwidth max-reservable-bandwidth 100000
[*P2-GigabitEthernet0/1/8] mpls te bandwidth bc0 100000
[*P2-GigabitEthernet0/1/8] commit
[~P2-GigabitEthernet0/1/8] quit
# Configure PE2.
[~PE2] interface gigabitethernet 0/1/0
[~PE2-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[*PE2-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000
[*PE2-GigabitEthernet0/1/0] commit
[~PE2-GigabitEthernet0/1/0] quit
- Configure QoS on each PE.
# Configure multi-field classification and set a service class for each type of service packet on PE1.
[~PE1] acl 2001
[*PE1-acl4-basic-2001] rule 10 permit source 10.40.0.0 0.255.255.255
[*PE1-acl4-basic-2001] quit
[*PE1] acl 2002
[*PE1-acl4-basic-2002] rule 20 permit source 10.50.0.0 0.255.255.255
[*PE1-acl4-basic-2002] quit
[*PE1] traffic classifier service1
[*PE1-classifier-service1] if-match acl 2001
[*PE1-classifier-service1] commit
[~PE1-classifier-service1] quit
[~PE1] traffic behavior behavior1
[*PE1-behavior-behavior1] service-class af1 color green
[*PE1-behavior-behavior1] commit
[*PE1-behavior-behavior1] quit
[*PE1] traffic classifier service2
[*PE1-classifier-service2] if-match acl 2002
[*PE1-classifier-service2] commit
[~PE1-classifier-service2] quit
[~PE1] traffic behavior behavior2
[*PE1-behavior-behavior2] service-class af2 color green
[*PE1-behavior-behavior2] commit
[~PE1-behavior-behavior2] quit
[~PE1] traffic policy policy1
[*PE1-trafficpolicy-policy1] classifier service1 behavior behavior1
[*PE1-trafficpolicy-policy1] classifier service2 behavior behavior2
[*PE1-trafficpolicy-policy1] commit
[~PE1-trafficpolicy-policy1] quit
[~PE1] interface gigabitethernet 0/1/8
[~PE1-GigabitEthernet0/1/8] traffic-policy policy1 inbound
[*PE1-GigabitEthernet0/1/8] commit
[~PE1-GigabitEthernet0/1/8] quit
- Configure MPLS TE tunnel interfaces.
# On the ingress of each tunnel, create a tunnel interface and set the IP address, tunnel protocol, destination IP address, tunnel ID, dynamic signaling protocol, tunnel bandwidth, and service classes for packets transmitted on the tunnel.
Run the mpls te service-class { service-class & <1-8> | default } command to configure the service class for packets carried by each tunnel.
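The & <1-8> notation in the syntax indicates that up to eight service-class values can be listed for a single tunnel. For example, a tunnel intended to carry both AF1 and AF2 traffic could be configured with a command such as the following (shown only to illustrate the syntax; it is not used in this example):
mpls te service-class af1 af2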
# Configure PE1.
[~PE1] interface tunnel1
[*PE1-Tunnel1] ip address unnumbered interface loopback 1
[*PE1-Tunnel1] tunnel-protocol mpls te
[*PE1-Tunnel1] destination 4.4.4.9
[*PE1-Tunnel1] mpls te tunnel-id 1
[*PE1-Tunnel1] mpls te bandwidth ct0 20000
[*PE1-Tunnel1] mpls te service-class af1
[*PE1-Tunnel1] commit
[~PE1-Tunnel1] quit
[~PE1] interface tunnel2
[*PE1-Tunnel2] ip address unnumbered interface loopback 1
[*PE1-Tunnel2] tunnel-protocol mpls te
[*PE1-Tunnel2] destination 4.4.4.9
[*PE1-Tunnel2] mpls te tunnel-id 2
[*PE1-Tunnel2] mpls te bandwidth ct0 20000
[*PE1-Tunnel2] mpls te service-class af2
[*PE1-Tunnel2] commit
[~PE1-Tunnel2] quit
[~PE1] interface tunnel3
[*PE1-Tunnel3] ip address unnumbered interface loopback 1
[*PE1-Tunnel3] tunnel-protocol mpls te
[*PE1-Tunnel3] destination 4.4.4.9
[*PE1-Tunnel3] mpls te tunnel-id 3
[*PE1-Tunnel3] mpls te bandwidth ct0 20000
[*PE1-Tunnel3] mpls te service-class default
[*PE1-Tunnel3] commit
[~PE1-Tunnel3] quit
[~PE1] tunnel-policy policy1
[*PE1-tunnel-policy-policy1] tunnel select-seq cr-lsp load-balance-number 3
[*PE1-tunnel-policy-policy1] commit
[~PE1-tunnel-policy-policy1] quit
# Configure PE2.
[~PE2] interface tunnel1
[*PE2-Tunnel1] ip address unnumbered interface loopback 1
[*PE2-Tunnel1] tunnel-protocol mpls te
[*PE2-Tunnel1] destination 1.1.1.9
[*PE2-Tunnel1] mpls te tunnel-id 1
[*PE2-Tunnel1] mpls te bandwidth ct0 20000
[*PE2-Tunnel1] commit
[~PE2-Tunnel1] quit
[~PE2] tunnel-policy policy1
[*PE2-tunnel-policy-policy1] tunnel select-seq cr-lsp load-balance-number 3
[*PE2-tunnel-policy-policy1] commit
[~PE2-tunnel-policy-policy1] quit
- Configure L3VPN access on each PE.
# Configure PE1.
[~PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpn1-af-ipv4] tnl-policy policy1
[*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpn1-af-ipv4] commit
[~PE1-vpn-instance-vpn1-af-ipv4] quit
[~PE1-vpn-instance-vpn1] quit
[~PE1] interface gigabitethernet 0/1/8
[~PE1-GigabitEthernet0/1/8] ip binding vpn-instance vpn1
[~PE1-GigabitEthernet0/1/8] commit
# Configure PE2.
[~PE2] ip vpn-instance vpn2
[*PE2-vpn-instance-vpn2] ipv4-family
[*PE2-vpn-instance-vpn2-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpn2-af-ipv4] tnl-policy policy1
[*PE2-vpn-instance-vpn2-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpn2-af-ipv4] commit
[~PE2-vpn-instance-vpn2-af-ipv4] quit
[~PE2-vpn-instance-vpn2] quit
[~PE2] interface gigabitethernet 0/1/8
[~PE2-GigabitEthernet0/1/8] ip binding vpn-instance vpn2
[~PE2-GigabitEthernet0/1/8] commit
Configuration Files
PE1 configuration file
# sysname PE1 # mpls lsr-id 1.1.1.9 # mpls mpls te mpls te cspf mpls rsvp-te # ip vpn-instance vpn1 ipv4-family route-distinguisher 100:1 tnl-policy policy1 apply-label per-instance vpn-target 111:1 export-extcommunity vpn-target 111:1 import-extcommunity # isis 1 is-level level-2 cost-style wide traffic-eng level-2 network-entity 00.0005.0000.0000.0001.00 # acl number 2001 rule 10 permit source 10.40.0.0 0.255.255.255 # acl number 2002 rule 20 permit source 10.50.0.0 0.255.255.255 # traffic classifier service1 if-match acl 2001 # traffic classifier service2 if-match acl 2002 # traffic behavior behavior1 service-class af1 color green # traffic behavior behavior2 service-class af2 color green # traffic policy policy1 classifier service1 behavior behavior1 classifier service2 behavior behavior2 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.1 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 isis enable 1 mpls rsvp-te # interface GigabitEthernet0/1/8 undo shutdown ip binding vpn-instance vpn1 ip address 10.10.1.1 255.255.255.0 traffic-policy policy1 inbound # interface LoopBack1 ip address 1.1.1.9 255.255.255.255 isis enable 1 # interface Tunnel1 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 4.4.4.9 mpls te bandwidth ct0 20000 mpls te tunnel-id 1 mpls te service-class af1 # interface Tunnel2 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 4.4.4.9 mpls te bandwidth ct0 20000 mpls te tunnel-id 2 mpls te service-class af2 # interface Tunnel3 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 4.4.4.9 mpls te bandwidth ct0 20000 mpls te tunnel-id 3 mpls te service-class default # bgp 100 peer 4.4.4.9 as-number 100 peer 4.4.4.9 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 4.4.4.9 enable # ipv4-family vpnv4 policy vpn-target peer 4.4.4.9 enable # ipv4-family vpn-instance vpn1 peer 10.10.1.2 as-number 65410 # tunnel-policy policy1 tunnel select-seq cr-lsp load-balance-number 3 # return
P1 configuration file
# sysname P1 # mpls lsr-id 2.2.2.9 # mpls mpls te mpls rsvp-te # isis 1 is-level level-2 cost-style wide traffic-eng level-2 network-entity 00.0005.0000.0000.0002.00 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.1.1.2 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 isis enable 1 mpls rsvp-te # interface GigabitEthernet0/1/8 undo shutdown ip address 10.2.1.1 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 isis enable 1 mpls rsvp-te # interface LoopBack1 ip address 2.2.2.9 255.255.255.255 isis enable 1 # return
P2 configuration file
# sysname P2 # mpls lsr-id 3.3.3.9 # mpls mpls te mpls rsvp-te # isis 1 is-level level-2 cost-style wide traffic-eng level-2 network-entity 00.0005.0000.0000.0003.00 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.3.1.1 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 isis enable 1 mpls rsvp-te # interface GigabitEthernet0/1/8 undo shutdown ip address 10.2.1.2 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 isis enable 1 mpls rsvp-te # interface LoopBack1 ip address 3.3.3.9 255.255.255.255 isis enable 1 # return
PE2 configuration file
# sysname PE2 # mpls lsr-id 4.4.4.9 # mpls mpls te mpls rsvp-te # ip vpn-instance vpn2 ipv4-family route-distinguisher 200:1 tnl-policy policy1 apply-label per-instance vpn-target 111:1 export-extcommunity vpn-target 111:1 import-extcommunity # isis 1 is-level level-2 cost-style wide traffic-eng level-2 network-entity 00.0005.0000.0000.0004.00 # interface GigabitEthernet0/1/0 undo shutdown ip address 10.3.1.2 255.255.255.0 mpls mpls te mpls te bandwidth max-reservable-bandwidth 100000 mpls te bandwidth bc0 100000 isis enable 1 mpls rsvp-te # interface GigabitEthernet0/1/8 undo shutdown ip binding vpn-instance vpn2 ip address 10.11.1.1 255.255.255.0 # interface LoopBack1 ip address 4.4.4.9 255.255.255.255 isis enable 1 # interface Tunnel1 ip address unnumbered interface LoopBack1 tunnel-protocol mpls te destination 1.1.1.9 mpls te bandwidth ct0 20000 mpls te tunnel-id 1 # bgp 100 peer 1.1.1.9 as-number 100 peer 1.1.1.9 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.9 enable # ipv4-family vpnv4 policy vpn-target peer 1.1.1.9 enable # ipv4-family vpn-instance vpn2 peer 10.11.1.2 as-number 65420 # tunnel-policy policy1 tunnel select-seq cr-lsp load-balance-number 3 # return
Example for Configuring CBTS in an L3VPN over LDP over TE Scenario
This section provides an example for configuring a CBTS in an L3VPN over LDP over TE scenario.
Networking Requirements
In Figure 1-2261, CE1 and CE2 belong to the same L3VPN and access the public network through PE1 and PE2, respectively. Various types of services are transmitted between CE1 and CE2. If a large amount of common service traffic is transmitted, the transmission efficiency of important services deteriorates. To prevent this problem, configure the CBTS function, which allows traffic of a specific service class to be transmitted along a specified tunnel.
In this example, tunnel 1 transmits important services, and tunnel 2 transmits other packets.
Configuration Notes
When configuring a TE tunnel group in an L3VPN over LDP over TE scenario, note that the destination IP address of each tunnel must be the LSR ID of the tunnel's egress node.
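For instance, in this example the TE tunnel's egress P2 uses LSR ID 4.4.4.4, so the tunnel interface on the ingress P1 must specify that address as its destination. A minimal sketch (the tunnel interface number is only a placeholder):
interface Tunnel1
 tunnel-protocol mpls te
 destination 4.4.4.4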
Configuration Roadmap
The configuration roadmap is as follows:
- Configure the IP address of a loopback interface as the LSR ID on each LSR and configure an IGP to advertise routes.
- Configure OSPF TE in each TE-aware area, create an MPLS TE tunnel, and specify the service class for packets that can be transmitted on the tunnel.
- Enable MPLS LDP in each non-TE-aware area and configure a remote LDP peer at the edge of the TE-aware area.
- Configure the forwarding adjacency (see the sketch after this roadmap).
- Configure multi-field traffic classification on the nodes connected to the L3VPN and configure behavior aggregate classification on LDP over TE links.
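As a hedged sketch of the forwarding adjacency item above: elsewhere in this document, a TE tunnel is advertised into the IGP with the mpls te igp advertise command, usually together with an IGP metric for the tunnel. On the tunnel ingress this would look roughly as follows (the tunnel interface number and metric value are placeholders, not values prescribed by this example):
interface Tunnel1
 mpls te igp advertise
 mpls te igp metric absolute 1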
Data Preparation
To complete the configuration, you need the following data:
- OSPF process ID and OSPF area ID
- Policy for triggering LSP establishment
- Name and IP address of each remote LDP peer of P1 and P2
- Link bandwidth attributes of the tunnel
- Tunnel interface number, IP address, destination IP address, tunnel ID, tunnel signaling protocol (in this example, the default protocol, RSVP-TE, is used), tunnel bandwidth, TE metric value, and link cost on each P
- Multi-field classifier name and traffic policy name
Procedure
- Assign an IP address to each interface.
Assign an IP address to each interface, including the loopback interfaces, according to Figure 1-2261. For configuration details, see Configuration Files in this section.
- Enable OSPF to advertise the route of the segment connected to each interface and the host route destined for each LSR ID. For configuration details, see Configuration Files in this section.
- Configure an EBGP peer relationship between each pair of a PE and a CE and an MP-IBGP peer relationship between two PEs.
For configuration details, see Configuration Files in this section.
- Enable MPLS on each LSR. Enable LDP to establish LDP sessions between PE1 and P1 and between P2 and PE2. Enable RSVP-TE to establish RSVP neighbor relationships between P1 and P3 and between P3 and P2.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] lsp-trigger all
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet 0/1/0
[*PE1-GigabitEthernet0/1/0] mpls
[*PE1-GigabitEthernet0/1/0] mpls ldp
[*PE1-GigabitEthernet0/1/0] commit
[~PE1-GigabitEthernet0/1/0] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.2
[*P1] mpls
[*P1-mpls] mpls te
[*P1-mpls] lsp-trigger all
[*P1-mpls] mpls rsvp-te
[*P1-mpls] mpls te cspf
[*P1-mpls] quit
[*P1] mpls ldp
[*P1-mpls-ldp] quit
[*P1] interface gigabitethernet 0/1/0
[*P1-GigabitEthernet0/1/0] mpls
[*P1-GigabitEthernet0/1/0] mpls ldp
[*P1-GigabitEthernet0/1/0] quit
[*P1] interface gigabitethernet 0/1/8
[*P1-GigabitEthernet0/1/8] mpls
[*P1-GigabitEthernet0/1/8] mpls te
[*P1-GigabitEthernet0/1/8] mpls rsvp-te
[*P1-GigabitEthernet0/1/8] commit
[~P1-GigabitEthernet0/1/8] quit
# Configure P3.
[~P3] mpls lsr-id 3.3.3.3
[*P3] mpls
[*P3-mpls] mpls te
[*P3-mpls] mpls rsvp-te
[*P3-mpls] quit
[*P3] interface gigabitethernet 0/1/0
[*P3-GigabitEthernet0/1/0] mpls
[*P3-GigabitEthernet0/1/0] mpls te
[*P3-GigabitEthernet0/1/0] mpls rsvp-te
[*P3-GigabitEthernet0/1/0] quit
[*P3] interface gigabitethernet 0/1/8
[*P3-GigabitEthernet0/1/8] mpls
[*P3-GigabitEthernet0/1/8] mpls te
[*P3-GigabitEthernet0/1/8] mpls rsvp-te
[*P3-GigabitEthernet0/1/8] commit
[~P3-GigabitEthernet0/1/8] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.4
[*P2] mpls
[*P2-mpls] mpls te
[*P2-mpls] lsp-trigger all
[*P2-mpls] mpls rsvp-te
[*P2-mpls] mpls te cspf
[*P2-mpls] quit
[*P2] mpls ldp
[*P2-mpls-ldp] quit
[*P2] interface gigabitethernet 0/1/0
[*P2-GigabitEthernet0/1/0] mpls
[*P2-GigabitEthernet0/1/0] mpls te
[*P2-GigabitEthernet0/1/0] mpls rsvp-te
[*P2-GigabitEthernet0/1/0] quit
[*P2] interface gigabitethernet 0/1/8
[*P2-GigabitEthernet0/1/8] mpls
[*P2-GigabitEthernet0/1/8] mpls ldp
[*P2-GigabitEthernet0/1/8] commit
[~P2-GigabitEthernet0/1/8] quit
# Configure PE2.
[~PE2] mpls lsr-id 5.5.5.5
[*PE2] mpls
[*PE2-mpls] lsp-trigger all
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet 0/1/0
[*PE2-GigabitEthernet0/1/0] mpls
[*PE2-GigabitEthernet0/1/0] mpls ldp
[*PE2-GigabitEthernet0/1/0] commit
[~PE2-GigabitEthernet0/1/0] quit
After completing the preceding configurations, the local LDP sessions have been successfully established between PE1 and P1 and between P2 and PE2.
# Run the display mpls ldp session command on PE1, P1, P2, or PE2 to view information about the established LDP session.
[~PE1] display mpls ldp session
 LDP Session(s) in Public Network
 Codes: LAM(Label Advertisement Mode), SsnAge Unit(DDDD:HH:MM)
 A '*' before a session means the session is being deleted.
 --------------------------------------------------------------------------
 PeerID             Status      LAM  SsnRole  SsnAge       KASent/Rcv
 --------------------------------------------------------------------------
 2.2.2.2:0          Operational DU   Passive  0000:00:05   23/23
 --------------------------------------------------------------------------
 TOTAL: 1 Session(s) Found.
# Run the display mpls ldp peer command on PE1 to view information about the established LDP peer.
[~PE1] display mpls ldp peer
 LDP Peer Information in Public network
 A '*' before a peer means the peer is being deleted.
 -------------------------------------------------------------------------
 PeerID             TransportAddress   DiscoverySource
 -------------------------------------------------------------------------
 2.2.2.2:0          2.2.2.2            GigabitEthernet0/1/0
 -------------------------------------------------------------------------
 TOTAL: 1 Peer(s) Found.
# Run the display mpls lsp command to view LDP LSP information. The command output shows that no RSVP-TE tunnel has been established yet. The following example uses the command output on PE1.
[~PE1] display mpls lsp
----------------------------------------------------------------------
LSP Information: LDP LSP
----------------------------------------------------------------------
FEC                In/Out Label    In/Out IF                      Vrf Name
1.1.1.1/32         3/NULL          GE0/1/0/-
2.2.2.2/32         NULL/3          -/GE0/1/0
2.2.2.2/32         1024/3          -/GE0/1/0
10.1.1.0/24        3/NULL          GE0/1/0/-
10.2.1.0/24        NULL/3          -/GE0/1/0
10.2.1.0/24        1025/3          -/GE0/1/0
- Configure a remote LDP session between P1 and P2.
# Configure P1.
[~P1] mpls ldp remote-peer lsrd
[*P1-mpls-ldp-remote-lsrd] remote-ip 4.4.4.4
[*P1-mpls-ldp-remote-lsrd] commit
[~P1-mpls-ldp-remote-lsrd] quit
# Configure P2.
[~P2] mpls ldp remote-peer lsrb
[*P2-mpls-ldp-remote-lsrb] remote-ip 2.2.2.2
[*P2-mpls-ldp-remote-lsrb] commit
[~P2-mpls-ldp-remote-lsrb] quit
After completing the preceding configurations, a remote LDP session is set up between P1 and P2. Run the display mpls ldp remote-peer command on P1 or P2 to view information about the remote session entity. The following example uses the command output on P1.
[~P1] display mpls ldp remote-peer lsrd
 LDP Remote Entity Information
 ------------------------------------------------------------------------------
 Remote Peer Name               : lsrd
 Remote Peer IP                 : 4.4.4.4
 LDP ID                         : 2.2.2.2:0
 Transport Address              : 2.2.2.2
 Entity Status                  : Active
 Configured Keepalive Hold Timer: 45 Sec
 Configured Keepalive Send Timer: ----
 Configured Hello Hold Timer    : 45 Sec
 Negotiated Hello Hold Timer    : 45 Sec
 Configured Hello Send Timer    : ----
 Configured Delay Timer         : ----
 Hello Packet sent/received     : 425/382
 ------------------------------------------------------------------------------
 TOTAL: 1 Remote-Peer(s) Found.
- Configure bandwidth attributes on each outbound interface along the link of the TE tunnel.
# Configure P1.
[~P1] interface gigabitethernet 0/1/8
[~P1-GigabitEthernet0/1/8] mpls te bandwidth max-reservable-bandwidth 20000
[*P1-GigabitEthernet0/1/8] mpls te bandwidth bc0 20000
[*P1-GigabitEthernet0/1/8] commit
[~P1-GigabitEthernet0/1/8] quit
# Configure P3.
[~P3] interface gigabitethernet 0/1/0
[~P3-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 20000
[*P3-GigabitEthernet0/1/0] mpls te bandwidth bc0 20000
[*P3-GigabitEthernet0/1/0] quit
[*P3] interface gigabitethernet 0/1/8
[*P3-GigabitEthernet0/1/8] mpls te bandwidth max-reservable-bandwidth 20000
[*P3-GigabitEthernet0/1/8] mpls te bandwidth bc0 20000
[*P3-GigabitEthernet0/1/8] commit
[~P3-GigabitEthernet0/1/8] quit
# Configure P2.
[~P2] interface gigabitethernet 0/1/0
[~P2-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 20000
[*P2-GigabitEthernet0/1/0] mpls te bandwidth bc0 20000
[*P2-GigabitEthernet0/1/0] commit
[~P2-GigabitEthernet0/1/0] quit
- Configure L3VPN access on PE1 and PE2 and configure multi-field classification on the inbound interface of PE1.
# Configure PE1.
[~PE1] ip vpn-instance VPNA
[*PE1-vpn-instance-VPNA] ipv4-family
[*PE1-vpn-instance-VPNA-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-VPNA-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-VPNA-af-ipv4] quit
[*PE1-vpn-instance-VPNA] quit
[*PE1] interface gigabitethernet 0/1/8
[*PE1-GigabitEthernet0/1/8] ip binding vpn-instance VPNA
[*PE1-GigabitEthernet0/1/8] ip address 10.10.1.1 255.255.255.0
[*PE1-GigabitEthernet0/1/8] quit
[*PE1] acl 2001
[*PE1-acl4-basic-2001] rule 10 permit source 10.40.0.0 0.255.255.255
[*PE1-acl4-basic-2001] quit
[*PE1] acl 2002
[*PE1-acl4-basic-2002] rule 20 permit source 10.50.0.0 0.255.255.255
[*PE1-acl4-basic-2002] quit
[*PE1] traffic classifier service1
[*PE1-classifier-service1] if-match acl 2001
[*PE1-classifier-service1] commit
[~PE1-classifier-service1] quit
[~PE1] traffic behavior behavior1
[*PE1-behavior-behavior1] service-class af1 color green
[*PE1-behavior-behavior1] commit
[~PE1-behavior-behavior1] quit
[~PE1] traffic classifier service2
[*PE1-classifier-service2] if-match acl 2002
[*PE1-classifier-service2] commit
[~PE1-classifier-service2] quit
[~PE1] traffic behavior behavior2
[*PE1-behavior-behavior2] service-class af2 color green
[*PE1-behavior-behavior2] commit
[~PE1-behavior-behavior2] quit
[~PE1] traffic policy test
[*PE1-trafficpolicy-test] classifier service1 behavior behavior1
[*PE1-trafficpolicy-test] classifier service2 behavior behavior2
[*PE1-trafficpolicy-test] commit
[~PE1-trafficpolicy-test] quit
[~PE1] interface gigabitethernet 0/1/8
[~PE1-GigabitEthernet0/1/8] traffic-policy test inbound
[~PE1-GigabitEthernet0/1/8] commit
[~PE1-GigabitEthernet0/1/8] quit
# Configure PE2.
[~PE2] ip vpn-instance VPNB
[*PE2-vpn-instance-VPNB] ipv4-family
[*PE2-vpn-instance-VPNB-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-VPNB-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-VPNB-af-ipv4] quit
[*PE2-vpn-instance-VPNB] quit
[*PE2] interface gigabitethernet 0/1/8
[*PE2-GigabitEthernet0/1/8] ip binding vpn-instance VPNB
[*PE2-GigabitEthernet0/1/8] ip address 10.11.1.1 255.255.255.0
[*PE2-GigabitEthernet0/1/8] commit
[~PE2-GigabitEthernet0/1/8] quit
- Configure behavior aggregate classification on interfaces connecting PE1 to P1.
# Configure PE1.
[~PE1] interface gigabitethernet 0/1/0
[~PE1-GigabitEthernet0/1/0] trust upstream default
[*PE1-GigabitEthernet0/1/0] commit
[~PE1-GigabitEthernet0/1/0] quit
# Configure P1.
[~P1] interface gigabitethernet 0/1/0
[~P1-GigabitEthernet0/1/0] trust upstream default
[*P1-GigabitEthernet0/1/0] commit
[~P1-GigabitEthernet0/1/0] quit
- Configure a TE tunnel that originates from P1 and is destined for P2, and set the service class for each type of packet allowed to pass through the tunnel.
Run the mpls te service-class { service-class & <1-8> | default } command to configure the service classes for packets transmitted along the tunnel.
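For reference, both forms of the command appear in the P1 configuration file at the end of this section: Tunnel1 carries AF1 and AF2 traffic, and Tunnel2 carries all remaining traffic.
interface Tunnel1
 mpls te service-class af1 af2
#
interface Tunnel2
 mpls te service-class default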
# On P1, enable the IGP shortcut function on the tunnel interface and adjust the metric value to ensure that traffic destined for P2 or PE2 passes through the tunnel.
[~P1] interface tunnel1
[*P1-Tunnel1] ip address unnumbered interface LoopBack1
[*P1-Tunnel1] tunnel-protocol mpls te
[*P1-Tunnel1] destination 4.4.4.4
[*P1-Tunnel1] mpls te tunnel-id 100
[*P1-Tunnel1] mpls te bandwidth ct0 10000
[*P1-Tunnel1] mpls te igp shortcut
[*P1-Tunnel1] mpls te igp metric absolute 1
[*P1-Tunnel1] mpls te service-class af1 af2
[*P1-Tunnel1] quit
[*P1] interface tunnel2
[*P1-Tunnel2] ip address unnumbered interface LoopBack1
[*P1-Tunnel2] tunnel-protocol mpls te
[*P1-Tunnel2] destination 4.4.4.4
[*P1-Tunnel2] mpls te tunnel-id 200
[*P1-Tunnel2] mpls te bandwidth ct0 10000
[*P1-Tunnel2] mpls te igp shortcut
[*P1-Tunnel2] mpls te igp metric absolute 1
[*P1-Tunnel2] mpls te service-class default
[*P1-Tunnel2] quit
[*P1] ospf 1
[*P1-ospf-1] area 0
[*P1-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*P1-ospf-1-area-0.0.0.0] quit
[*P1-ospf-1] enable traffic-adjustment advertise
[*P1-ospf-1] commit
- Configure a tunnel that originates from P2 and is destined for P1.
# On P2, enable the IGP shortcut function on the tunnel interface and adjust the IGP metric value to ensure that traffic destined for PE1 or P1 passes through the tunnel.
[~P2] interface tunnel1
[*P2-Tunnel1] ip address unnumbered interface LoopBack1
[*P2-Tunnel1] tunnel-protocol mpls te
[*P2-Tunnel1] destination 2.2.2.2
[*P2-Tunnel1] mpls te tunnel-id 101
[*P2-Tunnel1] mpls te bandwidth ct0 10000
[*P2-Tunnel1] mpls te igp shortcut
[*P2-Tunnel1] mpls te igp metric absolute 1
[*P2-Tunnel1] quit
[*P2] ospf 1
[*P2-ospf-1] area 0
[*P2-ospf-1-area-0.0.0.0] network 4.4.4.4 0.0.0.0
[*P2-ospf-1-area-0.0.0.0] quit
[*P2-ospf-1] enable traffic-adjustment advertise
[*P2-ospf-1] quit
[*P2] commit
Configuration Files
PE1 configuration file
#
sysname PE1
#
ip vpn-instance VPNA
 ipv4-family
  route-distinguisher 100:1
  apply-label per-instance
  vpn-target 111:1 export-extcommunity
  vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
 lsp-trigger all
#
mpls ldp
#
acl number 2001
 rule 10 permit source 10.40.0.0 0.255.255.255
#
acl number 2002
 rule 20 permit source 10.50.0.0 0.255.255.255
#
traffic classifier service1
 if-match acl 2001
#
traffic classifier service2
 if-match acl 2002
#
traffic behavior behavior1
 service-class af1 color green
#
traffic behavior behavior2
 service-class af2 color green
#
traffic policy test share-mode
 classifier service1 behavior behavior1
 classifier service2 behavior behavior2
#
interface GigabitEthernet0/1/0
 undo shutdown
 ip address 10.1.1.1 255.255.255.0
 mpls
 mpls ldp
 trust upstream default
#
interface GigabitEthernet0/1/8
 undo shutdown
 ip binding vpn-instance VPNA
 ip address 10.10.1.1 255.255.255.0
 traffic-policy test inbound
#
interface LoopBack1
 ip address 1.1.1.1 255.255.255.255
#
bgp 100
 peer 5.5.5.5 as-number 100
 peer 5.5.5.5 connect-interface LoopBack1
 #
 ipv4-family unicast
  undo synchronization
  peer 5.5.5.5 enable
 #
 ipv4-family vpnv4
  policy vpn-target
  peer 5.5.5.5 enable
 #
 ipv4-family vpn-instance VPNA
  peer 10.10.1.2 as-number 65410
#
ospf 1
 area 0.0.0.0
  network 1.1.1.1 0.0.0.0
  network 10.1.1.0 0.0.0.255
#
return
P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.2
#
mpls
 mpls te
 mpls rsvp-te
 mpls te cspf
 lsp-trigger all
#
mpls ldp
 #
 ipv4-family
#
mpls ldp remote-peer lsrd
 remote-ip 4.4.4.4
#
interface GigabitEthernet0/1/0
 undo shutdown
 ip address 10.1.1.2 255.255.255.0
 mpls
 mpls ldp
 trust upstream default
#
interface GigabitEthernet0/1/8
 undo shutdown
 ip address 10.2.1.1 255.255.255.0
 mpls
 mpls te
 mpls te bandwidth max-reservable-bandwidth 20000
 mpls te bandwidth bc0 20000
 mpls rsvp-te
#
interface LoopBack1
 ip address 2.2.2.2 255.255.255.255
#
interface Tunnel1
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 4.4.4.4
 mpls te tunnel-id 100
 mpls te bandwidth ct0 10000
 mpls te igp shortcut
 mpls te igp metric absolute 1
 mpls te service-class af1 af2
#
interface Tunnel2
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 4.4.4.4
 mpls te tunnel-id 200
 mpls te bandwidth ct0 10000
 mpls te igp shortcut
 mpls te igp metric absolute 1
 mpls te service-class default
#
ospf 1
 opaque-capability enable
 enable traffic-adjustment advertise
 area 0.0.0.0
  network 2.2.2.2 0.0.0.0
  network 10.1.1.0 0.0.0.255
  network 10.2.1.0 0.0.0.255
  mpls-te enable
#
return
P3 configuration file
#
sysname P3
#
mpls lsr-id 3.3.3.3
#
mpls
 mpls te
 mpls rsvp-te
#
interface GigabitEthernet0/1/0
 undo shutdown
 ip address 10.2.1.2 255.255.255.0
 mpls
 mpls te
 mpls te bandwidth max-reservable-bandwidth 20000
 mpls te bandwidth bc0 20000
 mpls rsvp-te
#
interface GigabitEthernet0/1/8
 undo shutdown
 ip address 10.3.1.1 255.255.255.0
 mpls
 mpls te
 mpls te bandwidth max-reservable-bandwidth 20000
 mpls te bandwidth bc0 20000
 mpls rsvp-te
#
interface LoopBack1
 ip address 3.3.3.3 255.255.255.255
#
ospf 1
 opaque-capability enable
 area 0.0.0.0
  network 3.3.3.3 0.0.0.0
  network 10.2.1.0 0.0.0.255
  network 10.3.1.0 0.0.0.255
  mpls-te enable
#
return
P2 configuration file
#
sysname P2
#
mpls lsr-id 4.4.4.4
#
mpls
 mpls te
 mpls rsvp-te
 mpls te cspf
 lsp-trigger all
#
mpls ldp
 #
 ipv4-family
#
mpls ldp remote-peer lsrb
 remote-ip 2.2.2.2
#
interface GigabitEthernet0/1/0
 undo shutdown
 ip address 10.3.1.2 255.255.255.0
 mpls
 mpls te
 mpls te bandwidth max-reservable-bandwidth 20000
 mpls te bandwidth bc0 20000
 mpls rsvp-te
#
interface GigabitEthernet0/1/8
 undo shutdown
 ip address 10.4.1.2 255.255.255.0
 mpls
 mpls ldp
#
interface LoopBack1
 ip address 4.4.4.4 255.255.255.255
#
interface Tunnel1
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 2.2.2.2
 mpls te tunnel-id 101
 mpls te bandwidth ct0 10000
 mpls te igp shortcut
 mpls te igp metric absolute 1
#
ospf 1
 opaque-capability enable
 enable traffic-adjustment advertise
 area 0.0.0.0
  network 4.4.4.4 0.0.0.0
  network 10.3.1.0 0.0.0.255
  network 10.4.1.0 0.0.0.255
  mpls-te enable
#
return
PE2 configuration file
#
sysname PE2
#
ip vpn-instance VPNB
 ipv4-family
  route-distinguisher 200:1
  apply-label per-instance
  vpn-target 111:1 export-extcommunity
  vpn-target 111:1 import-extcommunity
#
mpls lsr-id 5.5.5.5
#
mpls
 lsp-trigger all
#
mpls ldp
#
interface GigabitEthernet0/1/0
 undo shutdown
 ip address 10.4.1.1 255.255.255.0
 mpls
 mpls ldp
#
interface GigabitEthernet0/1/8
 undo shutdown
 ip binding vpn-instance VPNB
 ip address 10.11.1.1 255.255.255.0
#
interface LoopBack1
 ip address 5.5.5.5 255.255.255.255
#
bgp 100
 peer 1.1.1.1 as-number 100
 peer 1.1.1.1 connect-interface LoopBack1
 #
 ipv4-family unicast
  undo synchronization
  peer 1.1.1.1 enable
 #
 ipv4-family vpnv4
  policy vpn-target
  peer 1.1.1.1 enable
 #
 ipv4-family vpn-instance VPNB
  peer 10.11.1.2 as-number 65420
#
ospf 1
 area 0.0.0.0
  network 5.5.5.5 0.0.0.0
  network 10.4.1.0 0.0.0.255
#
return
Example for Configuring CBTS in a VLL over TE Scenario
Networking Requirements
In Figure 1-2262, CE1 and CE2 belong to the same VLL network. They access the MPLS backbone network through PE1 and PE2, respectively. OSPF is used as an IGP on the MPLS backbone network.
Configure an LDP VLL and use the dynamic signaling protocol RSVP-TE to establish two MPLS TE tunnels between PE1 and PE2 to transmit VLL services. Assign each TE tunnel a specific priority. Enable behavior aggregate classification on the interfaces that receive VLL packets to trust 802.1p priority values so that they can forward VLL packets with a specific priority to a specific tunnel.
Establish tunnel TE1 with ID 100 over the path PE1 –> P1 –> PE2 and tunnel TE2 with ID 200 over the path PE1 –> P2 –> PE2. Configure service class AF1 on the TE1 tunnel interface and AF2 on the TE2 tunnel interface. This configuration allows PE1 to forward traffic with service class AF1 along tunnel TE1 and traffic with service class AF2 along tunnel TE2. The two tunnels can load-balance traffic based on priority values. The requirements for PE2 are similar to those for PE1.
Note that if multiple tunnels with AF1 are established between PE1 and PE2, packets mapped to AF1 are load-balanced along these tunnels.
When CBTS is configured, do not configure the following services:
- Dynamic load balancing
Configuration Roadmap
The configuration roadmap is as follows:
Enable a routing protocol on the MPLS backbone network devices (PEs and Ps) for them to communicate with each other and enable MPLS.
Establish MPLS TE tunnels and configure a tunnel policy. For details about how to configure an MPLS TE tunnel, see "MPLS TE Configuration" in HUAWEI NetEngine 8000 F1A series Router Configuration Guide - MPLS.
Enable MPLS Layer 2 virtual private network (L2VPN) on the PEs.
Create a VLL, configure LDP as a signaling protocol, and bind the VLL to an AC interface on each PE.
Configure MPLS TE tunnels to transmit VLL packets.
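Condensed from the PE1 configuration file at the end of this section, the CBTS-specific configuration that this roadmap produces on PE1 is the following: each tunnel interface is assigned a service class, the tunnel policy allows two CR-LSPs to be selected, and the AC interface trusts 802.1p priorities.
interface Tunnel10
 mpls te service-class af1
#
interface Tunnel11
 mpls te service-class af2
#
tunnel-policy p1
 tunnel select-seq cr-lsp load-balance-number 2
#
interface GigabitEthernet0/1/10.1
 mpls l2vc 4.4.4.9 1 tunnel-policy p1
 trust upstream default
 trust 8021p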
Data Preparation
To complete the configuration, you need the following data:
OSPF area enabled with TE
VLL name and VLL ID
IP addresses of peers and tunnel policy
Name of AC interfaces bound to a VLL
Interface number and IP address of each tunnel interface, as well as destination IP address, tunnel ID, tunnel signaling protocol (RSVP-TE), and tunnel bandwidth to be specified on each tunnel interface
Procedure
- Assign an IP address to each interface.
Configure the IP address and mask for each interface according to Figure 1-2262.
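A condensed PE1 excerpt (addresses as shown in the PE1 configuration file at the end of this section) is provided below; the other nodes are addressed analogously.
interface GigabitEthernet0/1/9
 ip address 10.1.2.1 255.255.255.0
#
interface GigabitEthernet0/1/11
 ip address 10.1.3.1 255.255.255.0
#
interface LoopBack1
 ip address 1.1.1.9 255.255.255.255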
- Enable MPLS, MPLS TE, MPLS RSVP-TE, and MPLS CSPF.
On the nodes along each MPLS TE tunnel, enable MPLS, MPLS TE, and MPLS RSVP-TE both in the system view and the interface view. On the ingress node of each tunnel, enable MPLS CSPF in the system view.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] mpls te
[*PE1-mpls] mpls rsvp-te
[*PE1-mpls] mpls te cspf
[*PE1-mpls] quit
[*PE1] interface gigabitethernet0/1/9
[*PE1-GigabitEthernet0/1/9] mpls
[*PE1-GigabitEthernet0/1/9] mpls te
[*PE1-GigabitEthernet0/1/9] mpls rsvp-te
[*PE1-GigabitEthernet0/1/9] quit
[*PE1] commit
[*PE1] interface gigabitethernet0/1/11
[*PE1-GigabitEthernet0/1/11] mpls
[*PE1-GigabitEthernet0/1/11] mpls te
[*PE1-GigabitEthernet0/1/11] mpls rsvp-te
[*PE1-GigabitEthernet0/1/11] quit
[*PE1] commit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] mpls te
[*P1-mpls] mpls rsvp-te
[*P1-mpls] quit
[*P1] interface gigabitethernet0/1/17
[*P1-GigabitEthernet0/1/17] mpls
[*P1-GigabitEthernet0/1/17] mpls te
[*P1-GigabitEthernet0/1/17] mpls rsvp-te
[*P1-GigabitEthernet0/1/17] quit
[*P1] interface gigabitethernet0/1/25
[*P1-GigabitEthernet0/1/25] mpls
[*P1-GigabitEthernet0/1/25] mpls te
[*P1-GigabitEthernet0/1/25] mpls rsvp-te
[*P1-GigabitEthernet0/1/25] quit
[*P1] commit
# Configure P2.
[~P2] mpls lsr-id 3.3.3.9
[*P2] mpls
[*P2-mpls] mpls te
[*P2-mpls] mpls rsvp-te
[*P2-mpls] quit
[*P2] interface gigabitethernet0/1/24
[*P2-GigabitEthernet0/1/24] mpls
[*P2-GigabitEthernet0/1/24] mpls te
[*P2-GigabitEthernet0/1/24] mpls rsvp-te
[*P2-GigabitEthernet0/1/24] quit
[*P2] interface gigabitethernet0/1/19
[*P2-GigabitEthernet0/1/19] mpls
[*P2-GigabitEthernet0/1/19] mpls te
[*P2-GigabitEthernet0/1/19] mpls rsvp-te
[*P2-GigabitEthernet0/1/19] quit
[*P2] commit
# Configure PE2.
[~PE2] mpls lsr-id 4.4.4.9
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] mpls rsvp-te
[*PE2-mpls] mpls te cspf
[*PE2-mpls] quit
[*PE2] interface gigabitethernet0/1/33
[*PE2-GigabitEthernet0/1/33] mpls
[*PE2-GigabitEthernet0/1/33] mpls te
[*PE2-GigabitEthernet0/1/33] mpls rsvp-te
[*PE2-GigabitEthernet0/1/33] quit
[*PE2] commit
[*PE2] interface gigabitethernet0/1/32
[*PE2-GigabitEthernet0/1/32] mpls
[*PE2-GigabitEthernet0/1/32] mpls te
[*PE2-GigabitEthernet0/1/32] mpls rsvp-te
[*PE2-GigabitEthernet0/1/32] quit
[*PE2] commit
- Enable OSPF and OSPF TE on the MPLS backbone network.
# Configure PE1.
[~PE1] ospf
[*PE1-ospf-1] opaque-capability enable
[*PE1-ospf-1] area 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] network 10.1.2.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 10.1.3.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] mpls-te enable
[*PE1-ospf-1-area-0.0.0.0] quit
[*PE1-ospf-1] quit
[*PE1] commit
# Configure P1.
[~P1] ospf
[*P1-ospf-1] opaque-capability enable
[*P1-ospf-1] area 0.0.0.0
[*P1-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
[*P1-ospf-1-area-0.0.0.0] network 10.1.4.0 0.0.0.255
[*P1-ospf-1-area-0.0.0.0] network 10.1.2.0 0.0.0.255
[*P1-ospf-1-area-0.0.0.0] mpls-te enable
[*P1-ospf-1-area-0.0.0.0] quit
[*P1-ospf-1] quit
[*P1] commit
# Configure P2.
[~P2] ospf
[*P2-ospf-1] opaque-capability enable
[*P2-ospf-1] area 0.0.0.0
[*P2-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[*P2-ospf-1-area-0.0.0.0] network 10.1.3.0 0.0.0.255
[*P2-ospf-1-area-0.0.0.0] network 10.1.5.0 0.0.0.255
[*P2-ospf-1-area-0.0.0.0] mpls-te enable
[*P2-ospf-1-area-0.0.0.0] quit
[*P2-ospf-1] quit
[*P2] commit
# Configure PE2.
[~PE2] ospf
[*PE2-ospf-1] opaque-capability enable
[*PE2-ospf-1] area 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] network 4.4.4.9 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] network 10.1.4.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 10.1.5.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] mpls-te enable
[*PE2-ospf-1-area-0.0.0.0] quit
[*PE2-ospf-1] quit
[*PE2] commit
- Configure tunnel interfaces.
# Create tunnel interfaces on PEs, configure MPLS TE as a tunneling protocol and RSVP-TE as a signaling protocol, and specify priorities.
# Configure PE1.
[~PE1] interface Tunnel 10
[*PE1-Tunnel10] ip address unnumbered interface loopback1
[*PE1-Tunnel10] tunnel-protocol mpls te
[*PE1-Tunnel10] destination 4.4.4.9
[*PE1-Tunnel10] mpls te tunnel-id 100
[*PE1-Tunnel10] mpls te service-class af1
[*PE1-Tunnel10] quit
[*PE1] commit
[~PE1] interface Tunnel 11
[*PE1-Tunnel11] ip address unnumbered interface loopback1
[*PE1-Tunnel11] tunnel-protocol mpls te
[*PE1-Tunnel11] destination 4.4.4.9
[*PE1-Tunnel11] mpls te tunnel-id 200
[*PE1-Tunnel11] mpls te service-class af2
[*PE1-Tunnel11] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface Tunnel 10
[*PE2-Tunnel10] ip address unnumbered interface loopback1
[*PE2-Tunnel10] tunnel-protocol mpls te
[*PE2-Tunnel10] destination 1.1.1.9
[*PE2-Tunnel10] mpls te tunnel-id 100
[*PE2-Tunnel10] mpls te service-class af1
[*PE2-Tunnel10] quit
[*PE2] commit
[~PE2] interface Tunnel 11
[*PE2-Tunnel11] ip address unnumbered interface loopback1
[*PE2-Tunnel11] tunnel-protocol mpls te
[*PE2-Tunnel11] destination 1.1.1.9
[*PE2-Tunnel11] mpls te tunnel-id 200
[*PE2-Tunnel11] mpls te service-class af2
[*PE2-Tunnel11] quit
[*PE2] commit
After completing the preceding configurations, run the display this interface command in the tunnel interface view. The command output shows that the Line protocol current state field is UP, indicating that an MPLS TE tunnel has been successfully established.
Run the display tunnel-info all command on PE1. The command output shows that two TE tunnels destined for PE2 with the LSR ID of 4.4.4.9 have been established. The command output on PE2 is similar to that on PE1.
<PE1> display tunnel-info all
* -> Allocated VC Token
Tunnel ID Type Destination Status
----------------------------------------------------------------------
0xc2060404 te 4.4.4.9 UP
0xc2060405 te 4.4.4.9 UP
- Configure MPLS TE explicit paths.
Specify an explicit path for each tunnel.
# Configure PE1. Specify a physical interface on the transit P (P1 for path t1 and P2 for path t2) as the first next hop and a physical interface on PE2 as the second next hop to ensure that the two tunnels are built over different links.
[~PE1] explicit-path t1
[*PE1-explicit-path-t1] next hop 10.1.2.2
[*PE1-explicit-path-t1] next hop 10.1.4.2
[*PE1-explicit-path-t1] quit
[*PE1] commit
[~PE1] explicit-path t2
[*PE1-explicit-path-t2] next hop 10.1.3.2
[*PE1-explicit-path-t2] next hop 10.1.5.2
[*PE1-explicit-path-t2] quit
[*PE1] commit
[~PE1] interface Tunnel 10
[*PE1-Tunnel10] mpls te path explicit-path t1
[*PE1-Tunnel10] quit
[*PE1] commit
[~PE1] interface Tunnel 11
[*PE1-Tunnel11] mpls te path explicit-path t2
[*PE1-Tunnel11] quit
[*PE1] commit
# Configure PE2. Specify a physical interface on the transit P (P1 for path t1 and P2 for path t2) as the first next hop and a physical interface on PE1 as the second next hop to ensure that the two tunnels are built over different links.
[~PE2] explicit-path t1
[*PE2-explicit-path-t1] next hop 10.1.4.1
[*PE2-explicit-path-t1] next hop 10.1.2.1
[*PE2-explicit-path-t1] quit
[*PE2] commit
[~PE2] explicit-path t2
[*PE2-explicit-path-t2] next hop 10.1.5.1
[*PE2-explicit-path-t2] next hop 10.1.3.1
[*PE2-explicit-path-t2] quit
[*PE2] commit
[~PE2] interface Tunnel 10
[*PE2-Tunnel10] mpls te path explicit-path t1
[*PE2-Tunnel10] quit
[*PE2] commit
[~PE2] interface Tunnel 11
[*PE2-Tunnel11] mpls te path explicit-path t2
[*PE2-Tunnel11] quit
[*PE2] commit
- Configure a remote LDP session.
Establish a remote LDP session between PE1 and PE2.
# Configure PE1.
[~PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] mpls ldp remote-peer DTB1
[*PE1-mpls-ldp-remote-DTB1] remote-ip 4.4.4.9
[*PE1-mpls-ldp-remote-DTB1] quit
[*PE1] commit
# Configure PE2.
[~PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] mpls ldp remote-peer DTB2
[*PE2-mpls-ldp-remote-DTB2] remote-ip 1.1.1.9
[*PE2-mpls-ldp-remote-DTB2] quit
[*PE2] commit
After completing the preceding configurations, run the display mpls ldp peer command. The command output shows that a remote LDP session has been established between the two PEs.
The following example uses the command output on PE1.
<PE1> display mpls ldp peer
 LDP Peer Information in Public network
 A '*' before a peer means the peer is being deleted.
 ------------------------------------------------------------------------------
 PeerID             TransportAddress   DiscoverySource
 ------------------------------------------------------------------------------
 4.4.4.9:0          4.4.4.9            Remote Peer : DTB1
 ------------------------------------------------------------------------------
 TOTAL: 1 Peer(s) Found.
- Configure a tunnel policy.
# Configure PE1.
[~PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq cr-lsp load-balance-number 2
[*PE1-tunnel-policy-p1] quit
[*PE1] commit
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq cr-lsp load-balance-number 2
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
- Enable MPLS L2VPN on PEs.
# Configure PE1.
[~PE1] mpls l2vpn
[*PE1-l2vpn] quit
[*PE1] commit
# Configure PE2.
[~PE2] mpls l2vpn
[*PE2-l2vpn] quit
[*PE2] commit
- Create a VLL on PEs and bind it to the tunnel policy.
# Configure PE1.
[~PE1] interface gigabitethernet0/1/10.1
[*PE1-GigabitEthernet0/1/10.1] vlan-type dot1q 10
[*PE1-GigabitEthernet0/1/10.1] mpls l2vc 4.4.4.9 1 tunnel-policy p1
[*PE1-GigabitEthernet0/1/10.1] trust upstream default
[*PE1-GigabitEthernet0/1/10.1] trust 8021p
[*PE1-GigabitEthernet0/1/10.1] undo shutdown
[*PE1-GigabitEthernet0/1/10.1] commit
# Configure PE2.
[~PE2] interface gigabitethernet0/1/10.1
[*PE2-GigabitEthernet0/1/10.1] vlan-type dot1q 10
[*PE2-GigabitEthernet0/1/10.1] mpls l2vc 1.1.1.9 1 tunnel-policy p1
[*PE2-GigabitEthernet0/1/10.1] trust upstream default
[*PE2-GigabitEthernet0/1/10.1] trust 8021p
[*PE2-GigabitEthernet0/1/10.1] undo shutdown
[*PE2-GigabitEthernet0/1/10.1] quit
# Configure CE1.
[~CE1] interface gigabitethernet0/1/0.1
[*CE1-GigabitEthernet0/1/0.1] shutdown
[*CE1-GigabitEthernet0/1/0.1] vlan-type dot1q 10
[*CE1-GigabitEthernet0/1/0.1] ip address 10.1.1.1 255.255.255.0
[*CE1-GigabitEthernet0/1/0.1] undo shutdown
[*CE1-GigabitEthernet0/1/0.1] quit
# Configure CE2.
[~CE2] interface gigabitethernet0/1/0.1
[*CE2-GigabitEthernet0/1/0.1] shutdown
[*CE2-GigabitEthernet0/1/0.1] vlan-type dot1q 10
[*CE2-GigabitEthernet0/1/0.1] ip address 10.1.1.2 255.255.255.0
[*CE2-GigabitEthernet0/1/0.1] undo shutdown
[*CE2-GigabitEthernet0/1/0.1] quit
- Verify the configuration.
After completing the preceding configurations, run the display mpls l2vc command on PE1.
The command output shows that the AC status and VC state fields both display up and that two tunnel IDs are listed, indicating that two tunnels have been established between PE1 and PE2.
<PE1> display mpls l2vc interface GigabitEthernet0/1/10.1
*client interface : GigabitEthernet0/1/10.1 is up
Administrator PW : no
session state : up
AC status : up
VC state : up
Label state : 0
Token state : 0
VC ID : 1
VC type : VLAN
destination : 4.4.4.9
local group ID : 0 remote group ID : 0
local VC label : 32768 remote VC label : 32768
local AC OAM State : up
local PSN OAM State : up
local forwarding state : forwarding
local status code : 0x0 (forwarding)
remote AC OAM state : up
remote PSN OAM state : up
remote forwarding state: forwarding
remote status code : 0x0 (forwarding)
ignore standby state : no
BFD for PW : unavailable
VCCV State : up
manual fault : not set
active state : active
forwarding entry : exist
OAM Protocol : --
OAM Status : --
OAM Fault Type : --
TTL Value : --
link state : up
local VC MTU : 1500 remote VC MTU : 1500
local VCCV : alert ttl lsp-ping bfd
remote VCCV : alert ttl lsp-ping bfd
local control word : disable remote control word : disable
tunnel policy name : p1
PW template name : --
primary or secondary : primary
load balance type : flow
Access-port : false
Switchover Flag : false
VC tunnel info : 2 tunnels
NO.0 TNL type : te , TNL ID : 0x00000000030000000a
NO.1 TNL type : te , TNL ID : 0x000000000300000003
create time : 0 days, 0 hours, 9 minutes, 58 seconds
up time : 0 days, 0 hours, 7 minutes, 41 seconds
last change time : 0 days, 0 hours, 7 minutes, 41 seconds
VC last up time : 2014/05/23 10:13:29
VC total up time : 0 days, 0 hours, 7 minutes, 41 seconds
CKey : 1
NKey : 989855833
PW redundancy mode : frr
AdminPw interface : --
AdminPw link state : --
Diffserv Mode : uniform
Service Class : --
Color : --
DomainId : --
Domain Name : --
Configuration Files
PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.9
#
mpls
 mpls te
 mpls rsvp-te
 mpls te cspf
#
explicit-path t1
 next hop 10.1.2.2
 next hop 10.1.4.2
#
explicit-path t2
 next hop 10.1.3.2
 next hop 10.1.5.2
#
mpls l2vpn
#
mpls ldp
 #
 ipv4-family
#
mpls ldp remote-peer DTB1
 remote-ip 4.4.4.9
#
interface GigabitEthernet0/1/10.1
 undo shutdown
 vlan-type dot1q 10
 mpls l2vc 4.4.4.9 1 tunnel-policy p1
 trust upstream default
 trust 8021p
#
interface GigabitEthernet0/1/9
 undo shutdown
 ip address 10.1.2.1 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface GigabitEthernet0/1/11
 undo shutdown
 ip address 10.1.3.1 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface LoopBack1
 ip address 1.1.1.9 255.255.255.255
#
interface Tunnel10
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 4.4.4.9
 mpls te path explicit-path t1
 mpls te tunnel-id 100
 mpls te service-class af1
#
interface Tunnel11
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 4.4.4.9
 mpls te path explicit-path t2
 mpls te tunnel-id 200
 mpls te service-class af2
#
ospf 1
 opaque-capability enable
 area 0.0.0.0
  network 1.1.1.9 0.0.0.0
  network 10.1.2.0 0.0.0.255
  network 10.1.3.0 0.0.0.255
  mpls-te enable
#
tunnel-policy p1
 tunnel select-seq cr-lsp load-balance-number 2
#
return
P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
 mpls te
 mpls rsvp-te
 mpls te cspf
#
interface GigabitEthernet0/1/17
 undo shutdown
 ip address 10.1.2.2 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface GigabitEthernet0/1/25
 undo shutdown
 ip address 10.1.4.1 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface LoopBack1
 ip address 2.2.2.9 255.255.255.255
#
ospf 1
 opaque-capability enable
 area 0.0.0.0
  network 2.2.2.9 0.0.0.0
  network 10.1.2.0 0.0.0.255
  network 10.1.4.0 0.0.0.255
  mpls-te enable
#
return
P2 configuration file
#
sysname P2
#
mpls lsr-id 3.3.3.9
#
mpls
 mpls te
 mpls rsvp-te
 mpls te cspf
#
interface GigabitEthernet0/1/19
 undo shutdown
 ip address 10.1.3.2 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface GigabitEthernet0/1/24
 undo shutdown
 ip address 10.1.5.1 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface LoopBack1
 ip address 3.3.3.9 255.255.255.255
#
ospf 1
 opaque-capability enable
 area 0.0.0.0
  network 3.3.3.9 0.0.0.0
  network 10.1.3.0 0.0.0.255
  network 10.1.5.0 0.0.0.255
  mpls-te enable
#
return
PE2 configuration file
#
sysname PE2
#
mpls lsr-id 4.4.4.9
#
mpls
 mpls te
 mpls rsvp-te
 mpls te cspf
#
explicit-path t1
 next hop 10.1.4.1
 next hop 10.1.2.1
#
explicit-path t2
 next hop 10.1.5.1
 next hop 10.1.3.1
#
mpls l2vpn
#
mpls ldp
 #
 ipv4-family
#
mpls ldp remote-peer DTB2
 remote-ip 1.1.1.9
#
interface GigabitEthernet0/1/10.1
 undo shutdown
 vlan-type dot1q 10
 mpls l2vc 1.1.1.9 1 tunnel-policy p1
 trust upstream default
 trust 8021p
#
interface GigabitEthernet0/1/33
 undo shutdown
 ip address 10.1.4.2 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface GigabitEthernet0/1/32
 undo shutdown
 ip address 10.1.5.2 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface LoopBack1
 ip address 4.4.4.9 255.255.255.255
#
interface Tunnel10
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 1.1.1.9
 mpls te path explicit-path t1
 mpls te tunnel-id 100
 mpls te service-class af1
#
interface Tunnel11
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 1.1.1.9
 mpls te path explicit-path t2
 mpls te tunnel-id 200
 mpls te service-class af2
#
ospf 1
 opaque-capability enable
 area 0.0.0.0
  network 4.4.4.9 0.0.0.0
  network 10.1.4.0 0.0.0.255
  network 10.1.5.0 0.0.0.255
  mpls-te enable
#
tunnel-policy p1
 tunnel select-seq cr-lsp load-balance-number 2
#
return
CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet0/1/0.1
 undo shutdown
 vlan-type dot1q 10
 ip address 10.1.1.1 255.255.255.0
#
return
CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet0/1/0.1
 undo shutdown
 vlan-type dot1q 10
 ip address 10.1.1.2 255.255.255.0
#
return
Example for Configuring CBTS in a VPLS over TE Scenario
Networking Requirements
In Figure 1-2263, CE1 and CE2 belong to the same VPLS network. They access the MPLS backbone network through PE1 and PE2, respectively. OSPF is used as an IGP on the MPLS backbone network.
It is required that an LDP VPLS be configured and that the dynamic signaling protocol RSVP-TE be used to establish two MPLS TE tunnels between PE1 and PE2 to transmit VPLS services. Each TE tunnel is assigned a specific priority. Interfaces that receive VPLS packets have behavior aggregate classification enabled and trust 802.1p priority values so that they can forward VPLS packets with a specific priority to a specific tunnel.
TE1 tunnel with ID 100 is established over the path PE1 –> P1 –> PE2, and TE2 tunnel with ID 200 is established over the path PE1 –> P2 –> PE2. AF1 is configured for the TE1 tunnel, and AF2 is configured for the TE2 tunnel. This configuration allows PE1 to forward traffic with service class AF1 along the TE1 tunnel and traffic with service class AF2 along the TE2 tunnel. The two tunnels can load-balance traffic based on priority values. The requirements of PE2 are similar to the requirements of PE1.
Note that if multiple tunnels with AF1 are established between PE1 and PE2, packets mapped to AF1 are load-balanced along these tunnels.
When CBTS is configured, do not configure the following services:
- Dynamic load balancing
Configuration Roadmap
The configuration roadmap is as follows:
Enable a routing protocol on the MPLS backbone network devices (PEs and Ps) for them to communicate with each other and enable MPLS.
Establish MPLS TE tunnels and configure a tunnel policy. For details about how to configure an MPLS TE tunnel, see "MPLS TE Configuration" in HUAWEI NetEngine 8000 F1A series Router Configuration Guide - MPLS.
Enable MPLS Layer 2 virtual private network (L2VPN) on the PEs.
Create a virtual switching instance (VSI), configure LDP as a signaling protocol, and bind the VSI to an AC interface on each PE.
Configure MPLS TE tunnels to transmit VSI packets.
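Condensed from the PE1 configuration file at the end of this section, the VPLS-specific part of this roadmap on PE1 is the following: the VSI is bound to the tunnel policy and to the AC interface, which trusts 802.1p priorities; the tunnel service classes are configured on the tunnel interfaces as in the procedure below.
vsi a2 static
 pwsignal ldp
  vsi-id 2
  peer 4.4.4.9 tnl-policy p1
#
interface GigabitEthernet0/1/10.1
 l2 binding vsi a2
 trust upstream default
 trust 8021p
#
tunnel-policy p1
 tunnel select-seq cr-lsp load-balance-number 2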
Data Preparation
To complete the configuration, you need the following data:
OSPF area enabled with TE
VSI name and VSI ID
IP addresses of peers and tunnel policy
Names of AC interfaces bound to the VSI
Interface number and IP address of each tunnel interface, as well as destination IP address, tunnel ID, tunnel signaling protocol (RSVP-TE), and tunnel bandwidth to be specified on each tunnel interface
Procedure
- Assign an IP address to each interface on the backbone network. For configuration details, see Configuration Files in this section.
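For reference, the P1 portion of this step, condensed from the P1 configuration file at the end of this section, is shown below; the other nodes are addressed analogously according to Figure 1-2263.
interface GigabitEthernet0/1/17
 ip address 10.1.2.2 255.255.255.0
#
interface GigabitEthernet0/1/25
 ip address 10.1.4.1 255.255.255.0
#
interface LoopBack1
 ip address 2.2.2.9 255.255.255.255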
- Enable MPLS, MPLS TE, MPLS RSVP-TE, and MPLS CSPF.
On the nodes along each MPLS TE tunnel, enable MPLS, MPLS TE, and MPLS RSVP-TE both in the system view and the interface view. On the ingress node of each tunnel, enable MPLS CSPF in the system view.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] mpls te
[*PE1-mpls] mpls rsvp-te
[*PE1-mpls] mpls te cspf
[*PE1-mpls] quit
[*PE1] interface gigabitethernet0/1/9
[*PE1-GigabitEthernet0/1/9] mpls
[*PE1-GigabitEthernet0/1/9] mpls te
[*PE1-GigabitEthernet0/1/9] mpls rsvp-te
[*PE1-GigabitEthernet0/1/9] quit
[*PE1] commit
[*PE1] interface gigabitethernet0/1/11
[*PE1-GigabitEthernet0/1/11] mpls
[*PE1-GigabitEthernet0/1/11] mpls te
[*PE1-GigabitEthernet0/1/11] mpls rsvp-te
[*PE1-GigabitEthernet0/1/11] quit
[*PE1] commit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] mpls te
[*P1-mpls] mpls rsvp-te
[*P1-mpls] quit
[*P1] interface gigabitethernet0/1/17
[*P1-GigabitEthernet0/1/17] mpls
[*P1-GigabitEthernet0/1/17] mpls te
[*P1-GigabitEthernet0/1/17] mpls rsvp-te
[*P1-GigabitEthernet0/1/17] quit
[*P1] interface gigabitethernet0/1/25
[*P1-GigabitEthernet0/1/25] mpls
[*P1-GigabitEthernet0/1/25] mpls te
[*P1-GigabitEthernet0/1/25] mpls rsvp-te
[*P1-GigabitEthernet0/1/25] quit
[*P1] commit
# Configure P2.
[~P2] mpls lsr-id 3.3.3.9
[*P2] mpls
[*P2-mpls] mpls te
[*P2-mpls] mpls rsvp-te
[*P2-mpls] quit
[*P2] interface gigabitethernet0/1/24
[*P2-GigabitEthernet0/1/24] mpls
[*P2-GigabitEthernet0/1/24] mpls te
[*P2-GigabitEthernet0/1/24] mpls rsvp-te
[*P2-GigabitEthernet0/1/24] quit
[*P2] interface gigabitethernet0/1/19
[*P2-GigabitEthernet0/1/19] mpls
[*P2-GigabitEthernet0/1/19] mpls te
[*P2-GigabitEthernet0/1/19] mpls rsvp-te
[*P2-GigabitEthernet0/1/19] quit
[*P2] commit
# Configure PE2.
[~PE2] mpls lsr-id 4.4.4.9
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] mpls rsvp-te
[*PE2-mpls] mpls te cspf
[*PE2-mpls] quit
[*PE2] interface gigabitethernet0/1/33
[*PE2-GigabitEthernet0/1/33] mpls
[*PE2-GigabitEthernet0/1/33] mpls te
[*PE2-GigabitEthernet0/1/33] mpls rsvp-te
[*PE2-GigabitEthernet0/1/33] quit
[*PE2] commit
[*PE2] interface gigabitethernet0/1/32
[*PE2-GigabitEthernet0/1/32] mpls
[*PE2-GigabitEthernet0/1/32] mpls te
[*PE2-GigabitEthernet0/1/32] mpls rsvp-te
[*PE2-GigabitEthernet0/1/32] quit
[*PE2] commit
- Enable OSPF and OSPF TE on the MPLS backbone network.
# Configure PE1.
[~PE1] ospf
[*PE1-ospf-1] opaque-capability enable
[*PE1-ospf-1] area 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] network 10.1.2.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 10.1.3.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] mpls-te enable
[*PE1-ospf-1-area-0.0.0.0] quit
[*PE1-ospf-1] quit
[*PE1] commit
# Configure P1.
[~P1] ospf
[*P1-ospf-1] opaque-capability enable
[*P1-ospf-1] area 0.0.0.0
[*P1-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
[*P1-ospf-1-area-0.0.0.0] network 10.1.4.0 0.0.0.255
[*P1-ospf-1-area-0.0.0.0] network 10.1.2.0 0.0.0.255
[*P1-ospf-1-area-0.0.0.0] mpls-te enable
[*P1-ospf-1-area-0.0.0.0] quit
[*P1-ospf-1] quit
[*P1] commit
# Configure P2.
[~P2] ospf
[*P2-ospf-1] opaque-capability enable
[*P2-ospf-1] area 0.0.0.0
[*P2-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[*P2-ospf-1-area-0.0.0.0] network 10.1.3.0 0.0.0.255
[*P2-ospf-1-area-0.0.0.0] network 10.1.5.0 0.0.0.255
[*P2-ospf-1-area-0.0.0.0] mpls-te enable
[*P2-ospf-1-area-0.0.0.0] quit
[*P2-ospf-1] quit
[*P2] commit
# Configure PE2.
[~PE2] ospf
[*PE2-ospf-1] opaque-capability enable
[*PE2-ospf-1] area 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] network 4.4.4.9 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] network 10.1.4.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 10.1.5.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] mpls-te enable
[*PE2-ospf-1-area-0.0.0.0] quit
[*PE2-ospf-1] quit
[*PE2] commit
- Configure tunnel interfaces.
# Create tunnel interfaces on PEs, configure MPLS TE as a tunneling protocol and RSVP-TE as a signaling protocol, and specify priorities.
# Configure PE1.
[~PE1] interface Tunnel 10
[*PE1-Tunnel10] ip address unnumbered interface loopback1
[*PE1-Tunnel10] tunnel-protocol mpls te
[*PE1-Tunnel10] destination 4.4.4.9
[*PE1-Tunnel10] mpls te tunnel-id 100
[*PE1-Tunnel10] mpls te service-class af1
[*PE1-Tunnel10] quit
[*PE1] commit
[~PE1] interface Tunnel 20
[*PE1-Tunnel20] ip address unnumbered interface loopback1
[*PE1-Tunnel20] tunnel-protocol mpls te
[*PE1-Tunnel20] destination 4.4.4.9
[*PE1-Tunnel20] mpls te tunnel-id 200
[*PE1-Tunnel20] mpls te service-class af2
[*PE1-Tunnel20] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface Tunnel 10
[*PE2-Tunnel10] ip address unnumbered interface loopback1
[*PE2-Tunnel10] tunnel-protocol mpls te
[*PE2-Tunnel10] destination 1.1.1.9
[*PE2-Tunnel10] mpls te tunnel-id 100
[*PE2-Tunnel10] mpls te service-class af1
[*PE2-Tunnel10] quit
[*PE2] commit
[~PE2] interface Tunnel 20
[*PE2-Tunnel20] ip address unnumbered interface loopback1
[*PE2-Tunnel20] tunnel-protocol mpls te
[*PE2-Tunnel20] destination 1.1.1.9
[*PE2-Tunnel20] mpls te tunnel-id 200
[*PE2-Tunnel20] mpls te service-class af2
[*PE2-Tunnel20] quit
[*PE2] commit
After completing the preceding configurations, run the display this interface command in the tunnel interface view. The command output shows that Line protocol current state is UP, indicating that an MPLS TE tunnel has been established.
Run the display tunnel-info all command on PE1. The command output shows that two TE tunnels destined for PE2 with the LSR ID of 4.4.4.9 have been established. The command output on PE2 is similar to that on PE1.
<PE1> display tunnel-info all
* -> Allocated VC Token
Tunnel ID Type Destination Status
----------------------------------------------------------------------
0xc2060404 te 4.4.4.9 UP
0xc2060405 te 4.4.4.9 UP
- Configure MPLS TE explicit paths.
Specify an explicit path for each tunnel.
# Configure PE1. Specify a physical interface on the transit P (P1 for path t1 and P2 for path t2) as the first next hop and a physical interface on PE2 as the second next hop to ensure that the two tunnels are built over different links.
[~PE1] explicit-path t1
[*PE1-explicit-path-t1] next hop 10.1.2.2
[*PE1-explicit-path-t1] next hop 10.1.4.2
[*PE1-explicit-path-t1] quit
[*PE1] commit
[~PE1] explicit-path t2
[*PE1-explicit-path-t2] next hop 10.1.3.2
[*PE1-explicit-path-t2] next hop 10.1.5.2
[*PE1-explicit-path-t2] quit
[*PE1] commit
[~PE1] interface Tunnel 10
[*PE1-Tunnel10] mpls te path explicit-path t1
[*PE1-Tunnel10] quit
[*PE1] commit
[~PE1] interface Tunnel 20
[*PE1-Tunnel20] mpls te path explicit-path t2
[*PE1-Tunnel20] quit
[*PE1] commit
# Configure PE2. Specify a physical interface on the transit P (P1 for path t1 and P2 for path t2) as the first next hop and a physical interface on PE1 as the second next hop to ensure that the two tunnels are built over different links.
[~PE2] explicit-path t1
[*PE2-explicit-path-t1] next hop 10.1.4.1
[*PE2-explicit-path-t1] next hop 10.1.2.1
[*PE2-explicit-path-t1] quit
[*PE2] commit
[~PE2] explicit-path t2
[*PE2-explicit-path-t2] next hop 10.1.5.1
[*PE2-explicit-path-t2] next hop 10.1.3.1
[*PE2-explicit-path-t2] quit
[*PE2] commit
[~PE2] interface Tunnel 10
[*PE2-Tunnel10] mpls te path explicit-path t1
[*PE2-Tunnel10] quit
[*PE2] commit
[~PE2] interface Tunnel 20
[*PE2-Tunnel20] mpls te path explicit-path t2
[*PE2-Tunnel20] quit
[*PE2] commit
- Configure a remote LDP session.
Establish a remote LDP session between PE1 and PE2.
# Configure PE1.
[~PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] mpls ldp remote-peer DTB1
[*PE1-mpls-ldp-remote-DTB1] remote-ip 4.4.4.9
[*PE1-mpls-ldp-remote-DTB1] quit
[*PE1] commit
# Configure PE2.
[~PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] mpls ldp remote-peer DTB2
[*PE2-mpls-ldp-remote-DTB2] remote-ip 1.1.1.9
[*PE2-mpls-ldp-remote-DTB2] quit
[*PE2] commit
After completing this step, run the display mpls ldp peer command. The command output shows that a remote LDP session has been established between the two PEs.
The following example uses the command output on PE1.
<PE1> display mpls ldp peer
 LDP Peer Information in Public network
 An asterisk (*) before a peer means the peer is being deleted.
 ------------------------------------------------------------------------------
 PeerID             TransportAddress   DiscoverySource
 ------------------------------------------------------------------------------
 4.4.4.9:0          4.4.4.9            Remote Peer : DTB1
 ------------------------------------------------------------------------------
 TOTAL: 1 Peer(s) Found.
- Configure a tunnel policy.
# Configure PE1.
[~PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq cr-lsp load-balance-number 2
[*PE1-tunnel-policy-p1] quit
[*PE1] commit
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq cr-lsp load-balance-number 2
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
- Enable MPLS L2VPN on PEs.
# Configure PE1.
[~PE1] mpls l2vpn
[*PE1-l2vpn] quit
[*PE1] commit
# Configure PE2.
[~PE2] mpls l2vpn
[*PE2-l2vpn] quit
[*PE2] commit
- Create a VSI and bind it to the tunnel policy on PEs.
# Configure PE1.
[~PE1] vsi a2 static
[*PE1-vsi-a2] pwsignal ldp
[*PE1-vsi-a2-ldp] vsi-id 2
[*PE1-vsi-a2-ldp] peer 4.4.4.9 tnl-policy p1
[*PE1-vsi-a2-ldp] quit
[*PE1-vsi-a2] quit
[*PE1] commit
# Configure PE2.
[~PE2] vsi a2 static
[*PE2-vsi-a2] pwsignal ldp
[*PE2-vsi-a2-ldp] vsi-id 2
[*PE2-vsi-a2-ldp] peer 1.1.1.9 tnl-policy p1
[*PE2-vsi-a2-ldp] quit
[*PE2-vsi-a2] quit
[*PE2] commit
- Bind the VSI to the interfaces of the PEs.
# Configure PE1.
[~PE1] interface gigabitethernet0/1/10.1
[*PE1-GigabitEthernet0/1/10.1] vlan-type dot1q 10
[*PE1-GigabitEthernet0/1/10.1] l2 binding vsi a2
[*PE1-GigabitEthernet0/1/10.1] trust upstream default
[*PE1-GigabitEthernet0/1/10.1] trust 8021p
[*PE1-GigabitEthernet0/1/10.1] undo shutdown
[*PE1-GigabitEthernet0/1/10.1] quit
# Configure PE2.
[~PE2] interface gigabitethernet0/1/10.1
[*PE2-GigabitEthernet0/1/10.1] vlan-type dot1q 10
[*PE2-GigabitEthernet0/1/10.1] l2 binding vsi a2
[*PE2-GigabitEthernet0/1/10.1] trust upstream default
[*PE2-GigabitEthernet0/1/10.1] trust 8021p
[*PE2-GigabitEthernet0/1/10.1] undo shutdown
[*PE2-GigabitEthernet0/1/10.1] quit
# Configure CE1.
[~CE1] interface gigabitethernet0/1/0.1
[*CE1-GigabitEthernet0/1/0.1] shutdown
[*CE1-GigabitEthernet0/1/0.1] vlan-type dot1q 10
[*CE1-GigabitEthernet0/1/0.1] ip address 10.1.1.1 255.255.255.0
[*CE1-GigabitEthernet0/1/0.1] undo shutdown
[*CE1-GigabitEthernet0/1/0.1] quit
# Configure CE2.
[~CE2] interface gigabitethernet0/1/0.1
[*CE2-GigabitEthernet0/1/0.1] shutdown
[*CE2-GigabitEthernet0/1/0.1] vlan-type dot1q 10
[*CE2-GigabitEthernet0/1/0.1] ip address 10.1.1.2 255.255.255.0
[*CE2-GigabitEthernet0/1/0.1] undo shutdown
[*CE2-GigabitEthernet0/1/0.1] quit
- Verify the configuration.
After completing the preceding configurations, run the display vsi name a2 verbose command on PE1. The command output shows that VSI State is up and that there are two tunnel IDs, indicating that two tunnels have been established between PE1 and PE2.
<PE1> display vsi name a2 verbose
***VSI Name               : a2
   Administrator VSI      : no
   Isolate Spoken         : disable
   VSI Index              : 0
   PW Signaling           : ldp
   Member Discovery Style : static
   PW MAC Learn Style     : unqualify
   Encapsulation Type     : vlan
   MTU                    : 1500
   ......
   VSI State              : up
   ......
   VSI ID                 : 2
   ......
  *Peer Router ID         : 4.4.4.9
   VC Label               : 162816
   Peer Type              : dynamic
   Session                : up
   Tunnel ID              : 0xc2060404
                            0xc2060405
   ......
 **PW Information:
  *Peer Ip Address        : 4.4.4.9
   PW State               : up
   Local VC Label         : 162816
   Remote VC Label        : 162816
   PW Type                : label
   Tunnel ID              : 0xc2060404
                            0xc2060405
   ......
Configuration Files
PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.9
#
mpls
 mpls te
 mpls rsvp-te
 mpls te cspf
#
explicit-path t1
 next hop 10.1.2.2
 next hop 10.1.4.2
#
explicit-path t2
 next hop 10.1.3.2
 next hop 10.1.5.2
#
mpls l2vpn
#
mpls ldp
 #
 ipv4-family
#
mpls ldp remote-peer DTB1
 remote-ip 4.4.4.9
#
vsi a2 static
 pwsignal ldp
  vsi-id 2
  peer 4.4.4.9 tnl-policy p1
#
interface GigabitEthernet0/1/10.1
 undo shutdown
 vlan-type dot1q 10
 l2 binding vsi a2
 trust upstream default
 trust 8021p
#
interface GigabitEthernet0/1/9
 undo shutdown
 ip address 10.1.2.1 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface GigabitEthernet0/1/11
 undo shutdown
 ip address 10.1.3.1 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface LoopBack1
 ip address 1.1.1.9 255.255.255.255
#
interface Tunnel10
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 4.4.4.9
 mpls te path explicit-path t1
 mpls te tunnel-id 100
 mpls te service-class af1
#
interface Tunnel20
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 4.4.4.9
 mpls te path explicit-path t2
 mpls te tunnel-id 200
 mpls te service-class af2
#
ospf 1
 opaque-capability enable
 area 0.0.0.0
  network 1.1.1.9 0.0.0.0
  network 10.1.2.0 0.0.0.255
  network 10.1.3.0 0.0.0.255
  mpls-te enable
#
tunnel-policy p1
 tunnel select-seq cr-lsp load-balance-number 2
#
return
P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
 mpls te
 mpls rsvp-te
 mpls te cspf
#
interface GigabitEthernet0/1/17
 undo shutdown
 ip address 10.1.2.2 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface GigabitEthernet0/1/25
 undo shutdown
 ip address 10.1.4.1 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface LoopBack1
 ip address 2.2.2.9 255.255.255.255
#
ospf 1
 opaque-capability enable
 area 0.0.0.0
  network 2.2.2.9 0.0.0.0
  network 10.1.2.0 0.0.0.255
  network 10.1.4.0 0.0.0.255
  mpls-te enable
#
return
P2 configuration file
#
sysname P2
#
mpls lsr-id 3.3.3.9
#
mpls
 mpls te
 mpls rsvp-te
 mpls te cspf
#
interface GigabitEthernet0/1/19
 undo shutdown
 ip address 10.1.3.2 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface GigabitEthernet0/1/24
 undo shutdown
 ip address 10.1.5.1 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface LoopBack1
 ip address 3.3.3.9 255.255.255.255
#
ospf 1
 opaque-capability enable
 area 0.0.0.0
  network 3.3.3.9 0.0.0.0
  network 10.1.3.0 0.0.0.255
  network 10.1.5.0 0.0.0.255
  mpls-te enable
#
return
PE2 configuration file
#
sysname PE2
#
mpls lsr-id 4.4.4.9
#
mpls
 mpls te
 mpls rsvp-te
 mpls te cspf
#
explicit-path t1
 next hop 10.1.4.1
 next hop 10.1.2.1
#
explicit-path t2
 next hop 10.1.5.1
 next hop 10.1.3.1
#
mpls l2vpn
#
vsi a2 static
 pwsignal ldp
  vsi-id 2
  peer 1.1.1.9 tnl-policy p1
#
mpls ldp
 #
 ipv4-family
#
mpls ldp remote-peer DTB2
 remote-ip 1.1.1.9
#
interface GigabitEthernet0/1/10.1
 undo shutdown
 vlan-type dot1q 10
 l2 binding vsi a2
 trust upstream default
 trust 8021p
#
interface GigabitEthernet0/1/33
 undo shutdown
 ip address 10.1.4.2 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface GigabitEthernet0/1/32
 undo shutdown
 ip address 10.1.5.2 255.255.255.0
 mpls
 mpls te
 mpls rsvp-te
#
interface LoopBack1
 ip address 4.4.4.9 255.255.255.255
#
interface Tunnel10
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 1.1.1.9
 mpls te path explicit-path t1
 mpls te tunnel-id 100
 mpls te service-class af1
#
interface Tunnel20
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 1.1.1.9
 mpls te path explicit-path t2
 mpls te tunnel-id 200
 mpls te service-class af2
#
ospf 1
 opaque-capability enable
 area 0.0.0.0
  network 4.4.4.9 0.0.0.0
  network 10.1.4.0 0.0.0.255
  network 10.1.5.0 0.0.0.255
  mpls-te enable
#
tunnel-policy p1
 tunnel select-seq cr-lsp load-balance-number 2
#
return
CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet0/1/0.1
 undo shutdown
 vlan-type dot1q 10
 ip address 10.1.1.1 255.255.255.0
#
return
CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet0/1/0.1
 undo shutdown
 vlan-type dot1q 10
 ip address 10.1.1.2 255.255.255.0
#
return