SRv6 TE Policy
Definition
SRv6 TE Policy is a tunneling technology developed based on SRv6. An SRv6 TE Policy comprises a set of candidate paths, each consisting of one or more segment lists, that is, segment ID (SID) lists. Each SID list identifies an end-to-end path from the source to the destination, instructing a device to forward traffic along that path rather than the shortest path computed by an IGP. The header of a packet steered into an SRv6 TE Policy is augmented with an ordered list of segments associated with the policy, so that other devices on the network can execute the instructions encapsulated in the list.
- Headend: the node where an SRv6 TE Policy is generated.
- Color: an extended community attribute of an SRv6 TE Policy. A BGP route can recurse to an SRv6 TE Policy if the two have the same color value.
- Endpoint: the destination address of an SRv6 TE Policy.
Color and endpoint information is added to an SRv6 TE Policy through configuration. The headend steers traffic into an SRv6 TE Policy whose color and endpoint attributes match the color value and next-hop address in the associated route, respectively. The color attribute defines an application-level network Service Level Agreement (SLA) policy. This allows network paths to be planned based on specific SLA requirements for services, realizing service value in a refined manner, and helping build new business models.
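The matching rule above can be sketched as a small lookup. This is a toy model, not a vendor API; the class and function names are illustrative assumptions.

```python
# Illustrative sketch: a headend matches a BGP route to an SRv6 TE Policy
# when the policy's color equals the route's color extended community and
# the policy's endpoint equals the route's next-hop address.

from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass(frozen=True)
class Srv6TePolicy:
    color: int          # Color extended community value
    endpoint: str       # IPv6 endpoint address

def recurse_route(policies: Iterable[Srv6TePolicy],
                  route_color: int, route_next_hop: str) -> Optional[Srv6TePolicy]:
    """Return the policy whose (color, endpoint) matches the route's
    (color, next hop), or None if no policy matches."""
    for policy in policies:
        if policy.color == route_color and policy.endpoint == route_next_hop:
            return policy
    return None

policies = [Srv6TePolicy(color=100, endpoint="2001:db8::1")]
match = recurse_route(policies, route_color=100, route_next_hop="2001:db8::1")
print(match is not None)  # True
```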
SRv6 TE Policy Model
The SRv6 TE Policy model is the same as the SR-MPLS TE Policy model, as shown in Figure 3-46. One SRv6 TE Policy may contain multiple candidate paths with the preference attribute. The valid candidate path with the highest preference functions as the primary path of the SRv6 TE Policy.
A candidate path can contain multiple segment lists, each of which carries a Weight attribute. Each segment list is an explicit SID stack that instructs a network device to forward packets, and multiple segment lists can work in load balancing mode.
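The model above boils down to two selection rules, sketched here as a minimal Python model (the field names are assumptions, not a vendor data model): the valid candidate path with the highest preference is primary, and traffic is shared across its segment lists in proportion to their weights.

```python
# Toy model of the SRv6 TE Policy structure: candidate paths selected by
# preference, segment lists load-balanced by Weight.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SegmentList:
    sids: List[str]
    weight: int = 1

@dataclass
class CandidatePath:
    preference: int
    valid: bool
    segment_lists: List[SegmentList] = field(default_factory=list)

def select_primary_path(paths: List[CandidatePath]) -> CandidatePath:
    """The valid candidate path with the highest preference wins."""
    return max((p for p in paths if p.valid), key=lambda p: p.preference)

def load_share(path: CandidatePath) -> List[float]:
    """Per-segment-list traffic share, proportional to Weight."""
    total = sum(sl.weight for sl in path.segment_lists)
    return [sl.weight / total for sl in path.segment_lists]

paths = [
    CandidatePath(preference=200, valid=False,
                  segment_lists=[SegmentList(["2::1", "4::1"])]),
    CandidatePath(preference=100, valid=True,
                  segment_lists=[SegmentList(["2::1", "3::1", "4::1"], weight=3),
                                 SegmentList(["2::1", "5::1", "4::1"], weight=1)]),
]
primary = select_primary_path(paths)
print(primary.preference, load_share(primary))  # 100 [0.75, 0.25]
```

Note that the invalid path is skipped even though its preference is higher, matching the "valid candidate path with the highest preference" rule.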
SRv6 TE Policy Creation
An SRv6 TE Policy can be manually configured on a forwarder through the CLI or NETCONF. Alternatively, it can be dynamically generated by a protocol, such as BGP, on a controller and then delivered to a forwarder. The dynamic mode facilitates network deployment. If SRv6 TE Policies generated in both modes coexist, the forwarder selects an SRv6 TE Policy based on the same rules adopted for SR-MPLS TE Policy selection. For details about these rules, see SR-MPLS TE Policy Creation.
An SRv6 TE Policy can be created using End SIDs, End.X SIDs, anycast SIDs, binding SIDs, or a combination of them.
Figure 3-47 shows an example of manually configuring an SRv6 TE Policy through the CLI or NETCONF. For a manually configured SRv6 TE Policy, the endpoint and color attributes, the preference values of candidate paths, and segment lists must all be configured, and the preference values must be unique. The first-hop SID of a segment list can be an End SID or End.X SID, but cannot be a binding SID.
Figure 3-48 shows the process in which a controller dynamically generates and delivers an SRv6 TE Policy to a forwarder. The process is as follows:
1. The controller collects information, such as network topology and SID information, through BGP-LS.
2. The controller and headend forwarder establish a BGP peer relationship in the IPv6 SR Policy address family.
3. The controller computes an SRv6 TE Policy and delivers it to the headend forwarder through the BGP peer relationship. The headend forwarder then generates SRv6 TE Policy entries.
If an SRv6 TE Policy is manually configured, you can specify a binding SID in the static locator range for the SRv6 TE Policy. If a controller delivers an SRv6 TE Policy to a forwarder, the SRv6 TE Policy itself does not carry any binding SID when being delivered. After receiving the SRv6 TE Policy, the forwarder randomly allocates a binding SID in the dynamic locator range to the SRv6 TE Policy and then reports the policy's status information carrying the binding SID to the controller through BGP-LS. In so doing, the controller can obtain the binding SID of the SRv6 TE Policy and use the binding SID to orchestrate an SRv6 path.
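The dynamic binding SID allocation described above can be sketched roughly as follows. The locator value and the size of the dynamic range are assumptions made for illustration only.

```python
# Rough sketch: on receiving a controller-delivered SRv6 TE Policy, the
# forwarder picks an unused SID from its dynamic locator range to serve as
# the policy's binding SID, then reports it back (reporting not modeled).

import ipaddress
import random

def allocate_binding_sid(locator: str, allocated: set) -> str:
    """Pick a random unused SID from the locator's dynamic range."""
    net = ipaddress.IPv6Network(locator)
    while True:
        # Random function part within the locator prefix (range is illustrative).
        offset = random.randrange(1, 1 << 16)
        sid = str(net.network_address + offset)
        if sid not in allocated:
            allocated.add(sid)
            return sid

allocated = set()
bsid = allocate_binding_sid("fc00:1::/64", allocated)
print(bsid in allocated)  # True
```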
Traffic Steering into an SRv6 TE Policy
Route coloring adds the Color extended community to a route through a route-policy, enabling the route to recurse to an SRv6 TE Policy based on the color value and next-hop address in the route. Route coloring is performed as follows:
1. Configure a route-policy and set a specific color value for the desired route.
2. Apply the route-policy to a BGP peer or a VPN instance as an import or export policy.
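The effect of the two steps can be modeled as a simple tagging function. This is a simplified stand-in for a real route-policy language; the route representation is an assumption.

```python
# Simplified model of route coloring: a policy matches routes by prefix and
# attaches the Color extended community; unmatched routes pass through
# unchanged.

def apply_route_policy(route: dict, match_prefix: str, color: int) -> dict:
    """Return a copy of the route with the color attached if it matches."""
    if route.get("prefix") == match_prefix:
        route = dict(route, color=color)
    return route

route = {"prefix": "10.2.2.2/32", "next_hop": "2001:db8::1"}
colored = apply_route_policy(route, "10.2.2.2/32", color=100)
print(colored["color"])  # 100
```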
SRv6 TE Policy-based Data Forwarding
1. The controller delivers an SRv6 TE Policy to headend device PE1.
2. Endpoint device PE2 advertises BGP VPNv4 route 10.2.2.2/32 to PE1. The next-hop address in the BGP route is the address of PE2, that is, 2001:db8::1/128.
3. A tunnel policy is configured on PE1. After receiving the BGP route, PE1 recurses the route to the SRv6 TE Policy based on the color value and next-hop address in the route. The SID list of the SRv6 TE Policy is <2::1, 3::1, 4::1>, which can also be expressed as (4::1, 3::1, 2::1) in data forwarding scenarios.
4. After receiving a common unicast packet from CE1, PE1 searches the routing table of the corresponding VPN instance and finds that the outbound interface of the route is an SRv6 TE Policy interface. PE1 then inserts an SRH carrying the SID list of the SRv6 TE Policy, encapsulates an IPv6 header into the packet, and forwards the packet to P1.
5. Transit nodes P1 and P2 forward the packet hop by hop based on the SRH information.
6. After the packet arrives at PE2, PE2 searches its "My Local SID Table" for a matching End SID based on the IPv6 destination address (DA) 4::1 in the packet. According to the instruction bound to the SID, PE2 decreases the SL field value of the packet by 1 and updates the IPv6 DA to the VPN SID 4::100.
7. Based on the VPN SID 4::100, PE2 searches its "My Local SID Table" for a matching End.DT4 SID. According to the instruction bound to the SID, PE2 decapsulates the packet, pops the SRH and IPv6 header, searches the routing table of the VPN instance corresponding to the VPN SID 4::100 based on the destination address in the packet payload, and forwards the packet to CE2.
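The per-hop SRH handling in the walkthrough can be sketched as a toy model (not a packet parser): each node matching an End SID decrements Segments Left (SL) and rewrites the IPv6 destination address to the SID that SL now indexes.

```python
# Toy model of End SID processing: SL -= 1, then DA := srh[SL].
# The SRH stores the SID list in reversed order, e.g. (4::1, 3::1, 2::1)
# for the forwarding path <2::1, 3::1, 4::1>.

from dataclasses import dataclass
from typing import List

@dataclass
class Packet:
    dest: str            # IPv6 destination address (current active SID)
    srh: List[str]       # reversed SID list carried in the SRH
    sl: int              # Segments Left: index of the active SID in srh

def end_sid_process(pkt: Packet) -> Packet:
    """End behavior at a segment endpoint."""
    pkt.sl -= 1
    pkt.dest = pkt.srh[pkt.sl]
    return pkt

# Path <2::1, 3::1, 4::1>: initial DA is the first SID 2::1, SL = 2.
pkt = Packet(dest="2::1", srh=["4::1", "3::1", "2::1"], sl=2)
end_sid_process(pkt)
print(pkt.dest, pkt.sl)  # 3::1 1
```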
SRv6 TE Policy Failover
MBB and Delayed Deletion for SRv6 TE Policies
SRv6 TE Policies support make-before-break (MBB), enabling a forwarder to establish a new segment list before deleting the original. During the establishment process, traffic continues to be forwarded using the original segment list, which is deleted only after a specified delay elapses. This prevents packet loss during a segment list switchover.
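The MBB sequence above can be illustrated with a small sketch. The FIB structure and delay value are assumptions for illustration; real delayed deletion is timer-driven in the forwarding plane.

```python
# Toy illustration of make-before-break with delayed deletion: the new
# segment list is installed first and immediately takes traffic; the old
# one is released only after the deletion delay expires.

import time

def mbb_switchover(fib: dict, policy: str, new_sl: list,
                   delete_delay: float = 0.01) -> None:
    """Install the new segment list before removing the original one."""
    old_sl = fib.get(policy)
    fib[policy] = new_sl              # new list forwards traffic from now on
    if old_sl is not None:
        time.sleep(delete_delay)      # in-flight packets on old_sl still
                                      # forward during the delay (no loss)
        del old_sl                    # original segment list released here

fib = {"policy-A": ["2::1", "3::1", "4::1"]}
mbb_switchover(fib, "policy-A", ["2::1", "5::1", "4::1"])
print(fib["policy-A"])  # ['2::1', '5::1', '4::1']
```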
This delayed deletion mechanism takes effect only for up segment lists (including backup segment lists) in an SRv6 TE Policy.
SRv6 TE Policy Failover
On the network shown in Figure 3-51, an SRv6 TE Policy is deployed between PE1 and PE3 and between PE2 and PE4. Figure 3-51 lists possible failure points and corresponding protection schemes.
| Failure Point | Protection Scheme |
| --- | --- |
| 1 or 8 | When the BGP peer relationships between the controller and all forwarders go down, the SRv6 TE Policy on PE1 is deleted. As a result, the corresponding VPNv4 route cannot recurse to the SRv6 TE Policy. Because route recursion is still required, the VPNv4 route recurses to an SRv6 BE path (if any exists on the network). In this hard convergence scenario, packet loss occurs when traffic is switched to the SRv6 BE path. |
| 2 or 3 | An access-side equal-cost multi-path (ECMP) switchover is performed on CE1 to switch traffic to PE2 for forwarding. |
| 4, 5, or 6 | Assume that a BGP peer relationship is established between PE1 and PE3 using loopback addresses. If point 4, 5, or 6 is faulty, the loopback addresses are still reachable through P2, keeping the BGP peer relationship uninterrupted and the route sent from PE3 to PE1 undeleted. In this case, a fast reroute (FRR) switchover is performed within the SRv6 TE Policy. |
| 7 | 1. If PE3 is faulty, PE1 continuously sends traffic to P1 before detecting the fault. SRv6 egress protection can be configured on P1, enabling it to push a mirror SID into packets and forward them to PE4. 2. After PE1 detects the PE3 fault, the BGP peer relationship established between PE1 and PE3 is interrupted. The BGP module on PE1 deletes the BGP route received from PE3, selects a route advertised through PE4, and switches traffic to PE4. |
Egress Protection
Services that recurse to an SRv6 TE Policy must be strictly forwarded through the path defined using the segment lists in the SRv6 TE Policy. Considering that the egress of the SRv6 TE Policy is also fixed, if a single point of failure occurs on the egress, service forwarding may fail. To prevent this problem, configure egress protection for the SRv6 TE Policy.
Figure 3-52 shows a typical SRv6 egress protection scenario where an SRv6 TE Policy is deployed between PE3 and PE1. PE1 is the egress of the SRv6 TE Policy, and PE2 provides protection for PE1 to enhance reliability.
SRv6 egress protection is implemented as follows:
- Locators A1::/64 and A2::/64 are configured on PE1 and PE2, respectively.
- An IPv6 VPNv4 peer relationship is established between PE3 and PE1 as well as between PE2 and PE1. A VPN instance VPN1 and SRv6 VPN SIDs are configured on both PE1 and PE2. In addition, IPv4 prefix SID advertisement is enabled on the two PEs.
- After receiving the IPv4 route advertised by CE2, PE1 encapsulates the route as a VPNv4 route and sends it to PE3. The route carries the VPN SID, RT, RD, and color information.
- A mirror SID is configured on PE2 to protect PE1, generating a <Mirror SID, Locator> mapping entry (for example, <A2::100, A1::>).
- PE2 propagates the mirror SID through an IGP and generates a local SID entry. After receiving the mirror SID, P1 generates an FRR entry with the next-hop address being PE2 and the action being Push mirror SID. In addition, P1 generates a low-priority route that cannot be recursed.
- BGP subscribes to mirror SID configuration. After receiving the VPNv4 route from PE1, PE2 leaks the route into the routing table of VPN1 based on the RT, and matches the VPN SID carried by the route against the local <Mirror SID, Locator> table. If a matching locator is found according to the longest match rule, PE2 generates a <Remote SRv6 VPN SID, VPN> entry.
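The longest-match step in the last bullet can be sketched as follows. This is a hedged illustration of the control-plane lookup only; the table layout and entry fields are assumptions.

```python
# Sketch of the mirror SID control-plane match: PE2 compares the VPN SID
# carried by a VPNv4 route against its <Mirror SID, Locator> table using the
# longest match rule, and builds a <Remote SRv6 VPN SID, VPN> entry on a hit.

import ipaddress

def build_remote_vpn_sid_entry(mirror_table: dict, vpn_sid: str, vpn: str):
    """mirror_table maps mirror SID -> protected locator prefix."""
    addr = ipaddress.IPv6Address(vpn_sid)
    best = None
    for mirror_sid, locator in mirror_table.items():
        net = ipaddress.IPv6Network(locator)
        # Longest match: keep the most specific locator containing the SID.
        if addr in net and (best is None or net.prefixlen > best[1].prefixlen):
            best = (mirror_sid, net)
    if best is None:
        return None
    return {"remote_vpn_sid": vpn_sid, "vpn": vpn, "mirror_sid": best[0]}

# Example from the text: mirror SID A2::100 protects locator A1::/64.
table = {"A2::100": "A1::/64"}
entry = build_remote_vpn_sid_entry(table, "A1::100", "VPN1")
print(entry["vpn"])  # VPN1
```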
In the data forwarding phase:
- In normal situations, PE3 forwards traffic to the private network through the PE3-P1-PE1-CE2 path. If PE1 fails, P1 detects that the next hop PE1 is unreachable and switches traffic to the FRR path.
- P1 pushes the mirror SID into the packet header and forwards the packet to PE2. After parsing the received packet, PE2 obtains the mirror SID, queries the local SID table, and finds that the instruction specified by the mirror SID is to query the remote SRv6 VPN SID table. As instructed, PE2 queries the remote SRv6 VPN SID table based on the VPN SID in the packet and finds the corresponding VPN instance. PE2 then searches the VPN routing table and forwards the packet to CE2.
If PE1 fails, the peer relationship between PE2 and PE1 is also interrupted. As a result, the VPN route received by PE2 from PE1 is deleted, causing the <Remote SRv6 VPN SID, VPN> entry to be deleted and egress protection to fail. To prevent this, enable GR on PE2 and PE1 to maintain routes, or enable delayed deletion for the <Remote SRv6 VPN SID, VPN> entry on PE2.
TTL Processing by an SRv6 TE Policy
In scenarios where public IP routes are recursed to SRv6 TE Policies, TTLs are processed in either of the following modes:
Uniform mode: The headend reduces the TTL value in an inner packet by 1 and maps it to the IPv6 TTL field. The TTL is then processed on the IPv6 network in a standard way. The endpoint reduces the IPv6 TTL value by 1 and maps it to the TTL field in the inner packet.
Pipe mode: The TTL value in an inner packet is reduced by 1 only on the headend and endpoint, with the entire SRv6 TE Policy being treated as one hop.
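The difference between the two modes can be shown with a small arithmetic sketch (the hop counts are illustrative): in uniform mode every hop inside the policy is visible in the inner TTL, whereas in pipe mode the whole policy costs a single hop.

```python
# Simplified model of the inner-packet TTL observed after the endpoint.

def inner_ttl_after_policy(ttl: int, transit_hops: int, mode: str) -> int:
    if mode == "uniform":
        ipv6_ttl = ttl - 1            # headend: inner TTL - 1 mapped to IPv6 TTL
        ipv6_ttl -= transit_hops      # standard per-hop processing in the policy
        return ipv6_ttl - 1           # endpoint: IPv6 TTL - 1 mapped back
    if mode == "pipe":
        return ttl - 1                # the entire policy is treated as one hop
    raise ValueError(mode)

print(inner_ttl_after_policy(64, 2, "uniform"))  # 60
print(inner_ttl_after_policy(64, 2, "pipe"))     # 63
```

With two transit nodes, uniform mode exposes four decrements to the inner packet while pipe mode exposes only one, which is why pipe mode hides the SRv6 path length from tools such as traceroute.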