Analysis for Load Balancing In Typical Scenarios
For the default hash factors of the hash algorithm in typical load balancing scenarios, see the chapter Appendix: Default Hash Factors.
MPLS L3VPN Scenario
MPLS L3VPN Typical Topology
- PE (Provider Edge): an edge device on the provider network, which is directly connected to the CE. The PE receives traffic from the CE, encapsulates it with an MPLS header, and then sends it to the P. The PE also receives traffic from the P, removes the MPLS header, and then sends the traffic to the CE.
- P (Provider): a backbone device on the provider network, which is not directly connected to the CE. Ps perform basic MPLS forwarding.
- CE (Customer Edge): an edge device on the private network.
Suitable Scenario 1: Load Balance on Ingress PE of L3VPN
The hash algorithm is performed based on the packet format of the inbound traffic from the AC interface. The hash factors can be the IP 5-tuple or the IP 2-tuple. The result of the load balancing depends on the discreteness of the private IP addresses or TCP/UDP ports of the packets.
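The hash-based selection described above can be sketched as follows. This is an illustrative model only: the CRC32 hash, the field separator, and the link count are assumptions for demonstration, not the device's actual algorithm.

```python
import zlib

def select_link(src_ip, dst_ip, src_port, dst_port, proto, num_links):
    """Hash the IP 5-tuple and map the result onto an outbound link index."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % num_links

# Packets of the same flow always hash to the same link, which preserves
# packet order within the flow:
a = select_link("10.1.1.1", "10.2.2.2", 1024, 80, 6, 4)
b = select_link("10.1.1.1", "10.2.2.2", 1024, 80, 6, 4)
assert a == b
```

The balancing quality follows directly from this model: the more varied (discrete) the addresses and ports across flows, the more evenly the hash spreads flows over the links.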
Suitable Scenario 2: Load Balance on P Node
- If the number of MPLS labels in the packet is less than four, the hash factors can be IP 5-tuple or IP 2-tuple. The result of the load balancing depends on the discreteness of the private IP addresses or TCP/UDP ports of the packets.
- In complex scenarios such as inter-AS VPN, FRR, and LDP over TE, the number of labels in the packet may be four or more. In these scenarios, the hash factors are the fourth or fifth label from the top. The result of the load balancing depends on the discreteness of these labels in the packets.
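The depth-dependent choice of hash factors on the P node can be sketched as below. The function and its inputs are hypothetical names for illustration; the rule itself (inner IP tuple for shallow label stacks, deep labels otherwise) is the one stated above.

```python
def hash_factors(labels, inner_ip_tuple):
    """Pick hash factors based on MPLS label stack depth.

    labels: label stack, top of stack first.
    inner_ip_tuple: the inner IP 5-tuple or 2-tuple of the packet.
    """
    if len(labels) < 4:
        # Shallow stack: the node can parse down to the inner IP header.
        return inner_ip_tuple
    # Deep stack (e.g. inter-AS VPN, FRR, LDP over TE): hash on the
    # fourth label, plus the fifth when it is present.
    return tuple(labels[3:5])

# Three labels: the inner IP tuple is used.
print(hash_factors([100, 200, 300], ("10.1.1.1", "10.2.2.2")))
# Five labels: the fourth and fifth labels are used.
print(hash_factors([1, 2, 3, 4, 5], ("10.1.1.1", "10.2.2.2")))
```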
Suitable Scenario 3: Load Balance on Egress PE of L3VPN
If Penultimate Hop Popping (PHP) is disabled, the hash algorithm for load balancing on the egress PE is the same as in Scenario 2; if PHP is enabled, it is the same as in Scenario 1.
Suitable Scenario 4: Load Balancing Among the L3 Outbound Interfaces in the Access of L2VPN to L3VPN Scenarios
In access of L2VPN to L3VPN scenarios, the hash algorithm is the same as Scenario 1.
VPLS Scenario
VPLS Typical Topology
- PE (Provider Edge): an edge device on the provider network, which is directly connected to the CE. The PE receives the Ethernet traffic from the CE, encapsulates it with an MPLS header, and then sends it to the P. The PE also receives traffic from the P, removes the MPLS header, and then sends the traffic to the CE.
- P (Provider): a backbone device on the provider network, which is not directly connected to the CE. Ps perform basic MPLS forwarding.
- CE (Customer Edge): an edge device on the private network. CEs perform Ethernet/VLAN layer2 forwarding.
Suitable Scenario 1: Load Balance on Ingress PE of VPLS
- IP traffic: the hash factors can be IP 5-tuple or IP 2-tuple. The result of the load balancing depends on the discreteness of the private IP addresses or TCP/UDP ports of the packets.
- Ethernet frames carrying non-IP traffic: the hash factors can be the MAC 2-tuple. The result of the load balancing depends on the discreteness of the MAC addresses of the packets. Some boards support the 3-tuple <source MAC, destination MAC, VC label> if the inbound AC traffic is MPLS traffic and the AC interface is not a QinQ sub-interface.
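For non-IP frames, the MAC 2-tuple hash works the same way as the IP tuple hash, just over different fields. A minimal sketch, again using CRC32 as a stand-in for the device's hash:

```python
import zlib

def mac_hash_link(src_mac, dst_mac, num_links):
    """Hash the MAC 2-tuple of a non-IP Ethernet frame onto a link index."""
    key = f"{src_mac}|{dst_mac}".encode()
    return zlib.crc32(key) % num_links

# All frames between the same MAC pair stay on one link:
x = mac_hash_link("00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:ee", 4)
y = mac_hash_link("00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:ee", 4)
assert x == y
```

If only a few MAC pairs exchange traffic (poor discreteness), most traffic lands on few links, which matches the behavior the text describes.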
Suitable Scenario 2: Load Balance on P Node
- If the number of labels in the packet is less than four, the hash factors can be IP 5-tuple or IP 2-tuple. The result of the load balancing depends on the discreteness of the private IP addresses or TCP/UDP ports of the packets.
- In complex scenarios such as inter-AS VPN, FRR, and LDP over TE, the number of labels may be four or more. In these scenarios, the hash factors are the fourth or fifth label from the top. The result of the load balancing depends on the discreteness of these labels in the packets.
Suitable Scenario 3: Load Balance on Egress PE of VPLS
The egress PE of VPLS only supports Trunk load balancing because the egress PE performs Ethernet/VLAN Layer 2 forwarding. There is no route load balancing on the egress PE of VPLS.
- If the traffic is from MPLS to AC, the hash factors can be IP 5-tuple, IP 2-tuple or MAC 2-tuple. The default hash factors may be different in different board-types. Some boards only support MAC 2-tuple.
- If the traffic is from AC to AC, the hash algorithm is the same as Scenario 1.
Suitable Scenario 4: Load Balancing Among the L2 Outbound Interfaces in the Access of L2VPN to L3VPN Scenarios
In access of L2VPN to L3VPN scenarios, the hash algorithm is the same as Scenario 1.
VLL/PWE3 Scenario
VLL/PWE3 Typical Topology
- PE (Provider Edge): an edge device on the provider network, which is directly connected to the CE. The PE receives traffic from the CE, encapsulates it with an MPLS header, and then sends it to the P. The PE also receives traffic from the P, removes the MPLS header, and then sends the traffic to the CE.
- P (Provider): a backbone device on the provider network, which is not directly connected to the CE. Ps perform basic MPLS forwarding.
- CE (Customer Edge): an edge device on the private network.
Suitable Scenario 1: Load Balance on Ingress PE of VLL/PWE3
The hash algorithm is performed based on the packet format of the inbound traffic from AC interface.
- IP traffic: the hash factors can be IP 5-tuple or IP 2-tuple. The result of the load balancing depends on the discreteness of the private IP addresses or TCP/UDP ports of the packets.
- Ethernet frames carrying non-IP traffic: the hash factors can be the MAC 2-tuple. The result of the load balancing depends on the discreteness of the MAC addresses of the packets.
- Non-Ethernet traffic: the hash factor is the VC label on most boards.
Suitable Scenario 2: Load Balance on P Nodes
- If the number of labels in the packet is less than four, the hash factors can be IP 5-tuple or IP 2-tuple. The result of the load balancing depends on the discreteness of the private IP addresses or TCP/UDP ports of the packets.
- In the complex scenarios such as inter-AS VPN, FRR and LDP over TE, the number of the labels may be four or more. In these scenarios, the hash factors are the fourth or fifth label from the top. The result of the load balancing depends on the discreteness of the fourth or fifth label from the top.
Suitable Scenario 3: Load Balance on Egress PE of VLL/PWE3
Egress PE of VLL/PWE3 only supports Trunk load balancing because the virtual circuit (VC) of VLL/PWE3 is P2P.
- If the traffic is from AC to AC, the hash algorithm is the same as Scenario 1.
- If the traffic is from MPLS to AC, the hash factors can be IP 5-tuple, IP 2-tuple or VC label. The hash factors may be different in different board-types.
Suitable Scenario 4: Load Balancing Among the L2 Outbound Interfaces in the Access of L2VPN to L3VPN Scenarios
In access of L2VPN to L3VPN scenarios, the hash algorithm is the same as Scenario 1.
L2TP Scenario
About L2TP Tunnels
The Layer 2 Tunneling Protocol (L2TP) allows enterprise users, small-scale ISPs, and mobile office users to access a VPN through an access network over a public network (PSTN/ISDN).
An L2TP tunnel involves three node types, as shown in Figure 8-47:
- L2TP Access Concentrator (LAC): a network device capable of PPP and L2TP. It is usually an ISP's access device that provides access services for users over the PSTN/ISDN. An LAC uses L2TP to encapsulate the packets received from users before sending them to an LNS, and decapsulates the packets received from the LNS before sending them to the users.
- L2TP Network Server (LNS): a network device that accepts and processes L2TP tunnel requests. Users can access VPN resources after they have been authenticated by the LNS. An LNS and an LAC are two endpoints of an L2TP tunnel. The LAC initiates an L2TP tunnel, whereas the LNS accepts L2TP tunnel requests. An LNS is usually deployed as an enterprise gateway or a PE on an IP public network.
- Transit node: a transmission device on the transit network between an LAC and an LNS. Various types of networks can be used as the transit networks, such as IP or MPLS networks.
Two Types of L2TP Traffic
L2TP Traffic has two types:
Control message: used to establish, maintain, or tear down L2TP tunnels and sessions. The format of the L2TP control message is shown in Figure 8-48.
If the transit nodes of an L2TP tunnel use per-packet load balancing, the L2TP control messages may arrive out of order, which may cause the L2TP tunnel establishment to fail.
Data message: used to transmit PPP frames over the L2TP tunnel. Data messages are not retransmitted if lost. The format of the L2TP data message is shown in Figure 8-49.
Hash Result of L2TP Traffic
In L2TP scenarios, the LAC node adds a new IP header to the traffic. The source IP address of the new IP header is the L2TP tunnel address of the LAC node, and the destination IP address is the L2TP tunnel address of the remote LNS. That is, the source and destination IP addresses of the new IP header are fixed, so all the L2TP traffic belongs to the same flow. The result of the load balancing depends on the number of L2TP tunnels carrying the traffic: the more L2TP tunnels, the better the load balancing result.
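The effect described above can be demonstrated with a toy per-flow hash over the outer IP 2-tuple. The addresses and the CRC32 hash are illustrative assumptions, not the device's real algorithm.

```python
import zlib

def bucket(src_ip, dst_ip, num_links):
    """Per-flow hash over the outer IP 2-tuple, mapped to a link index."""
    return zlib.crc32(f"{src_ip}|{dst_ip}".encode()) % num_links

# One tunnel: every packet carries the same outer header (LAC address ->
# LNS address), so all inner flows collapse into a single bucket.
single = {bucket("192.0.2.1", "198.51.100.1", 8) for _ in range(1000)}
assert len(single) == 1

# Several tunnels with distinct tunnel addresses give the hash distinct
# inputs, so the traffic can spread across links.
tunnels = [("192.0.2.1", "198.51.100.1"),
           ("192.0.2.2", "198.51.100.2"),
           ("192.0.2.3", "198.51.100.3")]
spread = {bucket(s, d, 8) for s, d in tunnels}
```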
GRE Scenarios
Generic Routing Encapsulation (GRE) provides a mechanism of encapsulating packets of a protocol into packets of another protocol. This allows packets to be transmitted over heterogeneous networks. The channel for transmitting heterogeneous packets is called a tunnel. In addition, GRE serves as a Layer 3 tunneling protocol of Virtual Private Networks (VPNs), and provides a tunnel for transparently transmitting VPN packets.
GRE can be used in the scenarios shown in Figure 8-50 to Figure 8-53.
In the scenarios stated above, the source and destination IP addresses of all packets in a GRE tunnel are the source and destination addresses of the GRE tunnel. Therefore, on any transit node and on the egress node of the GRE tunnel, the outer IP headers of the GRE packets are identical. If a flow is carried by only one GRE tunnel and the load balancing mode is per-flow, load balancing does not take effect. It is recommended to create multiple GRE tunnels to carry the traffic.
IP Unicast Forwarding Scenarios
In IP unicast forwarding scenarios, the hash factors can be:
- the 2-tuple <source IP address, destination IP address>,
- the 4-tuple <source IP address, destination IP address, source port number, destination port number>,
- or the 5-tuple <source IP address, destination IP address, source port number, destination port number, and protocol number>.
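The choice of tuple sets the granularity of a "flow". A small sketch of the difference, with hypothetical field names for illustration:

```python
def flow_key(pkt, mode):
    """Build the per-flow hash key at 2-, 4-, or 5-tuple granularity."""
    if mode == 2:
        return (pkt["src_ip"], pkt["dst_ip"])
    if mode == 4:
        return (pkt["src_ip"], pkt["dst_ip"], pkt["sport"], pkt["dport"])
    return (pkt["src_ip"], pkt["dst_ip"], pkt["sport"], pkt["dport"],
            pkt["proto"])

p1 = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
      "sport": 1000, "dport": 80, "proto": 6}
p2 = {**p1, "sport": 2000}

# Same host pair, different source ports: one flow under the 2-tuple,
# two distinct flows under the 5-tuple.
assert flow_key(p1, 2) == flow_key(p2, 2)
assert flow_key(p1, 5) != flow_key(p2, 5)
```

The finer the tuple, the more flows the hash can distinguish, and hence the better the potential spread across equal-cost paths.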
Multicast Scenarios
Five load balancing policies are available for multicast traffic.
Multicast Group-based Load Balancing
Based on this policy, a multicast router uses the hash algorithm to select an optimal route among multiple equal-cost routes for a multicast group. Therefore, all traffic of a multicast group is transmitted on the same forwarding path, as shown in Figure 8-55.
This policy applies to a network that has one multicast source but multiple multicast groups.
Multicast Source-based Load Balancing
Based on this policy, a multicast router uses the hash algorithm to select an optimal route among multiple equal-cost routes for a multicast source. Therefore, all traffic of a multicast source is transmitted on the same forwarding path, as shown in Figure 8-56.
This policy applies to a network that has one multicast group but multiple multicast sources.
Multicast Source- and Group-based Load Balancing
Based on this policy, a multicast router uses the hash algorithm to select an optimal route among multiple equal-cost routes for each (S, G) entry. Therefore, all traffic matching a specific (S, G) entry is transmitted on the same forwarding path, as shown in Figure 8-57.
This policy applies to a network that has multiple multicast sources and groups.
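The three hash-based policies above differ only in which fields feed the hash. A sketch of the source- and group-based case, with CRC32 and the route names as illustrative assumptions:

```python
import zlib

def select_route(source, group, equal_cost_routes):
    """Pick one equal-cost route per (S, G) entry via a hash."""
    idx = zlib.crc32(f"{source}|{group}".encode()) % len(equal_cost_routes)
    return equal_cost_routes[idx]

routes = ["via-R1", "via-R2"]
# All traffic matching one (S, G) entry sticks to a single path:
r1 = select_route("10.0.0.9", "232.1.1.1", routes)
r2 = select_route("10.0.0.9", "232.1.1.1", routes)
assert r1 == r2
```

Group-based and source-based load balancing would hash only `group` or only `source`, respectively, which is why they suit networks where the other dimension is fixed.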
Balance-Preferred
Based on this policy, a multicast router evenly distributes (*, G) and (S, G) entries on their corresponding equal-cost routes. This policy implements automatic load balancing adjustment in the following conditions: Equal-cost routes are added, deleted, or modified; multicast routing entries are added or deleted; the weights of equal-cost routes are changed.
This policy applies to a network on which multicast users frequently join or leave multicast groups.
Stable-Preferred
Based on this policy, a multicast router distributes (*, G) entries and (S, G) entries on their corresponding equal-cost routes, so stable-preferred is similar to the balance-preferred policy. This policy implements automatic load balancing adjustment when equal-cost routes are deleted. However, dynamic load balancing adjustment is not performed when multicast routing entries are deleted or when the weights of load balancing routes change.
This policy applies to a network that has stable multicast services.
Difference Between Balance-Preferred and Stable-Preferred
Both the balance-preferred and stable-preferred policies allow a multicast router to distribute multicast routing entries based on weights of equal-cost routes. If all equal-cost routes have the same weight, each equal-cost route will have the same number of multicast routing entries as others. However, when route flapping occurs, the load balancing adjustment results of the two policies will be different:
- Based on the balance-preferred policy, a multicast router treats load balancing as the top priority, so the router rapidly responds to changes in unicast routes, multicast routes, and the weights of equal-cost routes.
- Based on the stable-preferred policy, a multicast router prevents unnecessary link switchovers to ensure stable services: the router rapidly responds to unicast route deletions but does not readjust the load. After the unicast route flapping problem is resolved, the router selects optimal routes for subsequent services to gradually resolve the imbalance. Therefore, stable-preferred provides both stable and load-balanced services.
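The weight-proportional distribution both policies share can be sketched as follows; the round-robin over a weight-expanded route list is an illustrative scheme, not the device's documented algorithm.

```python
def distribute(entries, weights):
    """Assign multicast routing entries to equal-cost routes in
    proportion to the routes' weights."""
    # Expand each route index by its weight, then deal entries round-robin.
    slots = [i for i, w in enumerate(weights) for _ in range(w)]
    return {entry: slots[n % len(slots)] for n, entry in enumerate(entries)}

entries = [f"(S,G){i}" for i in range(6)]

# Equal weights: each route carries the same number of entries (3 and 3).
equal = distribute(entries, [1, 1])

# Weight 2:1: the first route carries twice as many entries (4 and 2).
skewed = distribute(entries, [2, 1])
```

Under balance-preferred this assignment is recomputed whenever routes, entries, or weights change; under stable-preferred it is largely left in place to avoid switchovers.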