How Do I Adjust Eth-Trunk Configurations When Eth-Trunk Load Balancing Is Uneven

Introduction

This document describes how to determine whether Eth-Trunk load balancing is uneven and how to adjust Eth-Trunk configurations in this scenario.

Prerequisites

This document uses an S6720EI running V200R010C00 as an example. Licensing requirements, limitations, and command restrictions may differ across switch models and versions; for details, see the corresponding product documentation. For licensing requirements and limitations, see "Licensing Requirements and Limitations for Link Aggregation" under CLI-based Configuration Guide > Ethernet Switching Configuration Guide > Link Aggregation Configuration. For command restrictions, see the Command Reference.

Determining Whether Eth-Trunk Load Balancing Is Uneven

The InUti and OutUti fields in the display interface brief command output show the average bandwidth usage of an interface over the last 300 seconds.

<HUAWEI> display interface brief
InUti/OutUti: input utility/output utility
Interface                   PHY   Protocol InUti OutUti   inErrors  outErrors
GigabitEthernet0/0/1        up    up       0.06%   0.06%       315        408
GigabitEthernet0/0/2        up    up       0.06%   0.07%       229        306

Normally, after an Eth-Trunk is deployed for load balancing, traffic is distributed across all member links: the bandwidth usage of each link does not exceed 80%, and the usage difference between links does not exceed 30%. This effectively improves service reliability. If traffic is transmitted over only one link, or is load balanced over some but not all links of the Eth-Trunk, load balancing is uneven. As a result, the bandwidth usage of a single link may exceed 80% and services may be affected.
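As a quick sanity check, the two rules of thumb above can be applied to the per-link utilization values read from the display interface brief output. The following is an illustrative helper script, not a switch feature; the function name and the thresholds simply mirror the guidance above:

```python
def is_balanced(utils):
    """utils: per-link utilization percentages taken from the InUti or
    OutUti column, e.g. [0.06, 0.07]. Returns True if the Eth-Trunk
    looks evenly balanced under the two rules of thumb above."""
    if max(utils) > 80:               # a single link is overloaded
        return False
    if max(utils) - min(utils) > 30:  # links differ by more than 30%
        return False
    return True

print(is_balanced([45, 50, 48]))  # True: even, all links below 80%
print(is_balanced([85, 10, 12]))  # False: one link above 80%
```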

Eth-Trunk load balancing is classified into packet-based (per-packet) and flow-based load balancing. Because flows carry different volumes of data, neither mode can distribute traffic absolutely evenly across all member links of an Eth-Trunk. Per-packet load balancing may cause packet mis-ordering, so it is not recommended for services that are sensitive to packet sequence; for such services, use flow-based load balancing, which preserves packet order. In V200R011C10, S series switches support only flow-based load balancing. For details, see Understanding Eth-Trunk Load Balancing of S Series Switches.
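The idea behind flow-based load balancing can be sketched in Python. This is illustrative only: real switches hash in hardware with vendor-specific algorithms, and the member count and MD5 hash here are assumptions made for the demo:

```python
import hashlib

MEMBERS = 4  # assumed number of Eth-Trunk member links

def pick_member(src_ip, dst_ip):
    """Flow-based selection: hash fields that are constant within a
    flow, so every packet of that flow maps to the same member link
    and packet order is preserved."""
    key = f"{src_ip}->{dst_ip}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % MEMBERS

# Every packet of the same flow picks the same link:
assert pick_member("10.1.1.1", "10.2.2.2") == pick_member("10.1.1.1", "10.2.2.2")
```

Per-packet balancing, by contrast, would spread consecutive packets of one flow over different links, which is why it can reorder packets.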

Troubleshooting Roadmap

If Eth-Trunk load balancing is uneven, perform the following steps to troubleshoot the fault.

  1. Check whether the interface with unbalanced traffic loads belongs to an inter-chassis Eth-Trunk in a CSS or stack.

    If so, adjust the configuration according to Configuring Load Balancing on an Inter-Chassis Eth-Trunk.

    If not, go to step 2.

  2. Check whether equal-cost multi-path (ECMP) routing and Eth-Trunk load balancing are configured together on the device.

    If so, adjust the configuration according to Configuring ECMP and Eth-Trunk Load Balancing.

    If not, go to step 3.

  3. Check whether packets being load balanced are known or non-known unicast packets. If the destination MAC address of a packet is not in the MAC address table, the packet is a non-known unicast packet.

    NOTE:

    The load balancing mode takes effect only on the outbound interface of traffic. If one traffic type accounts for a much larger proportion than the others, either change the traffic path on the outbound interface of the upstream device so that traffic is evenly distributed to the local device, or expand the outbound interface capacity of the local device (for example, replace a GE port with an XGE port).

    If so, adjust the Eth-Trunk load balancing configuration according to Configuring Load Balancing for Non-known Unicast Packets.

    If not, adjust the Eth-Trunk load balancing configuration according to Configuring Load Balancing for Known Unicast Packets.

Configuring Load Balancing for Known Unicast Packets

For known unicast traffic, configure common load balancing first. If common load balancing cannot meet your requirements, configure enhanced load balancing. The recommended load balancing mode varies with the traffic type; select a mode according to Table 1-1 and perform the corresponding configuration. Unless otherwise specified, the number of member interfaces in an Eth-Trunk must be a multiple of two. For details about how to configure Eth-Trunk load balancing, see Configuring Common Load Balancing and Configuring Enhanced Load Balancing. For details about how to view the Protocol field in a packet, see Obtaining Packet Headers.

Table 1-1 Recommended configurations for load balancing of different types of traffic

Traffic Type

Traffic Type Identification Method

Load Balancing Mode

Common Layer 2 traffic

The value of the Protocol field in a packet is a non-PPPoE Layer 2 protocol.

  1. Common mode: Default configuration
  2. Enhanced mode:
    • l2 field smac dmac
    • l2 field smac dmac sport
    • l2 field smac dmac l2-protocol
    • l2 field smac dmac l2-protocol sport

Common Layer 3 traffic

The value of the Protocol field in a packet is TCP/IP or a non-MPLS or non-GTP Layer 3 protocol.

  1. Common mode: Default configuration
  2. Enhanced mode:
    • ipv4 field sip dip
    • ipv4 field sip dip sport
    • ipv4 field sip dip protocol
    • ipv4 field sip dip l4-sport l4-dport
    • ipv4 field sip dip protocol l4-sport l4-dport sport

PPPoE traffic

The value of the Protocol field in a packet is PPPoE.

NOTE:

When PPPoE packets are load balanced, only the outer Layer 2 information of the packets can be parsed. The encapsulated information cannot be parsed.

  1. Common mode:
    • load-balance src-mac
    • load-balance src-dst-mac
  2. Enhanced mode:
    • l2 field smac
    • l2 field smac sport
    • l2 field smac dmac
    • l2 field smac dmac sport

GTP traffic

The value of the Protocol field in a packet is GTP.

NOTE:

When GTP packets are load balanced, only the outer IP information of the packets can be parsed. The encapsulated information cannot be parsed.

  1. Common mode:
    • load-balance src-ip
    • load-balance dst-ip
    • load-balance src-dst-ip
  2. Enhanced mode:
    • ipv4 field sip
    • ipv4 field dip
    • ipv4 field sip dip
    • ipv4 field sip dip sport
    • ipv4 field sip dip l4-sport l4-dport

MPLS traffic

The value of the Protocol field in a packet is MPLS.

Only the enhanced mode can be used. Outbound link selection for Eth-Trunk load balancing occurs before MPLS encapsulation, so only incoming packets that already carry labels can be load balanced based on MPLS labels. According to the MPLS network roles in Figure 1-1, adjust the configurations in Table 1-2 on the outbound interfaces where traffic is unevenly load balanced.

Figure 1-1 Typical MPLS networking
Table 1-2 Recommended load balancing modes for uneven traffic load balancing on an MPLS network

MPLS Network Role

Enhanced Load Balancing

PE1

L2VPN scenario:

  • l2 field smac dmac
  • l2 field smac dmac sport
  • l2 field smac dmac l2-protocol
  • l2 field smac dmac l2-protocol sport

L3VPN scenario:

  • ipv4 field sip dip
  • ipv4 field sip dip sport
  • ipv4 field sip dip protocol
  • ipv4 field sip dip l4-sport l4-dport
  • ipv4 field sip dip protocol l4-sport l4-dport sport

P

NOTE:

In this scenario, most cards can load balance traffic only based on MPLS labels, and cannot parse the IP information in packets added with MPLS labels.

mpls field [ 2nd-label | dip | sip | sport | top-label ] *

PE2

NOTE:

In this scenario, some cards can load balance traffic only based on the Layer 2 information or IP information in original packets, rather than based on MPLS labels.

L2VPN scenario: mpls field [ 2nd-label | dmac | smac | sport | top-label | vlan ] *

L3VPN scenario: mpls field [ 2nd-label | dip | sip | sport | top-label ] *

Configuring Common Load Balancing

  1. Run the system-view command to enter the system view.
  2. Run the interface eth-trunk trunk-id command to enter the Eth-Trunk interface view.
  3. Run the load-balance { dst-ip | dst-mac | src-ip | src-mac | src-dst-ip | src-dst-mac } command to configure the Eth-Trunk load balancing mode according to Table 1-1.

    By default, traffic is load balanced based on src-dst-ip.

The following example shows how to configure load balancing based on source and destination MAC addresses on Eth-Trunk 1 and verify the configuration.

<HUAWEI> system-view
[HUAWEI] interface eth-trunk 1
[HUAWEI-Eth-Trunk1] load-balance src-dst-mac
[HUAWEI-Eth-Trunk1] quit
[HUAWEI] display eth-trunk 1 load-balance
Eth-Trunk1's load-balance information:
 Load-balance Configuration: SA-XOR-DA
 Load-balance options used per-protocol:
  L2  : Source XOR Destination MAC address, Vlan ID, Ethertype, Ingress-port
  IPv4: Source XOR Destination MAC address, Vlan ID, Ethertype, Ingress-port
  IPv6: Source XOR Destination MAC address, Vlan ID, Ethertype, Ingress-port
  MPLS: Source XOR Destination MAC address, Vlan ID, Ethertype, Ingress-port

In the preceding information, the Load-balance Configuration field is displayed as SA-XOR-DA, indicating that Eth-Trunk1 load balances traffic based on source and destination MAC addresses.
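As a rough illustration of what SA-XOR-DA means, the following Python sketch XORs the two MAC addresses and takes the result modulo the member-link count. This is a hypothetical simplification written for this document; the actual hardware hash algorithm is vendor-internal and also mixes in the VLAN ID, Ethertype, and ingress port shown above:

```python
def sa_xor_da_index(smac, dmac, members):
    """Pick a member-link index from source and destination MACs
    (hypothetical SA-XOR-DA sketch, not the real hardware hash)."""
    s = int(smac.replace("-", ""), 16)
    d = int(dmac.replace("-", ""), 16)
    return (s ^ d) % members

# Identical vendor prefixes cancel in the XOR; only differing bits matter:
print(sa_xor_da_index("00e0-fc12-3456", "00e0-fc65-4321", 2))
```

Because the XOR is symmetric, traffic in both directions of a conversation hashes to the same member link, which keeps a bidirectional flow on one path.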

Table 1-3 Description of the display eth-trunk load-balance command output

Item

Description

Load-balance Configuration

Configured load balancing mode. The options are as follows:

  • SIP: Eth-Trunk load balancing based on source IP addresses.
  • DIP: Eth-Trunk load balancing based on destination IP addresses.
  • SIP-XOR-DIP: Eth-Trunk load balancing based on source and destination IP addresses.
  • SA: Eth-Trunk load balancing based on source MAC addresses.
  • DA: Eth-Trunk load balancing based on destination MAC addresses.
  • SA-XOR-DA: Eth-Trunk load balancing based on source and destination MAC addresses.
  • ENHANCED: Enhanced Eth-Trunk load balancing.
  • DIFFLUENCE: Eth-Trunk-based distribution of packets with the same source address and destination address.

Load-balance options used per-protocol

Load balancing parameters of different types of packets.

Configuring Enhanced Load Balancing

  1. Run the system-view command to enter the system view.
  2. Run the load-balance-profile profile-name command to create a load balancing profile and enter the profile view. Only one load balancing profile can be created.
  3. Configure load balancing modes for Layer 2, IPv4, IPv6, and MPLS packets separately. The following operations are optional; perform one or more of them to configure the Eth-Trunk load balancing mode according to Table 1-1.

    • Run the l2 field [ dmac | l2-protocol | smac | sport | vlan ] * command to set a load balancing mode for Layer 2 packets. By default, Layer 2 packets are load balanced based on the source MAC address (smac) and destination MAC address (dmac).
    • Run the ipv4 field [ dip | l4-dport | l4-sport | protocol | sip | sport | vlan ] * command to set a load balancing mode for IPv4 packets in a specified load balancing profile. By default, IPv4 packets are load balanced based on the source IP address (sip) and destination IP address (dip).
    • Run the ipv6 field [ dip | l4-dport | l4-sport | protocol | sip | sport | vlan ] * command to set a load balancing mode for IPv6 packets in a specified load balancing profile. By default, IPv6 packets are load balanced based on the source IP address (sip) and destination IP address (dip).
    • Run the mpls field [ 2nd-label | 3rd-label | dip | dmac | l4-dport | l4-sport | protocol | sip | smac | sport | top-label | vlan ] * command to set a load balancing mode for MPLS packets in a specified load balancing profile. By default, MPLS packets are load balanced based on the two outer labels (top-label and 2nd-label) of each packet.

  4. Run the quit command to return to the system view.
  5. Run the interface eth-trunk trunk-id command to enter the Eth-Trunk interface view.
  6. Run the load-balance enhanced profile profile-name command to apply the load balancing profile.

The following example shows how to create a load balancing profile a, load balance IPv6 packets based on sip and protocol, apply the load balancing profile to Eth-Trunk 1, and verify the configuration.

<HUAWEI> system-view
[HUAWEI] load-balance-profile a
[HUAWEI-load-balance-profile-a] ipv6 field sip protocol
[HUAWEI-load-balance-profile-a] quit
[HUAWEI] interface eth-trunk 1
[HUAWEI-Eth-Trunk1] load-balance enhanced profile a
[HUAWEI-Eth-Trunk1] quit
[HUAWEI] display eth-trunk 1 load-balance
Eth-Trunk1's load-balance information:
 Load-balance Configuration: ENHANCED
 Load-balance enhanced profile: a
 Load-balance options used per-protocol:
  L2  : Source XOR Destination MAC address
  IPv4: Source XOR Destination IP address
  IPv6: Source IP address, IP protocol
  MPLS: Top XOR Second label

In the preceding information, the Load-balance Configuration field is displayed as ENHANCED, indicating that the enhanced load balancing mode is set on Eth-Trunk 1. If the IPv6 field in Load-balance options used per-protocol is displayed as Source IP address, IP protocol, IPv6 packets are load balanced based on the source IP address and protocol.

Configuring Load Balancing on an Inter-Chassis Eth-Trunk

In a CSS or stack, an inter-chassis Eth-Trunk is configured as the outbound interface of traffic to ensure reliable transmission. When the CSS or stack forwards traffic, the Eth-Trunk hash algorithm may select a member interface on another chassis. Because the cable bandwidth between devices in the CSS or stack is limited, inter-chassis forwarding consumes these bandwidth resources and lowers forwarding efficiency. To avoid this, inter-chassis Eth-Trunks on S series switches preferentially forward local traffic by default, so it is normal for the bandwidth usage difference between interfaces on different chassis to exceed 30%.

If active member interfaces of an Eth-Trunk have insufficient bandwidth to forward local traffic and the bandwidth usage of a single link exceeds 80%, disable the Eth-Trunk from preferentially forwarding local traffic; otherwise, traffic may be discarded.

If services are insensitive to forwarding performance, you can run the undo local-preference enable command in the Eth-Trunk view to disable preferential forwarding of local traffic even when member interfaces have sufficient bandwidth for it. This improves reliability and ensures that the Eth-Trunk can carry burst traffic if a link fails.

Configuring ECMP and Eth-Trunk Load Balancing

ECMP first distributes traffic to multiple Eth-Trunks and then Eth-Trunks further load balance traffic over member links. Ensure that ECMP evenly distributes traffic and then perform the following steps to adjust the Eth-Trunk load balancing configuration.

  1. If traffic is evenly distributed after common ECMP load balancing is performed, there is no limit on the Eth-Trunk load balancing mode. Continue to check for other possible causes that lead to uneven load balancing.
  2. If traffic is unevenly distributed after common ECMP load balancing is performed, configure enhanced ECMP load balancing. In this case, common Eth-Trunk load balancing is preferred.
  3. If traffic is unevenly distributed after enhanced ECMP load balancing and common Eth-Trunk load balancing are performed, configure enhanced Eth-Trunk load balancing. In this case, the Eth-Trunk must use more effective hash factors than ECMP.
  4. If traffic is extremely unevenly distributed on the live network and a large number of hash factors are already configured for both enhanced ECMP and enhanced Eth-Trunk load balancing, adjust the numbers of ECMP member paths and Eth-Trunk member links so that they have no common divisor other than 1 (that is, they are coprime). For example, 4 and 4, 4 and 2, or 6 and 3 are not allowed, whereas 3 and 4, or 4 and 5 are.
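The common-divisor rule in step 4 can be demonstrated with a toy model in which both the ECMP stage and the Eth-Trunk stage derive their index from the same hash value. That shared hash is an assumption made only for this demo (real devices can be given different hash factors per stage), but it shows the polarization effect the rule guards against:

```python
import hashlib

def h(key):
    """Stand-in hash shared by both stages (a demo assumption)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

hvals = [h(f"flow{i}") for i in range(1000)]

# Flows that ECMP places on path 0 of 4:
on_path0 = [v for v in hvals if v % 4 == 0]

# With 4 trunk members (shared divisor 4), they all land on member 0:
print({v % 4 for v in on_path0})  # {0}: the other 3 members carry nothing

# With 3 trunk members (coprime with 4), the same flows spread out again:
print({v % 3 for v in on_path0})
```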

Configuring Load Balancing for Non-known Unicast Packets

Non-known unicast packets include unknown unicast, multicast, and broadcast packets. When a large number of non-known unicast packets are transmitted and unevenly distributed on interfaces, you can run the unknown-unicast load-balance enhanced command in the system view to apply an enhanced load balancing profile to implement enhanced load balancing.

NOTE:

To enable enhanced load balancing of non-known unicast traffic on modular switches, run the unknown-unicast load-balance enhanced lbid command. In V200R010C00, when the ES0D0G24SA00 and ES0D0G24CA00 of the S7700 or the EH1D2G24SSA0 and EH1D2S24CSA0 of the S9700 are used with other types of cards together, the unknown-unicast load-balance enhanced lbid command cannot be run. Otherwise, packet loss may occur or excess packets may be received.

Obtaining Packet Headers

The following describes two methods of obtaining packet headers so that you can check the Protocol field to determine the packet type or view other information carried in the header.

Obtaining Packet Headers in Mirroring Mode

If some traffic cannot be collected because the traffic rate exceeds the bandwidth of the observing port, you can still analyze the overall traffic model based on the collected traffic. If the packet-capture device cannot carry a large volume of traffic, run the qos lr cir 100000 outbound command in the interface view of the observing port on the switch to set a rate limit.

The following example shows how to mirror the traffic passing through Eth-Trunk 1 to GigabitEthernet 1/0/1.

<HUAWEI> system-view
[HUAWEI] observe-port 1 interface GigabitEthernet 1/0/1
[HUAWEI] interface eth-trunk 1
[HUAWEI-Eth-Trunk1] port-mirroring to observe-port 1 outbound

Obtaining Packet Headers Using the capture-packet Command

If you cannot obtain packet headers in mirroring mode, run the capture-packet interface interface-type interface-number destination terminal command in the system view to obtain packet headers. You are advised to capture packet headers in more than 50 batches at intervals, and then summarize and analyze all the captured headers.

NOTE:

The number of packet headers obtained by using the capture-packet command is relatively small and sometimes cannot reflect the actual traffic model on the live network.

Updated: 2019-09-04

Document ID: EDOC1100097992
