
Eth-Trunk load balancing causes service degradation for NodeBs

Publication Date:  2012-12-03
Issue Description
We have the following topology for the backhaul service:
NodeB-GWT (Tellabs)-GWD (NE40E-8, Huawei)---SWC (NE80E)--RNC. Service degradation was observed: users complain that the 3G service is slow. The NodeB monitoring system reports a KPI from the RNC and the NodeB (Iub congestion, on a scale from 0 to 100; at 0 there is no jitter and no delay).




The customer's monitoring system produced the following KPI graph:



Handling Process
The link between SWD (NE40E-8) and SWC (NE80E) is a Layer 3 Eth-Trunk, configured with per-packet load balancing:

<NE40E-8>disp cur int eth-trunk
#
interface Eth-Trunk4
 mtu 1558
 description LACP to SWC-NE80E
 load-balance packet-all
#
interface Eth-Trunk4.2301
 vlan-type dot1q 2301
 mtu 1558
 ip address 10.112.33.194 255.255.255.252
 isis enable 1
 isis circuit-type p2p
 isis circuit-level level-2
 isis authentication-mode md5 O)D*R\G'PDGQ=^Q`MAF4<1!!
 isis ldp-sync
 isis bfd enable
 mpls
 mpls te
 mpls rsvp-te
 mpls ldp
 trust upstream backbone
 statistic enable
#


<NE80E>
interface Eth-Trunk4
 mtu 1558
 description LACP to SWD-NE40E-8
 load-balance packet-all
#
interface Eth-Trunk4.2301
 vlan-type dot1q 2301
 mtu 1558
 ip address 10.112.33.193 255.255.255.252
 isis enable 1
 isis circuit-type p2p
 isis circuit-level level-2
 isis authentication-mode md5 O)D*R\G'PDGQ=^Q`MAF4<1!!
 isis ldp-sync
 isis bfd enable
 mpls
 mpls te
 mpls rsvp-te
 mpls ldp
 trust upstream backbone
 statistic enable
#

Root Cause
Per-packet load balancing (load-balance packet-all) on the Eth-Trunk, which causes packet reordering within flows.
Solution
After analyzing the information and symptoms, we can see that the load-balancing mode is causing the problem:

Without an Eth-Trunk, the frames belonging to a given flow arrive at their destination in the correct order, because there is only one physical connection between the two devices. With multiple physical links bundled in an Eth-Trunk, frames can arrive at the destination out of order.
The reason is that the frames are transmitted over different links: if the first frame is sent over one link and the second frame over another, the second frame may reach the destination before the first.
To avoid frame reordering, a per-flow forwarding mechanism can be used to guarantee the correct order of the frames belonging to a given data flow. This mechanism classifies frames based on MAC address or IP address, so that all frames of the same flow are transmitted over a single physical link.
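The contrast between the two modes can be sketched with a small, illustrative simulation (not vendor code). The link names, flow key, and CRC32 hash below are assumptions chosen for the example; real hardware uses its own hash of the MAC/IP fields, but the principle is the same: per-packet round-robin splits one flow across links, while per-flow hashing pins every packet of a flow to a single member link.

```python
# Illustrative sketch: per-packet round-robin vs. per-flow hashing
# over the member links of a trunk (names and hash are hypothetical).
import itertools
import zlib

LINKS = ["link0", "link1"]  # two physical members of the Eth-Trunk

def per_packet(packets):
    """Round-robin: consecutive packets of one flow take different
    links, so they can be reordered in transit."""
    rr = itertools.cycle(range(len(LINKS)))
    return [LINKS[next(rr)] for _ in packets]

def per_flow(packets):
    """Hash the flow key (src IP, dst IP): every packet of a flow
    maps to the same link, so ordering within the flow is preserved."""
    out = []
    for src, dst in packets:
        key = f"{src}-{dst}".encode()
        out.append(LINKS[zlib.crc32(key) % len(LINKS)])
    return out

# Four packets of the same flow (addresses taken from the configs above).
flow = [("10.112.33.194", "10.112.33.193")] * 4
print(per_packet(flow))  # alternates between link0 and link1
print(per_flow(flow))    # all four packets on the same link
```

With per-packet distribution the flow is spread over both links, which is exactly the condition that allows reordering; with per-flow hashing the set of links used by one flow has size one.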

After changing the load-balancing mode from per-packet to per-flow, the problem is solved. In the KPI graph, the congestion value drops to 0 after the change:
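As a sketch of the change, removing the per-packet setting reverts the Eth-Trunk to flow-based hashing. The exact load-balance keywords and the default mode vary by product and VRP software release, so verify against the product documentation for your version; the change should be made on both ends of the trunk (SWD and SWC):

```
<NE40E-8> system-view
[NE40E-8] interface Eth-Trunk4
[NE40E-8-Eth-Trunk4] undo load-balance packet-all
```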





Suggestions
For voice and other delay- or order-sensitive traffic, per-flow load balancing is recommended rather than per-packet load balancing.

END