
NE40E: burst traffic on the live network leads to port shaping packet loss

Publication Date:  2014-10-22 Views:  39 Downloads:  0
Issue Description

The live network has "port shaping 16" configured on an NE40E GE interface, and the EF queue may occupy 70% of that 16 Mbit/s. That means when the network is congested, the EF service is guaranteed around 11 Mbit/s of bandwidth.
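As a quick sanity check, the guaranteed EF bandwidth implied by this configuration can be computed directly (a minimal sketch; the variable names are illustrative, not device parameters):

```python
# "port shaping 16" caps the port at 16 Mbit/s; the EF queue is allowed
# 70% of that shaped rate via the pq shaping-percentage setting.
PORT_SHAPING_MBPS = 16
EF_PERCENT = 70

ef_guarantee_mbps = PORT_SHAPING_MBPS * EF_PERCENT / 100
print(ef_guarantee_mbps)  # 11.2 Mbit/s, i.e. the "around 11M" guarantee
```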

However, it was reported that even with only 2 Mbit/s of EF traffic on the uplink interface, packets were being dropped on that interface.


interface GigabitEthernet8/1/10
 description link to *leading TTSL, Bangalore, AR1*
 undo shutdown
 port shaping 16
 port-queue af3 wfq weight 20 outbound
 port-queue af4 wfq weight 10 outbound
 port-queue ef pq shaping shaping-percentage 70 outbound

Handling Process

The live network service was impacted by these packet drops. According to the FSE, after the shaping configuration was deleted, the service was restored.

It was therefore suspected that the packet drops were caused by network traffic bursts.

Root Cause

The packet drops were caused by momentary traffic bursts that exceeded the shaping queue cache. The default queue cache length for shaping on this device is 1440 packets and cannot be modified, so any burst larger than the cache results in tail drops regardless of configuration.


Solution

1. Information collected. Note that these are average traffic/packet-drop statistics over a 30-second window; real-time figures cannot be obtained.


[BLRP01-GigabitEthernet8/1/10]dis port-queue statistics int GigabitEthernet8/1/10 ef outbound
[ef]
Total pass: 132,551 packets, 80,046,580 bytes
Total discard: 4,137 packets, 3,623,249 bytes
Drop tail discard: 4,137 packets, 3,623,249 bytes
Wred discard: 0 packets, 0 bytes
Last 30 seconds pass rate: 355 pps, 2,247,744 bps
Last 30 seconds discard rate: 15 pps, 100,296 bps
Drop tail discard rate: 15 pps, 100,296 bps
Wred discard rate: 0 pps, 0 bps
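The cumulative counters above give a rough sense of the damage. A small calculation (the variable names are illustrative) shows the tail-drop ratio for the EF queue:

```python
# Cumulative counters taken from the "dis port-queue statistics" output above.
passed_pkts = 132_551
dropped_pkts = 4_137

drop_ratio = dropped_pkts / (passed_pkts + dropped_pkts)
print(f"{drop_ratio:.2%}")  # roughly a 3% tail-drop ratio for EF traffic
```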

 

2. Analysis.
Shaping uses a token bucket for scheduling. An arriving packet is first cached in a queue that has a token bucket at its exit. Tokens are added to the bucket at a fixed rate (the shaping rate). When the packet at the head of the queue is no longer than the number of tokens in the bucket, the packet is transmitted and the token count is reduced by the packet length. The token fill rate therefore equals the shaped output rate.
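The mechanism described above can be sketched as follows. This is a minimal illustration of a generic token bucket, not the NE40E's actual implementation; all names and parameters are assumptions:

```python
from dataclasses import dataclass


@dataclass
class TokenBucket:
    """Minimal token-bucket shaper sketch (illustrative, not NE40E internals)."""
    rate_bps: float      # token fill rate in bits/s = shaped output rate
    depth_bytes: float   # bucket depth = maximum accumulated tokens
    tokens: float = 0.0  # current token balance, in bytes
    last_ts: float = 0.0

    def try_send(self, now: float, pkt_len: int) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth_bytes,
                          self.tokens + (now - self.last_ts) * self.rate_bps / 8)
        self.last_ts = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len  # spend tokens equal to the packet length
            return True             # packet passes
        return False                # not enough tokens: packet waits in queue
```

For example, with a 16 Mbit/s fill rate and a 1500-byte bucket, a second 1500-byte packet sent immediately after the first must wait until enough tokens have been refilled, which is exactly how shaping converts bursts into a steady output rate.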

Normally, the queue cache length determines the burst tolerance. On this device, the default queue cache length for shaping is 1440 packets and cannot be modified. If a traffic burst exceeds this cache, packets are tail-dropped, and no configuration change on the device can prevent it.
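To illustrate why no configuration change can help, the following sketch models a burst arriving faster than the queue can drain. The names are hypothetical; the 1440-packet depth is the only figure taken from the device:

```python
from collections import deque

CACHE_DEPTH = 1440  # default shaping queue cache length on this device


def offer_burst(burst_pkts: int) -> tuple[int, int]:
    """Enqueue a burst into a fixed-depth cache; return (dropped, queued)."""
    queue: deque[int] = deque()
    dropped = 0
    for seq in range(burst_pkts):
        if len(queue) < CACHE_DEPTH:
            queue.append(seq)  # packet fits in the cache
        else:
            dropped += 1       # tail drop: cache is full
    return dropped, len(queue)


# A 2000-packet burst overflows the 1440-packet cache and tail-drops the
# remaining 560 packets, even if the average rate stays well below the
# shaped 16 Mbit/s.
```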

 

3. Test results on site and in the laboratory
A traffic burst was simulated by software on the live network, and it was found that during the packet drops the queue cache was almost full.


[SALP01-diagnose]efu tm 8 0 q cq sta 82 1 e
------------------------------CQ State Information -----------------------------
CQID = 82
Memory cell count: 1177 // Max is 1440 
Queue depth: 295
Tail discard counter:
Color Green Yellow Red UsrDef
Counter 0x0 0x0 0x0 0x0
Backup Pressure: TP=10, COS = 2, BP of port: 0, BP of CQ to FQ: 0

Cache overflow is a momentary event: the snapshot happened to capture 1177 occupied cells, so at the peak of the burst the real occupancy must have been far higher than 1177.

The test in the laboratory also reproduced this problem:


<40E-32>dis port-q statistics interface gi 8/0/0 ef o
[ef]
Total pass: 115,702 packets, 13,421,432 bytes
Total discard: 11,798 packets, 1,474,625 bytes
Drop tail discard: 11,798 packets, 1,474,625 bytes
Wred discard: 0 packets, 0 bytes
Last 30 seconds pass rate: 2,940 pps, 2,728,824 bps
Last 30 seconds discard rate: 217 pps, 217,144 bps
Drop tail discard rate: 217 pps, 217,144 bps
Wred discard rate: 0 pps, 0 bps


TM queue congestion information:
[40E-32-diagnose]efu tm 8 0 q cq sta 2 1 e
------------------------------CQ State Information -----------------------------
CQID = 2
Memory cell count: 1439 //The queue is full 
Queue depth: 1439
Tail discard counter:
Color Green Yellow Red UsrDef
Counter 0x0 0x0 0x0 0x0
Backup Pressure: TP=0, COS = 2, BP of port: 0, BP of CQ to FQ: 0

These results confirm that the packet drops were caused by traffic bursts, and that the shaping function cannot meet the requirements of this service scenario.

Suggestions
None

END