Congestion Avoidance and Congestion Management Configuration Overview
Congestion avoidance prevents a network from being overloaded by applying a packet discarding policy. Congestion management ensures that high-priority services are preferentially processed based on the specified packet scheduling sequence.
On a traditional network, quality of service (QoS) is affected by network congestion. Congestion refers to the reduced forwarding rate and increased delay caused by insufficient network resources. It leads to delayed packet transmission, low throughput, and high resource consumption. Congestion frequently occurs in complex networking environments where packet transmission and the provision of various services are both required.
Congestion avoidance and congestion management are two flow control mechanisms for resolving congestion on a network.
Congestion Avoidance
Congestion avoidance is a flow control mechanism. A system configured with congestion avoidance monitors network resources such as queues and memory buffers. When congestion occurs or worsens, the system discards packets.
The device supports the following congestion avoidance features:
Tail drop
Tail drop is the traditional congestion avoidance mechanism. It processes all packets equally without classifying them into different types. When congestion occurs, packets arriving at the tail of a queue are discarded until the congestion is relieved.
Tail drop causes global TCP synchronization. With tail drop, all newly arriving packets are dropped when congestion occurs, causing all TCP sessions to enter the slow start state simultaneously and packet transmission to slow down. The TCP sessions then resume transmission at roughly the same time, congestion occurs again and triggers another burst of packet drops, and all TCP sessions enter the slow start state once more. This cycle repeats constantly, severely reducing network resource usage.
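The following is a minimal Python sketch of the tail drop behavior described above; the queue capacity and packet representation are illustrative assumptions, not device parameters.

from collections import deque

QUEUE_CAPACITY = 64          # hypothetical buffer depth, in packets

queue = deque()

def enqueue_tail_drop(packet):
    """Tail drop: once the queue is full, every newly arriving packet
    is discarded, regardless of its type or priority."""
    if len(queue) >= QUEUE_CAPACITY:
        return False         # packet dropped at the tail of the queue
    queue.append(packet)
    return True              # packet accepted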
SRED
Only S2700-52P-EI, S2700-52P-PWR-EI, S2710SI, S3700SI, and S3700EI support SRED.
To avoid global TCP synchronization, Random Early Detection (RED) is used. The RED mechanism discards packets randomly so that the transmission speed of multiple TCP connections is not reduced simultaneously. In this manner, global TCP synchronization is prevented, and the rate of TCP traffic, as well as overall network traffic, becomes more stable.
Simple Random Early Detection (SRED) is implemented based on RED. SRED colors packets in red and yellow on the outbound queue of an interface based on packet priorities, and applies the drop threshold and drop probability to red and yellow packets to implement congestion avoidance.
Using SRED, the device discards packets in a queue according to the drop probability, adjusting the rate of outgoing traffic on the interface.
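A minimal Python sketch of a RED-style drop decision follows, assuming the drop probability rises linearly between a lower and an upper queue-length threshold; the thresholds, maximum probability, and the assumption that red packets are dropped more aggressively than yellow packets are illustrative, not device defaults.

import random

def red_drop(queue_len, low_thresh, high_thresh, max_prob):
    """RED-style decision: never drop below low_thresh, always drop at or
    above high_thresh, and drop with a linearly increasing probability
    in between."""
    if queue_len < low_thresh:
        return False
    if queue_len >= high_thresh:
        return True
    ratio = (queue_len - low_thresh) / (high_thresh - low_thresh)
    return random.random() < max_prob * ratio

# Illustrative SRED-style use: different thresholds and drop probabilities
# per packet color (all values are assumptions).
def sred_drop(color, queue_len):
    if color == "red":                         # assumed to be dropped more aggressively
        return red_drop(queue_len, 20, 40, 0.8)
    return red_drop(queue_len, 30, 60, 0.4)    # yellow packets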
Congestion Management
The S2700SI does not support congestion management.
S2700EI models other than the S2700-52P-EI and S2700-52P-PWR-EI do not support WDRR or PQ+WDRR scheduling.
When a network is congested intermittently and delay-sensitive services require higher bandwidth than other services, congestion management adjusts the scheduling order of packets.
PQ scheduling
Priority queuing (PQ) schedules packets in descending order of priority. Packets in queues with a low priority can be scheduled only after all packets in queues with a high priority have been scheduled.
By using PQ scheduling, the device puts packets of delay-sensitive services into queues with higher priorities and packets of other services into queues with lower priorities so that packets of delay-sensitive services are preferentially scheduled.
The disadvantage of PQ is that packets in lower-priority queues are not processed until all higher-priority queues are empty. As a result, a congested higher-priority queue can starve all lower-priority queues.
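A minimal Python sketch of strict priority (PQ) dequeuing over eight queues is shown below; the queue count and numbering (7 = highest priority) are assumptions for illustration.

from collections import deque

queues = [deque() for _ in range(8)]   # index 7 = highest priority (assumed)

def pq_dequeue():
    """Strict priority: always serve the highest-priority non-empty queue;
    a lower-priority queue is served only when all higher ones are empty."""
    for prio in range(7, -1, -1):
        if queues[prio]:
            return queues[prio].popleft()
    return None                        # all queues are empty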
WRR scheduling
Weighted Round Robin (WRR) schedules the queues on an interface in turn, giving each queue a number of scheduling opportunities proportional to its weight.
For example, eight queues are configured on an interface, each with a weight: w7, w6, w5, w4, w3, w2, w1, and w0. The weight represents the proportion of resources that a queue obtains. Assume that the weights of the queues on a 100 Mbit/s interface are 50, 50, 30, 30, 10, 10, 10, and 10, corresponding to w7 through w0. The weights total 200, so the queue with the lowest weight obtains at least 10/200 of the bandwidth, that is, 5 Mbit/s. This ensures that packets in all the queues can be scheduled.
In addition, WRR dynamically adjusts the time spent scheduling each queue. For example, if a queue is empty, WRR skips it and schedules the next queue. This ensures efficient use of bandwidth.
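The sketch below illustrates packet-count-based WRR using the weights from the example above, assuming one packet per weight unit per round (real hardware scheduling granularity differs); it also computes the approximate per-queue bandwidth share, for example 10/200 of 100 Mbit/s = 5 Mbit/s for the lowest-weight queues.

from collections import deque

weights = [10, 10, 10, 10, 30, 30, 50, 50]    # w0..w7 from the example above
link_mbps = 100

# Approximate bandwidth share: weight / sum(weights) * link rate.
# For a weight of 10: 10 / 200 * 100 Mbit/s = 5 Mbit/s.
shares = [w / sum(weights) * link_mbps for w in weights]

queues = [deque() for _ in weights]

def wrr_round():
    """One WRR round (packet-count based): each queue may send up to
    'weight' packets; an empty queue is skipped so its bandwidth is not
    wasted."""
    sent = []
    for qid, queue in enumerate(queues):
        for _ in range(weights[qid]):
            if not queue:
                break
            sent.append(queue.popleft())
    return sent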
WRR scheduling has two disadvantages:
WRR schedules packets based on the number of packets, whereas users are concerned with bandwidth. When the average packet length in each queue is the same or known, users can obtain the required bandwidth by setting WRR weights. When the average packet length in each queue varies, setting WRR weights does not yield the required bandwidth.
Delay-sensitive services, such as voice services, cannot be scheduled in a timely manner.
WDRR scheduling
Weighted Deficit Round Robin (WDRR) implementation is similar to WRR implementation.
The difference between WDRR and WRR is as follows: WRR schedules packets based on the number of packets, whereas WDRR schedules packets based on packet length. If a packet is longer than the remaining credit of its queue, WDRR allows the queue's weight (deficit) value to become negative so that the long packet can still be scheduled. In subsequent rounds, a queue with a negative value is not scheduled until its value becomes positive again.
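A minimal Python sketch of byte-based WDRR with the negative-deficit behavior described above follows; the quantum, weights, and packet representation (each packet stored as its length in bytes) are illustrative assumptions.

from collections import deque

queues = [deque() for _ in range(8)]           # each entry: packet length in bytes
weights = [10, 10, 10, 10, 30, 30, 50, 50]     # illustrative weights
QUANTUM_BYTES = 1500                           # assumed credit per weight unit
deficits = [0] * 8

def wdrr_round():
    """One WDRR round (byte based): each queue earns weight * QUANTUM_BYTES of
    credit; a packet longer than the remaining credit is still sent, driving
    the deficit negative, and the queue is then skipped in later rounds until
    its deficit becomes positive again."""
    sent = []
    for qid, queue in enumerate(queues):
        deficits[qid] += weights[qid] * QUANTUM_BYTES
        while queue and deficits[qid] > 0:
            pkt_len = queue.popleft()
            deficits[qid] -= pkt_len           # may become negative for a long packet
            sent.append(pkt_len)
        if not queue:
            deficits[qid] = 0                  # an empty queue carries no credit
    return sent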
WDRR offsets the disadvantages of both PQ scheduling and WRR scheduling: in PQ scheduling, packets in lower-priority queues may not be scheduled for a long time; in WRR scheduling, bandwidth is allocated improperly when the packet lengths of the queues differ or vary.
WDRR cannot schedule delay-sensitive services such as voice services in a timely manner.
When all the queues participating in WDRR scheduling have the same weight, the result of WDRR scheduling is the same as that of DRR scheduling.
PQ+WRR/PQ+WDRR scheduling
PQ, WRR, and WDRR have their own advantages and disadvantages. If only PQ scheduling is used, packets in the queues with a low priority may not obtain bandwidth for a long time. If only WRR or WDRR scheduling is used, delay-sensitive services, such as voice services, cannot be scheduled in a timely manner. PQ+WRR or PQ+WDRR scheduling integrates the advantages of PQ scheduling and WRR or WDRR scheduling and can avoid their disadvantages.
By using PQ+WRR or PQ+WDRR scheduling, the device puts important packets, such as protocol packets and packets of delay-sensitive services, into the queue that uses PQ scheduling and allocates bandwidth to that queue. The device then puts other packets into the queues that use WRR or WDRR scheduling based on the packet priority. Packets in the WRR or WDRR queues are scheduled in turn based on their weights.
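A minimal Python sketch of combined PQ+WRR scheduling follows, assuming one queue is served with strict priority and the remaining queues share the residual scheduling opportunities by WRR; the queue split and weights are illustrative.

from collections import deque

pq_queue = deque()                            # delay-sensitive / protocol packets
wrr_queues = [deque() for _ in range(7)]      # remaining traffic classes
wrr_weights = [10, 10, 10, 10, 20, 20, 20]    # illustrative weights

def pq_wrr_round():
    """Serve the strict-priority queue first; only then do the WRR queues
    share the remaining scheduling opportunities according to their weights."""
    sent = []
    while pq_queue:                            # the PQ queue always drains first
        sent.append(pq_queue.popleft())
    for qid, queue in enumerate(wrr_queues):   # then one WRR round for the rest
        for _ in range(wrr_weights[qid]):
            if not queue:
                break
            sent.append(queue.popleft())
    return sent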