
Configuration Guide - QoS
CloudEngine 12800 and 12800E V200R002C50

This document describes the configurations of QoS functions, including MQC, priority mapping, traffic policing, traffic shaping, interface-based rate limiting, congestion avoidance, congestion management, packet filtering, redirection, traffic statistics, and ACL-based simplified traffic policy.

Overview of Congestion Avoidance and Congestion Management

Congestion avoidance prevents a network from being overloaded using a packet discarding policy. Congestion management ensures that high-priority services are preferentially processed based on the specified packet scheduling sequence.

On a traditional network, quality of service (QoS) is degraded by network congestion. Congestion is a drop in the data forwarding rate, together with increased delay, caused by insufficient network resources. Congestion delays packet transmission, lowers throughput, and raises resource consumption. It occurs frequently in complex networking environments where packet transmission and the provisioning of various services are both required.

Congestion avoidance and congestion management are two flow control mechanisms for resolving congestion on a network.

Congestion Avoidance

Congestion avoidance is a flow control mechanism. A system configured with congestion avoidance monitors network resources such as queues and memory buffers. When congestion occurs or worsens, the system discards packets.

The device supports the following congestion avoidance features:

  • Tail drop

    Tail drop is the traditional congestion avoidance mechanism. It treats all packets equally, without classifying them into different types. When congestion occurs, packets arriving at the tail of a queue are discarded until the congestion is relieved.

    Tail drop causes global TCP synchronization. With the tail drop mechanism, all newly arrived packets are dropped when congestion occurs, causing all TCP sessions to enter the slow start state simultaneously and packet transmission to slow down. All TCP sessions then restart transmission at roughly the same time, congestion occurs again, another burst of packets is dropped, and all TCP sessions enter the slow start state once more. This cycle repeats constantly, severely reducing network resource usage.

    By default, an interface uses tail drop.
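The tail drop behavior described above can be sketched as a bounded FIFO. This is only an illustration of the mechanism; the class and parameter names are hypothetical and do not correspond to device commands:

```python
from collections import deque

class TailDropQueue:
    """Minimal sketch of tail drop: a bounded FIFO that discards
    any packet arriving while the queue is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        # When the queue is full (congestion), the newly arrived packet
        # is discarded regardless of its priority or service type.
        if len(self.queue) >= self.capacity:
            self.dropped += 1
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

q = TailDropQueue(capacity=3)
results = [q.enqueue(p) for p in ["p1", "p2", "p3", "p4", "p5"]]
print(results)    # [True, True, True, False, False]
print(q.dropped)  # 2
```

Because the drop decision looks only at queue length, every flow sharing the queue loses packets at the same moment, which is exactly what triggers the global TCP synchronization described above.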

  • WRED

    Weighted Random Early Detection (WRED) randomly discards packets based on drop parameters. WRED defines different drop policies for packets of different services: packets are discarded based on their priorities, so packets with higher priorities have a lower drop probability. In addition, because WRED discards packets randomly, the rates of different TCP connections are reduced at different times, which prevents global TCP synchronization.

    WRED defines an upper threshold and a lower threshold for the length of each queue. The packet drop policy is as follows:

    • When the length of a queue is shorter than the lower threshold, no packet is discarded.

    • When the length of a queue exceeds the upper threshold, all received packets are discarded.

    • When the length of a queue is between the lower threshold and the upper threshold, incoming packets are discarded randomly. WRED generates a random number for each incoming packet and compares it with the drop probability of the current queue. If the random number is smaller than the drop probability, the packet is discarded. A longer queue indicates a higher drop probability.

    Congestion avoidance takes effect only for known unicast TCP packets.
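The three-branch WRED drop policy can be expressed as a single per-packet decision. The sketch below assumes a linear increase of the drop probability between the two thresholds, which is a common model; the function and parameter names are illustrative, not device configuration:

```python
import random

def wred_drop(queue_len, low, high, max_prob):
    """Decide whether to drop one incoming packet under WRED.

    queue_len: current queue length (packets)
    low, high: lower and upper thresholds for this queue
    max_prob:  drop probability reached at the upper threshold
    """
    if queue_len < low:
        return False               # below lower threshold: never drop
    if queue_len >= high:
        return True                # at/above upper threshold: always drop
    # Between thresholds: drop probability grows linearly with queue length,
    # so a longer queue means a higher chance of dropping this packet.
    drop_prob = max_prob * (queue_len - low) / (high - low)
    return random.random() < drop_prob

print(wred_drop(5, 10, 40, 0.5))   # False: queue shorter than lower threshold
print(wred_drop(50, 10, 40, 0.5))  # True: queue beyond upper threshold
```

Per-priority differentiation comes from giving each service class its own `(low, high, max_prob)` triple, so higher-priority packets see a more tolerant drop curve.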

Congestion Management

When a network is congested intermittently and delay-sensitive services require higher bandwidth than other services, congestion management adjusts the scheduling order of packets.

The device supports the following congestion management features:
  • PQ scheduling

    Priority queuing (PQ) schedules packets in descending order of priority. Packets in queues with a low priority can be scheduled only after all packets in queues with a high priority have been scheduled.

    By using PQ scheduling, the device puts packets of delay-sensitive services into queues with higher priorities and packets of other services into queues with lower priorities so that packets of delay-sensitive services are preferentially scheduled.

    The disadvantage of PQ is that packets in lower-priority queues are not processed until all higher-priority queues are empty. As a result, a congested higher-priority queue can starve all lower-priority queues.
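The strict ordering of PQ scheduling can be sketched as follows. This is a minimal illustration under the assumption that queue 0 is the highest priority; the class name is hypothetical:

```python
from collections import deque

class PriorityQueuing:
    """Sketch of strict PQ: queue 0 has the highest priority, and a
    lower-priority queue is served only when every higher-priority
    queue is empty."""

    def __init__(self, num_queues):
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, packet, priority):
        self.queues[priority].append(packet)

    def dequeue(self):
        # Scan from the highest priority down; the first non-empty
        # queue wins, so lower queues wait indefinitely under load.
        for q in self.queues:
            if q:
                return q.popleft()
        return None

pq = PriorityQueuing(3)
pq.enqueue("voice1", 0)
pq.enqueue("web", 2)
pq.enqueue("voice2", 0)
order = [pq.dequeue() for _ in range(3)]
print(order)  # ['voice1', 'voice2', 'web']
```

The example shows both properties from the text: delay-sensitive traffic in queue 0 drains first, and the "web" packet is not served until queue 0 is empty.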

  • WFQ scheduling

    Fair Queuing (FQ) allocates network resources evenly so that the delay and jitter of all flows are minimized:
    • Packets in different queues are scheduled fairly, so the delays of all flows differ only slightly.
    • Packets of different sizes are scheduled fairly. If many large and small packets in different queues need to be sent, small packets are scheduled first, reducing the overall jitter of each flow.

    Compared with FQ, Weighted Fair Queuing (WFQ) takes priorities into account when scheduling packets: queues with higher priorities obtain more scheduling opportunities than queues with lower priorities.

    Before packets enter queues, WFQ classifies the packets based on:
    • Session information

      WFQ classifies flows based on session information, including the protocol type, source and destination TCP or UDP port numbers, source and destination IP addresses, and the precedence field in the ToS field. In addition, WFQ provides a large number of queues and distributes flows evenly into the queues to smooth out the delay. When flows leave the queues, WFQ allocates bandwidth on the outbound interface to each flow based on its precedence: flows with the lowest precedence value obtain the least bandwidth, and flows with the highest precedence value obtain the most bandwidth.

    • Priority

      The priority mapping technique marks local priorities for traffic, and each local priority maps to a queue number. Each interface is allocated four or eight queues, and packets enter queues based on the queue number. By default, all queues have the same weight, so traffic shares the interface bandwidth evenly. Users can change the weights so that high-priority and low-priority packets are allocated bandwidth according to their weight percentages.
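The weight-percentage bandwidth allocation described above reduces to a simple proportion: each queue's share of the link equals its weight divided by the sum of all weights. The sketch below shows only this steady-state share; real schedulers enforce it per scheduling round, and the function name is illustrative:

```python
def wfq_bandwidth_share(weights, link_bandwidth):
    """Steady-state bandwidth share of each WFQ queue.

    weights:        per-queue weight values
    link_bandwidth: total outbound bandwidth (e.g. in Mbit/s)
    """
    total = sum(weights)
    # Each queue receives bandwidth in proportion to its weight.
    return [link_bandwidth * w / total for w in weights]

# Four queues with weights 1:2:3:4 on a 100 Mbit/s link
shares = wfq_bandwidth_share([1, 2, 3, 4], 100)
print(shares)  # [10.0, 20.0, 30.0, 40.0]
```

With equal weights (the default mentioned above), the same function returns identical shares for every queue.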

  • PQ+WFQ scheduling

    PQ and WFQ have their own advantages and disadvantages. If only PQ scheduling is used, packets in the queues with a low priority cannot obtain bandwidth. If only WFQ scheduling is used, delay-sensitive services, such as voice services, cannot be scheduled in a timely manner. PQ+WFQ scheduling integrates the advantages of PQ scheduling and WFQ scheduling and offsets their disadvantages.

    By using PQ+WFQ scheduling, the device puts protocol packets and packets of delay-sensitive services into the PQ queue and allocates bandwidth to that queue first. The device then puts other packets into WFQ queues based on packet priority. Packets in WRR or DRR queues are scheduled in turn according to their weights.
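The combined behavior can be sketched as one strict-priority queue in front of a weighted round robin over the remaining queues. This is a simplified model of the PQ+WFQ idea, not the device's actual scheduler; all names are hypothetical:

```python
from collections import deque
import itertools

class PqWfqScheduler:
    """Sketch of PQ+WFQ: a strict-priority queue for delay-sensitive
    traffic, plus weighted round robin (WRR) among the other queues."""

    def __init__(self, wfq_weights):
        self.pq = deque()
        self.wfq = [deque() for _ in wfq_weights]
        # Expand weights into a round-robin service pattern,
        # e.g. weights [2, 1] -> serve queue 0, 0, 1, 0, 0, 1, ...
        pattern = [i for i, w in enumerate(wfq_weights) for _ in range(w)]
        self.pattern_len = len(pattern)
        self.rotation = itertools.cycle(pattern)

    def dequeue(self):
        if self.pq:                # strict priority: PQ always drains first
            return self.pq.popleft()
        # One full pass over the WRR pattern visits every queue at least once.
        for _ in range(self.pattern_len):
            q = self.wfq[next(self.rotation)]
            if q:
                return q.popleft()
        return None

s = PqWfqScheduler([2, 1])
s.pq.append("voice")
s.wfq[0].extend(["a1", "a2"])
s.wfq[1].append("b1")
out = [s.dequeue() for _ in range(4)]
print(out)  # ['voice', 'a1', 'a2', 'b1']
```

The "voice" packet is served before any WFQ traffic, and the 2:1 weights then give queue 0 two service opportunities for every one given to queue 1.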

Updated: 2019-03-21

Document ID: EDOC1000166605
