
CLI-based Configuration Guide - QoS

AR100-S, AR110-S, AR120-S, AR150-S, AR160-S, AR200-S, AR1200-S, AR2200-S, and AR3200-S V200R009

Overview of Congestion Management and Congestion Avoidance

Congestion avoidance prevents a network from being overloaded by applying a packet drop policy. Congestion management ensures that high-priority services are processed preferentially, based on a specified packet scheduling sequence.

On a traditional network, quality of service (QoS) is degraded by network congestion. Congestion is a drop in the data forwarding rate, accompanied by increased delay, that results from insufficient network resources. It causes delayed packet transmission, low throughput, and high resource consumption. Congestion frequently occurs in complex networking environments where packet transmission and a variety of services must be provided at the same time.

Congestion avoidance and congestion management are two flow control mechanisms for resolving congestion on a network.

Congestion Avoidance

Congestion avoidance is a flow control mechanism. A system configured with congestion avoidance monitors the usage of network resources such as queues and memory buffers. When congestion occurs or worsens, the system starts discarding packets.

The device supports the following congestion avoidance features:

  • Tail drop

    Tail drop is the traditional congestion avoidance mechanism. It treats all packets equally, without classifying them into different types. When congestion occurs, packets arriving at the tail of a full queue are discarded until the congestion is relieved.

    Tail drop causes global TCP synchronization. With tail drop, all newly arrived packets are dropped when congestion occurs, causing all TCP sessions to enter the slow-start state simultaneously and packet transmission to slow down. All TCP sessions then speed up again at roughly the same time, congestion recurs, another burst of packets is dropped, and all sessions enter slow start again. This cycle repeats constantly, severely reducing network resource utilization.
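The tail drop behavior described above can be sketched in a few lines of Python; the queue capacity and packet names below are illustrative, not taken from any device configuration:

```python
from collections import deque

def tail_drop_enqueue(queue, packet, max_len):
    """Tail drop: once the queue is full, every newly arriving packet
    is discarded, with no regard for its type or priority."""
    if len(queue) >= max_len:
        return False          # congestion: drop the packet at the tail
    queue.append(packet)
    return True

# With a capacity of 3, the first three packets are queued and the
# remaining arrivals are tail-dropped.
q = deque()
accepted = [tail_drop_enqueue(q, f"pkt{i}", max_len=3) for i in range(5)]
```

Every session whose packets hit the full queue sees losses at the same moment, which is exactly what triggers the global synchronization described above.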

  • WRED

    Weighted Random Early Detection (WRED) randomly discards packets based on drop parameters and defines different drop policies for packets of different services. WRED discards packets based on packet priorities: the higher a packet's priority, the lower its drop probability. In addition, because WRED discards packets randomly, the rates of different TCP connections are reduced at different times, which prevents global TCP synchronization.

    WRED defines an upper threshold and a lower threshold for the length of each queue. The packet drop policy is as follows:

    • When the length of a queue is shorter than the lower threshold, no packet is discarded.

    • When the length of a queue exceeds the upper threshold, all received packets are discarded.

    • When the length of a queue is between the lower threshold and the upper threshold, incoming packets are discarded randomly. The device generates a random number for each incoming packet and compares it with the drop probability of the current queue; if the random number is smaller than the drop probability, the packet is discarded. The longer the queue, the higher the drop probability.
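The three-rule drop policy can be sketched as follows. The thresholds and maximum drop probability are hypothetical parameters; a real device maintains a separate set per packet priority, so higher-priority packets get higher thresholds or a lower maximum probability:

```python
import random

def wred_should_drop(queue_len, low, high, max_drop_prob, rng=random.random):
    """WRED decision for one arriving packet: never drop below the
    lower threshold, always drop above the upper threshold, and in
    between drop with a probability that rises with queue length."""
    if queue_len < low:
        return False
    if queue_len >= high:
        return True
    drop_prob = max_drop_prob * (queue_len - low) / (high - low)
    return rng() < drop_prob   # random number below the drop probability -> drop
```

Because the decision is probabilistic, different TCP connections lose packets at different moments, which is what breaks the synchronization cycle.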

Congestion Management

When a network is congested intermittently and delay-sensitive services require higher bandwidth than other services, congestion management adjusts the scheduling order of packets.

The device supports the following congestion management features:
  • PQ scheduling

    Priority Queuing (PQ) schedules packets in descending order of priority. Queues with lower priorities are processed only after all the queues with higher priorities have been processed.

    By using PQ scheduling, the device puts packets of delay-sensitive services into queues with higher priorities and packets of other services into queues with lower priorities. In this manner, packets of key services can be transmitted first.

    PQ scheduling has a disadvantage: if queues with higher priorities hold a large number of packets during congestion, packets in lower-priority queues may not be transmitted for a long time.
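The strict-priority behavior, including the starvation risk just mentioned, can be sketched as follows (queue contents and priority values are illustrative):

```python
def pq_dequeue(queues):
    """Strict priority queuing: always serve the highest-priority
    non-empty queue; lower-priority queues wait until every higher
    one is empty."""
    for prio in sorted(queues, reverse=True):
        if queues[prio]:
            return queues[prio].pop(0)
    return None  # all queues empty
```

Dequeuing from `{7: ["voice"], 3: ["data1", "data2"]}` yields the voice packet before any data packet; as long as priority-7 packets keep arriving, the priority-3 queue is never served.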

  • WRR scheduling

    Weighted Round Robin (WRR) scheduling ensures that packets in all the queues are scheduled in turn.

    For example, eight queues are configured on an interface, each with a weight: w7, w6, w5, w4, w3, w2, w1, and w0. The weight represents the share of resources the queue obtains. Suppose the weights of the queues on a 100 Mbit/s interface are 50, 50, 30, 30, 10, 10, 10, and 10, corresponding to w7 through w0. The weights sum to 200, so the queue with the lowest weight is guaranteed at least 10/200 × 100 Mbit/s = 5 Mbit/s of bandwidth. This ensures that packets in all the queues can be scheduled.

    In addition, WRR can dynamically adjust scheduling among queues: if a queue is empty, WRR skips it and schedules the next queue. This ensures efficient use of bandwidth.

    WRR scheduling has two disadvantages:
    • WRR schedules packets based on the number of packets. When the average packet length in each queue is the same or known, you can obtain the required bandwidth by setting WRR weight values. When the average packet length in each queue is variable, you cannot obtain the required bandwidth by setting WRR weight values.
    • Delay-sensitive services, such as voice services, cannot be scheduled in a timely manner.
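The bandwidth guarantee in the example above follows directly from the ratio of each weight to the weight sum; a minimal sketch:

```python
def wrr_bandwidth(link_mbps, weights):
    """Each queue's guaranteed bandwidth is its weight divided by
    the sum of all weights, multiplied by the link bandwidth."""
    total = sum(weights)
    return [link_mbps * w / total for w in weights]

# The eight weights from the example sum to 200, so each weight-10
# queue is guaranteed 10/200 x 100 Mbit/s = 5 Mbit/s.
shares = wrr_bandwidth(100, [50, 50, 30, 30, 10, 10, 10, 10])
```

Note that these shares hold only in terms of packets when average packet lengths are equal, which is precisely the first disadvantage listed above.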
  • DRR scheduling

    Implementation of Deficit Round Robin (DRR) is similar to that of WRR.

    The difference between DRR and WRR is as follows: WRR schedules packets based on the number of packets, whereas DRR schedules packets based on packet length. If a packet is longer than the queue's remaining credit, DRR lets the queue's deficit counter go negative so that the long packet can still be scheduled. In subsequent rounds, that queue is not scheduled again until its deficit becomes positive.

    DRR offsets the disadvantages of both PQ scheduling (packets in lower-priority queues may not be scheduled for a long time) and WRR scheduling (bandwidth is allocated improperly when packet lengths differ across queues or vary over time).

    DRR cannot schedule delay-sensitive services such as voice services in time.
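One DRR round, including the negative-deficit behavior described above, can be sketched as follows; the quanta and packet lengths (in bytes) are illustrative:

```python
def drr_round(queues, quanta, deficits):
    """One round of DRR. Each queue earns its quantum of credit and
    sends packets while its deficit is positive. Sending an oversized
    packet may drive the deficit negative, in which case the queue
    only accumulates credit in later rounds until it turns positive."""
    sent = []
    for i, q in enumerate(queues):
        deficits[i] += quanta[i]
        while q and deficits[i] > 0:
            length = q.pop(0)
            deficits[i] -= length      # may go negative for a long packet
            sent.append((i, length))
    return sent
```

With queues holding packets of lengths [1500, 200] and [300] and a quantum of 1000 each, the first round sends the 1500-byte packet (leaving a deficit of -500) and the 300-byte packet; the 200-byte packet waits until the second round, when the first queue's deficit is positive again.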

  • WFQ scheduling

    Fair queuing (FQ) allocates network resources evenly to optimize the delay and jitter of all flows. Weighted FQ (WFQ) additionally schedules packets based on priority, serving higher-priority packets more often than lower-priority packets.

    WFQ can automatically classify flows based on session information, including the protocol type, source and destination TCP or UDP port numbers, source and destination IP addresses, and the precedence bits in the Type of Service (ToS) field. In addition, WFQ provides a large number of queues and distributes flows evenly across them to smooth out delay. When flows leave the queues, WFQ allocates outbound-interface bandwidth to each flow based on its precedence: flows with the lowest priorities obtain the least bandwidth.

  • PQ+WRR/PQ+DRR/PQ+WFQ scheduling

    PQ, WRR, DRR, and WFQ each have advantages and disadvantages. If only PQ scheduling is used, packets in lower-priority queues may obtain no bandwidth. If only WRR, DRR, or WFQ scheduling is used, delay-sensitive services cannot be scheduled in time. PQ+WRR, PQ+DRR, or PQ+WFQ scheduling combines the advantages of PQ scheduling with those of WRR, DRR, or WFQ scheduling and offsets their disadvantages.

    By using PQ+WRR, PQ+DRR, or PQ+WFQ scheduling, the device puts important packets, such as protocol packets and packets of delay-sensitive services, into the PQ queue and allocates bandwidth to it. The device then puts other packets into WRR, DRR, or WFQ queues based on packet priority, and packets in those queues are scheduled in turn.
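A combined PQ+WRR dequeue step can be sketched as follows; the scheduler state, weights, and queue contents are illustrative:

```python
def pq_wrr_dequeue(pq_queue, wrr_queues, weights, state):
    """Serve the strict-priority queue first. Only when it is empty,
    pick the next packet by weighted round robin over the remaining
    queues: each queue sends up to its weight of packets per cycle."""
    if pq_queue:
        return pq_queue.pop(0)
    n = len(wrr_queues)
    for _ in range(2 * n):               # at most one full credit refresh
        i = state["idx"]
        if wrr_queues[i] and state["credit"][i] > 0:
            state["credit"][i] -= 1
            if state["credit"][i] == 0:  # used up this cycle's share
                state["credit"][i] = weights[i]
                state["idx"] = (i + 1) % n
            return wrr_queues[i].pop(0)
        state["credit"][i] = weights[i]  # skip an empty queue
        state["idx"] = (i + 1) % n
    return None
```

With weights [2, 1], a PQ packet "voice", and WRR queues ["a1", "a2", "a3"] and ["b1"], the dequeue order is voice first, then two packets from the first WRR queue for each packet from the second.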

  • CBQ scheduling

    Class-based queueing (CBQ) is an extension of WFQ and matches packets with traffic classifiers. CBQ classifies packets based on the IP precedence or Differentiated Services Code Point (DSCP) priority, inbound interface, or 5-tuple (protocol type, source IP address and mask, destination IP address and mask, source port range, and destination port range). Then CBQ puts packets into different queues. If packets do not match any configured traffic classifiers, CBQ matches packets with the default traffic classifier.

    CBQ provides the following types of queues:
    • Expedited Forwarding (EF) queues are applied to short-delay services.

      An EF queue has the highest priority. You can put one or more types of packets into EF queues and set a different bandwidth for each type.

      In addition to common EF queues, the device provides a special EF queue: the Low Latency Queuing (LLQ) queue, which has the shortest delay. LLQ provides good QoS assurance for delay-sensitive services such as VoIP.

      VoIP traffic in EF queues is usually carried over the User Datagram Protocol (UDP). Because UDP does not respond to packet loss the way TCP does, use the tail drop method rather than WRED for these queues.

    • Assured Forwarding (AF) queues are applied to key data services that require assured bandwidth.

      Each AF queue corresponds to one type of packets. You can set bandwidth for each type of packets. During scheduling, the system sends packets based on the configured bandwidth. AF implements fair scheduling. If an interface has remaining bandwidth, packets in AF queues obtain the remaining bandwidth based on weights. When congestion occurs, each type of packets can obtain the minimum bandwidth.

      If the length of an AF queue reaches the maximum value, the tail drop method is used by default. You can choose to use WRED.

    • Best-Effort (BE) queues are applied to best-effort services that require no strict QoS assurance.

      If packets do not match any configured traffic classifier, they match the default traffic classifier defined by the system. You can configure AF queues and bandwidth for the default traffic classifier, but in most situations BE queues are used. BE uses WFQ scheduling, so the system schedules packets matching the default traffic classifier on a per-flow basis.

      If the length of a BE queue reaches the maximum value, the tail drop method is used by default. You can choose to use WRED.
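The classification step can be sketched with hypothetical classifiers based on the DSCP value. EF = 46 and the AF code points used below are standard DSCP assignments, but the mapping of classes to queues here is illustrative:

```python
def cbq_classify(pkt):
    """Match a packet against classifiers in order; packets that
    match no configured classifier fall through to the default
    traffic classifier and land in the BE queue."""
    dscp = pkt.get("dscp")
    if dscp == 46:                   # EF: delay-sensitive traffic
        return "EF"
    if dscp in (10, 18, 26, 34):     # AF11/AF21/AF31/AF41: assured data
        return "AF"
    return "BE"                      # default traffic classifier
```

A real device can also classify on the inbound interface or the full 5-tuple; DSCP alone keeps the sketch short.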

    NOTE:

    After packet fragments are scheduled into queues, the device may randomly discard some of them. If any fragment is lost, the original packet cannot be reassembled.

Updated: 2019-05-17

Document ID: EDOC1000174115
