Configuration Guide - QoS

CloudEngine 12800 and 12800E V200R003C00

This document describes the configurations of QoS functions, including MQC, priority mapping, traffic policing, traffic shaping, interface-based rate limiting, congestion avoidance, congestion management, packet filtering, redirection, traffic statistics, and ACL-based simplified traffic policy.

Congestion Management

As more network services emerge and users demand higher network quality, limited bandwidth cannot meet all requirements, and congestion causes delay and packet loss. When a network is congested intermittently and delay-sensitive services require higher QoS than delay-insensitive services, congestion management is required. If congestion persists after congestion management is configured, the bandwidth needs to be increased. Congestion management queues and schedules packet flows before they are sent.

A device interface supports Priority Queuing (PQ), Deficit Round Robin (DRR), Weighted Fair Queuing (WFQ), PQ+DRR, and PQ+WFQ for congestion management.

NOTE:

On the CE12800E, a device interface supports PQ, DRR, and PQ+DRR for congestion management. On the other models, a device interface supports PQ, WFQ, and PQ+WFQ.

On the device, each interface has eight queues in the outbound direction, identified by index numbers 0 to 7. Based on the mappings between local priorities and queues, the device places classified packets into these queues and then schedules them using the configured queue scheduling mechanism.
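
This dispatch step can be pictured with a short Python sketch (an illustrative model only, not the device implementation; the local-priority-to-queue table shown here is a hypothetical one-to-one mapping, whereas on the device the mapping is configurable):

    from collections import deque

    NUM_QUEUES = 8
    queues = [deque() for _ in range(NUM_QUEUES)]   # one outbound queue per index 0..7

    # Hypothetical one-to-one mapping from local priority to queue index.
    local_priority_to_queue = {prio: prio for prio in range(NUM_QUEUES)}

    def enqueue(packet, local_priority):
        """Place a classified packet into the outbound queue mapped to its local priority."""
        queues[local_priority_to_queue[local_priority]].append(packet)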

  • PQ scheduling

    PQ scheduling is designed for core services and serves queues in descending order of priority. Queues with lower priorities are processed only after all queues with higher priorities are empty. In PQ scheduling, packets of core services are placed into a higher-priority queue, and packets of non-core services, such as email, are placed into a lower-priority queue. Core services are processed first, and non-core services are sent in the intervals when no core-service packets are waiting.

    As shown in Figure 6-4, queues 7 to 0 are in descending order of priority. Packets in queue 7 are processed first; the scheduler processes packets in queue 6 only after queue 7 becomes empty. Packets in queue 6 are sent at the link rate while queue 7 is empty, packets in queue 5 are sent at the link rate while queues 7 and 6 are empty, and so on.

    PQ scheduling is well suited to short-delay services. Assume that data flow X is mapped to the highest-priority queue on each node. When packets of data flow X reach a node, they are processed first.

    The PQ scheduling mechanism, however, may starve packets in lower-priority queues. For example, if the data flows mapped to queue 7 arrive at 100% of the link rate for a period, the scheduler does not process the flows in queues 6 to 0.

    To prevent starvation of packets in some queues, upstream devices need to accurately define the service characteristics of data flows so that the traffic mapped to queue 7 does not exceed a certain percentage of the link capacity. In this way, queue 7 does not occupy the entire link, and the scheduler can process packets in lower-priority queues.

    Figure 6-4 PQ scheduling
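
    The dequeue side of strict PQ can be sketched as follows (an illustrative Python model continuing the enqueue sketch above, assuming each queue is a collections.deque; it is not the device implementation). The loop always serves the highest-indexed non-empty queue, which is exactly why lower-priority queues starve when queue 7 never empties.

      def pq_dequeue(queues):
          """Strict priority: serve the highest-priority (highest-index) non-empty queue."""
          for index in range(len(queues) - 1, -1, -1):   # check queue 7 first, queue 0 last
              if queues[index]:
                  return queues[index].popleft()
          return None                                    # all queues empty, nothing to send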

  • DRR scheduling

    DRR is also based on round robin (RR) scheduling and solves a problem of Weighted Round Robin (WRR): because WRR schedules packets by packet count, a queue carrying small packets obtains less bandwidth than a queue carrying large packets under the same weight. DRR takes the packet length into account, ensuring that queues are scheduled fairly in terms of bandwidth.

    The deficit indicates the bandwidth deficit of each queue, with an initial value of 0. In each round, the system allocates bandwidth to each queue based on its weight and adds it to the deficit. If the deficit of a queue is greater than 0, the queue participates in scheduling: the device sends a packet and subtracts the length of the sent packet from the deficit. If the deficit of a queue is smaller than 0, the queue does not participate in scheduling, and its current deficit is carried over to the next round.

    Figure 6-5 Queue weights

    In Figure 6-5, the weights of Q7, Q6, Q5, Q4, Q3, Q2, Q1, and Q0 are set to 40, 30, 20, 10, 40, 30, 20, and 10, respectively. During scheduling, Q7, Q6, Q5, Q4, Q3, Q2, Q1, and Q0 obtain 20%, 15%, 10%, 5%, 20%, 15%, 10%, and 5% of the bandwidth, respectively (each weight divided by the total weight of 200). Q7 and Q6 are used as examples to describe DRR scheduling. Assume that Q7 obtains a quantum of 400 bytes per round and Q6 obtains a quantum of 300 bytes per round.

    • First round of scheduling

      Deficit[7][1] = 0+400 = 400

      Deficit[6][1] = 0+300 = 300

      After a 900-byte packet in Q7 and a 400-byte packet in Q6 are sent, the deficits become:

      Deficit[7][1] = 400 - 900 = -500

      Deficit[6][1] = 300 - 400 = -100

    • Second round of scheduling

      Deficit[7][2] = -500 + 400 = -100

      Deficit[6][2] = -100 + 300 = 200

      The packet in Q7 is not scheduled because the deficit of Q7 is negative. After the 300-byte packet in Q6 is sent, the deficit becomes:

      Deficit[6][2] = 200 - 300 = -100

    • Third round of scheduling

      Deficit[7][3] = -100 + 400 = 300

      Deficit[6][3] = -100 + 300 = 200

      After a 600-byte packet in Q7 and a 500-byte packet in Q6 are sent, the deficits become:

      Deficit[7][3] = 300 - 600 = -300

      Deficit[6][3] = 200 - 500 = -300

      This process repeats, and over time Q7 and Q6 obtain 20% and 15% of the bandwidth, respectively. This shows that you can obtain the required bandwidth for each queue by setting its weight.

    In DRR scheduling, short-delay services still cannot be scheduled in time.
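
    The worked example above can be reproduced with a small Python sketch of this DRR variant (one packet per eligible queue per round; the quanta and packet lengths come from the example, while everything else is purely illustrative):

      # Quanta per round (bytes), from the example: Q7 = 400, Q6 = 300.
      quanta   = {"Q7": 400, "Q6": 300}
      deficits = {"Q7": 0,   "Q6": 0}
      # Packet lengths (bytes) waiting in each queue, matching the three rounds above.
      packets  = {"Q7": [900, 600], "Q6": [400, 300, 500]}

      for rnd in range(1, 4):
          for q in ("Q7", "Q6"):
              deficits[q] += quanta[q]                  # credit the queue with its quantum
              if deficits[q] > 0 and packets[q]:
                  deficits[q] -= packets[q].pop(0)      # send one packet, charge its length
          print(rnd, deficits)  # round 1: -500/-100, round 2: -100/-100, round 3: -300/-300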

  • WFQ scheduling

    Fair Queuing (FQ) allocates network resources equally so that the delay and jitter of all flows are minimized.
    • Packets in different queues are scheduled fairly, so the delays of all flows differ only slightly.
    • Packets of different sizes are scheduled fairly. If many large and small packets in different queues need to be sent, small packets are scheduled first, reducing the overall jitter of each flow.

    Compared with FQ, WFQ also takes priority into account: packets with higher priorities are scheduled before packets with lower priorities.

    Before packets enter queues, WFQ classifies the packets based on:
    • Session information

      WFQ classifies flows based on session information, including the protocol type, source and destination TCP or User Datagram Protocol (UDP) port numbers, source and destination IP addresses, and the precedence field in the ToS field. Additionally, the system provides a large number of queues and distributes flows evenly into them to smooth out the delay. When flows leave the queues, WFQ allocates the bandwidth on the outbound interface to each flow based on its precedence: flows with the lowest priorities obtain the least bandwidth. Only packets matching the default traffic classifier in class-based queuing (CBQ) can be classified based on session information.

    • Priority

      Priority mapping marks traffic with local priorities, and each local priority maps to a queue number. Each interface is allocated four or eight queues, and packets enter the queues according to their local priorities. By default, all queue weights are the same and flows share the interface bandwidth equally. You can change the weights so that high-priority and low-priority packets are allocated bandwidth according to the weight percentages.

    Figure 6-6 WFQ scheduling
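
    One common way to reason about weight-based scheduling of this kind is a virtual-finish-time model (a simplified, flow-level Python sketch of the general WFQ technique, not the device's internal algorithm): each packet is assigned a finish time that grows by the packet length divided by the queue weight, and packets are emitted in increasing finish-time order, so queues with larger weights drain faster.

      import heapq

      def wfq_order(queue_packets, weights):
          """Simplified WFQ: emit packets in increasing virtual-finish-time order.
          queue_packets maps a queue index to a list of packet lengths (bytes);
          weights maps a queue index to its configured weight."""
          heap = []
          for q, lengths in queue_packets.items():
              finish = 0.0
              for length in lengths:
                  finish += length / weights[q]     # larger weight => smaller increment
                  heapq.heappush(heap, (finish, q, length))
          order = []
          while heap:
              _, q, length = heapq.heappop(heap)
              order.append((q, length))
          return order

      # Queue 5 has twice the weight of queue 4, so its packets are served ahead more often.
      print(wfq_order({4: [500, 500], 5: [500, 500]}, {4: 10, 5: 20}))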

  • PQ+DRR scheduling

    Similar to PQ+WRR, PQ+DRR scheduling combines the advantages of PQ scheduling and DRR scheduling and offsets their disadvantages. If only PQ scheduling is used, packets in lower-priority queues cannot obtain bandwidth for a long period of time. If only DRR scheduling is used, short-delay services such as voice cannot be scheduled first. PQ+DRR scheduling solves both problems.

    The eight queues on a device interface are divided into two groups: you can apply PQ scheduling to one group and DRR scheduling to the other.

    Figure 6-7 PQ+DRR scheduling

    As shown in Figure 6-7, the device first schedules traffic in queues 7, 6, and 5 in PQ mode. After traffic scheduling in queues 7, 6, and 5 is complete, the device schedules traffic in queues 4, 3, 2, 1, and 0 in DRR mode. Each of queues 4 to 0 has its own weight.

    Important protocol packets or short-delay service packets must be placed in queues using PQ scheduling so that they can be scheduled first. Other packets are placed in queues using DRR scheduling.
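
    A hypothetical Python sketch of this grouping (the group membership, quanta, and byte-string packets are assumptions chosen only for illustration): the PQ group is always served strictly first, and a DRR round over the remaining queues runs only when the PQ group is empty.

      PQ_GROUP  = (7, 6, 5)           # assumed strict-priority queues
      DRR_GROUP = (4, 3, 2, 1, 0)     # remaining queues share bandwidth by weight

      def pq_drr_schedule(queues, deficits, quanta):
          """Serve the PQ group strictly; only when it is empty, run one DRR round over the rest.
          queues maps a queue index to a deque of packets (byte strings)."""
          for q in PQ_GROUP:                        # strict priority within the PQ group
              if queues[q]:
                  return [queues[q].popleft()]
          sent = []
          for q in DRR_GROUP:                       # PQ group empty: one full DRR round
              deficits[q] += quanta[q]
              if deficits[q] > 0 and queues[q]:
                  packet = queues[q].popleft()
                  deficits[q] -= len(packet)        # charge the packet length against the deficit
                  sent.append(packet)
          return sent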

  • PQ+WFQ scheduling

    Similar to PQ+WRR, PQ+WFQ scheduling combines the advantages of PQ scheduling and WFQ scheduling and offsets their disadvantages. If only PQ scheduling is used, packets in lower-priority queues cannot obtain bandwidth for a long period of time. If only WFQ scheduling is used, short-delay services such as voice cannot be scheduled first. PQ+WFQ scheduling solves both problems.

    The eight queues on a device interface are divided into two groups: you can apply PQ scheduling to one group and WFQ scheduling to the other.

    Figure 6-8 PQ+WFQ scheduling

    As shown in Figure 6-8, the device first schedules traffic in queue 7, queue 6, and queue 5 in PQ mode. After traffic scheduling in queues 7, 6, and 5 is complete, the device schedules traffic in queues 4, 3, 2, 1, and 0 in WFQ mode. Queues 4, 3, 2, 1, and 0 have their own weights.

    Important protocol packets or short-delay service packets must be placed in queues using PQ scheduling so that they can be scheduled first. Other packets are placed in queues using WFQ scheduling.
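
    For the WFQ group, the share of bandwidth left over after the PQ queues are served can be estimated directly from the weights, as in this back-of-the-envelope Python sketch (the weights and leftover bandwidth are assumed values, not device defaults):

      weights  = {4: 30, 3: 25, 2: 20, 1: 15, 0: 10}   # assumed WFQ weights for queues 4..0
      leftover = 6000                                   # Mbit/s remaining after queues 7..5 (assumed)

      total = sum(weights.values())
      for q in sorted(weights, reverse=True):
          share = weights[q] / total
          print(f"queue {q}: {share:.0%} of leftover = {leftover * share:.0f} Mbit/s")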
