CLI-based Configuration Guide - QoS

AR100-S, AR110-S, AR120-S, AR150-S, AR160-S, AR200-S, AR1200-S, AR2200-S, and AR3200-S V200R009

Congestion Management

As network services proliferate and users demand higher network quality, limited bandwidth can no longer meet requirements, and congestion causes packet delay and loss. Congestion management is required when a network is congested intermittently and delay-sensitive services require higher QoS than delay-insensitive services. If congestion persists after congestion management is configured, the bandwidth must be increased. Congestion management queues and schedules packet flows before they are sent.

Based on queuing and scheduling policies, WAN-side interfaces and Layer 2 VE interfaces support Priority Queuing (PQ), Weighted Fair Queuing (WFQ), and PQ+WFQ scheduling. LAN-side interfaces on the device support PQ, Weighted Round Robin (WRR), Deficit Round Robin (DRR), PQ+WRR, and PQ+DRR scheduling.

On the device, each interface has four or eight queues in the outbound direction, identified by index numbers 0 to 3 or 0 to 7. Based on the mappings between local priorities and queues, the device places classified packets into queues and then schedules them using queue scheduling mechanisms. The following examples use eight queues per interface to describe each scheduling mode.
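As a minimal illustration of this classification step, assuming a direct 1:1 mapping from local priority to queue index (the actual mapping tables are device-specific):

```python
# Hypothetical sketch: placing classified packets into per-interface
# queues by local priority. The 1:1 priority-to-queue mapping is an
# assumption for illustration, not the device's real mapping table.
from collections import deque

NUM_QUEUES = 8  # the document also mentions 4-queue interfaces

def enqueue(queues, packet, local_priority):
    """Map a local priority (0-7) directly to queue index 0-7."""
    queues[local_priority].append(packet)

queues = [deque() for _ in range(NUM_QUEUES)]
enqueue(queues, "voip-pkt", 7)   # delay-sensitive traffic -> queue 7
enqueue(queues, "email-pkt", 1)  # non-core traffic -> queue 1
```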

  • PQ scheduling

    PQ scheduling is designed for core services and serves queues in descending order of priority. Queues with lower priorities are processed only after all queues with higher priorities are empty. In PQ scheduling, packets of core services are placed into higher-priority queues, and packets of non-core services such as email are placed into lower-priority queues. Core services are always processed first; non-core services are sent in the intervals when no core-service packets are waiting.

    As shown in Figure 4-3, queues 7 to 0 are in descending order of priority. Packets in queue 7 are processed first; the scheduler moves to queue 6 only after queue 7 is empty. Packets in queue 6 are sent at the link rate whenever queue 7 is empty, packets in queue 5 are sent at the link rate when queues 6 and 7 are both empty, and so on.

    PQ scheduling benefits short-delay services. Assume that data flow X is mapped to the highest-priority queue on each node. When packets of data flow X reach a node, they are processed first.

    The PQ scheduling mechanism, however, may starve packets in lower-priority queues. For example, if flows mapped to queue 7 arrive at 100% of the link rate for a period, the scheduler does not process any flows in queues 0 to 6.

    To prevent starvation of packets in lower-priority queues, upstream devices must accurately characterize the data flows so that traffic mapped to queue 7 does not exceed a certain percentage of the link capacity. Queue 7 then leaves the link idle part of the time, and the scheduler can process packets in the lower-priority queues.

    Figure 4-3  PQ scheduling
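The strict-priority behaviour described above can be sketched as follows; the packet labels and queue contents are illustrative assumptions, not device state.

```python
# Minimal sketch of strict PQ scheduling over 8 queues (queue 7 highest).
from collections import deque

def pq_dequeue(queues):
    """Always serve the highest-priority non-empty queue."""
    for idx in range(len(queues) - 1, -1, -1):  # queue 7 down to queue 0
        if queues[idx]:
            return queues[idx].popleft()
    return None  # all queues empty

queues = [deque() for _ in range(8)]
queues[7].extend(["core-1", "core-2"])  # core service packets
queues[1].append("email-1")             # non-core service packet

order = [pq_dequeue(queues) for _ in range(3)]
# queue 1 is served only after queue 7 drains completely
```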

  • WRR scheduling

    WRR scheduling is an extension of Round Robin (RR) scheduling: packets in each queue are scheduled in a polling manner based on the queue weight. RR scheduling is equivalent to WRR scheduling with all queue weights set to 1.

    Figure 4-4 shows WRR scheduling.

    Figure 4-4  WRR scheduling

    In WRR scheduling, the device polls the queues round by round based on the queue weights. After each round, the weight counter of every scheduled queue is decreased by 1, and a queue whose counter reaches 0 can no longer be scheduled. When the counters of all queues reach 0, the counters are reloaded and the next cycle starts. For example, the weights of the eight queues on an interface are set to 4, 2, 5, 3, 6, 4, 2, and 1. Table 4-1 lists the WRR scheduling results.

    Table 4-1  WRR scheduling results

    Queue Index    | Queue 7 | Queue 6 | Queue 5 | Queue 4 | Queue 3 | Queue 2 | Queue 1 | Queue 0
    Queue Weight   | 4       | 2       | 5       | 3       | 6       | 4       | 2       | 1
    Round 1        | Queue 7 | Queue 6 | Queue 5 | Queue 4 | Queue 3 | Queue 2 | Queue 1 | Queue 0
    Round 2        | Queue 7 | Queue 6 | Queue 5 | Queue 4 | Queue 3 | Queue 2 | Queue 1 | -
    Round 3        | Queue 7 | -       | Queue 5 | Queue 4 | Queue 3 | Queue 2 | -       | -
    Round 4        | Queue 7 | -       | Queue 5 | -       | Queue 3 | Queue 2 | -       | -
    Round 5        | -       | -       | Queue 5 | -       | Queue 3 | -       | -       | -
    Round 6        | -       | -       | -       | -       | Queue 3 | -       | -       | -
    Round 7        | Queue 7 | Queue 6 | Queue 5 | Queue 4 | Queue 3 | Queue 2 | Queue 1 | Queue 0
    Round 8        | Queue 7 | Queue 6 | Queue 5 | Queue 4 | Queue 3 | Queue 2 | Queue 1 | -
    Round 9        | Queue 7 | -       | Queue 5 | Queue 4 | Queue 3 | Queue 2 | -       | -
    Round 10       | Queue 7 | -       | Queue 5 | -       | Queue 3 | Queue 2 | -       | -
    Round 11       | -       | -       | Queue 5 | -       | Queue 3 | -       | -       | -
    Round 12       | -       | -       | -       | -       | Queue 3 | -       | -       | -

    Rounds 7 to 12 repeat rounds 1 to 6 because the weight counters of all queues are reloaded after round 6.

    The results show that the number of times each queue is scheduled corresponds to its weight: a higher weight means the queue is scheduled more often. Because WRR schedules in units of packets rather than bytes, no queue is guaranteed a fixed bandwidth; when queues are scheduled equally often, queues carrying large packets obtain more bandwidth than queues carrying small packets.

    WRR scheduling offsets the disadvantage of PQ scheduling, in which packets in lower-priority queues may go unprocessed for a long time. In addition, WRR adapts dynamically: if a queue is empty, WRR skips it and schedules the next queue, which keeps the link bandwidth utilized. WRR scheduling, however, cannot guarantee timely scheduling of short-delay services.
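The round-by-round bookkeeping in Table 4-1 can be sketched as follows; the reload-on-zero behaviour follows the description above, and all queues are assumed to stay backlogged.

```python
# Sketch of per-round WRR counters: each round serves every queue whose
# remaining weight counter is > 0, decrements it, and reloads all
# counters once every one of them reaches 0.
def wrr_rounds(weights, num_rounds):
    """Return, per round, which queues are served (True/False per queue)."""
    remaining = list(weights)
    rounds = []
    for _ in range(num_rounds):
        if all(w == 0 for w in remaining):      # cycle over: reload counters
            remaining = list(weights)
        served = [w > 0 for w in remaining]
        remaining = [w - 1 if w > 0 else 0 for w in remaining]
        rounds.append(served)
    return rounds

weights = [4, 2, 5, 3, 6, 4, 2, 1]  # queue 7 ... queue 0, as in Table 4-1
rounds = wrr_rounds(weights, 6)
# round 1 serves all eight queues; round 6 serves only queue 3
```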

  • DRR scheduling

    DRR is also based on RR and solves the fairness problem of WRR: because WRR schedules in units of packets, a queue carrying small packets obtains less bandwidth than a queue carrying large packets even with the same weight. DRR takes the packet length into account, ensuring that queues are allocated bandwidth fairly.

    The deficit indicates the bandwidth credit of each queue, with an initial value of 0. In each round, the system allocates bandwidth to each queue based on its weight and adds it to the deficit. If the resulting deficit of a queue is greater than 0, the queue participates in scheduling: the device sends a packet and subtracts the length of the sent packet from the deficit. If the deficit of a queue is 0 or smaller, the queue does not participate in this round, and the current deficit carries over as the basis for the next round.

    Figure 4-5  Queue weights

    In Figure 4-5, the weights of Q7, Q6, Q5, Q4, Q3, Q2, Q1, and Q0 are set to 40, 30, 20, 10, 40, 30, 20, and 10 respectively. During scheduling, Q7, Q6, Q5, Q4, Q3, Q2, Q1, and Q0 obtain 20%, 15%, 10%, 5%, 20%, 15%, 10%, and 5% of the bandwidth respectively. Q7 and Q6 are used as examples to describe DRR scheduling. Assume that Q7 is allocated 400 bytes and Q6 is allocated 300 bytes in each round.

    • First round of scheduling

      Deficit[7][1] = 0+400 = 400

      Deficit[6][1] = 0+300 = 300

      After a 900-byte packet in Q7 and a 400-byte packet in Q6 are sent, the values are as follows:

      Deficit[7][1] = 400 - 900 = -500

      Deficit[6][1] = 300 - 400 = -100

    • Second round of scheduling

      Deficit [7][2] = -500 + 400 = -100

      Deficit [6][2] = -100 + 300 = 200

      Packets in Q7 are not scheduled because the deficit of Q7 is negative. A 300-byte packet in Q6 is sent. The value is as follows:

      Deficit[6][2] = 200 - 300 = -100

    • Third round of scheduling

      Deficit[7][3] = -100+400 = 300

      Deficit[6][3] = -100+300 = 200

      After a 600-byte packet in Q7 and a 500-byte packet in Q6 are sent, the values are as follows:

      Deficit[7][3] = 300 - 600 = -300

      Deficit[6][3] = 200 - 500 = -300

      This process repeats, and Q7 and Q6 eventually obtain 20% and 15% of the bandwidth respectively. This illustrates that you can obtain the required bandwidth by setting the weights.

    In DRR scheduling, short-delay services still cannot be scheduled in time.
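The Q7/Q6 deficit arithmetic above can be reproduced with a short sketch. Note that, as in the example, a queue with a positive deficit sends its head packet even when the packet is longer than the deficit, letting the deficit go negative.

```python
# Sketch of the deficit bookkeeping from the Q7/Q6 example: each round
# adds the per-round quantum, and a queue sends its head packet only
# while its deficit is positive. Packet sizes (bytes) and quanta match
# the worked example in the text.
from collections import deque

def drr_round(queue, deficit, quantum):
    """One scheduling round for one queue; returns the updated deficit."""
    deficit += quantum
    if deficit > 0 and queue:
        deficit -= queue.popleft()  # send head packet, charge its length
    return deficit

q7, quantum7, d7 = deque([900, 600]), 400, 0
q6, quantum6, d6 = deque([400, 300, 500]), 300, 0
for _ in range(3):
    d7 = drr_round(q7, d7, quantum7)
    d6 = drr_round(q6, d6, quantum6)
# after the third round both deficits are -300, as in the text
```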

  • WFQ scheduling

    Fair Queuing (FQ) allocates network resources equally so that the delay and jitter of all flows are minimized:
    • Packets in different queues are scheduled fairly, so the delays of all flows differ only slightly.
    • Packets of different sizes are scheduled fairly. When many large and small packets in different queues are waiting, small packets are scheduled first, reducing the overall jitter of each flow.

    Compared with FQ, WFQ schedules packets based on priorities. WFQ schedules packets with higher priorities before packets with lower priorities.

    Before packets enter queues, WFQ classifies the packets based on:
    • Session information

      WFQ classifies flows based on session information, including the protocol type, source and destination TCP or User Datagram Protocol (UDP) port numbers, source and destination IP addresses, and the precedence bits in the ToS field. The system provides a large number of queues and distributes flows evenly into them to smooth out delay. When flows leave the queues, WFQ allocates outbound-interface bandwidth to each flow based on its precedence: flows with the lowest priorities obtain the least bandwidth. In class-based queueing (CBQ), only packets matching the default traffic classifier can be classified based on session information.

    • Priority

      The priority mapping technique marks local priorities on traffic, and each local priority maps to a queue number. Each interface provides eight queues into which packets are placed. By default, all queue weights are equal and flows share the interface bandwidth equally. Users can change the weights so that high-priority and low-priority packets are allocated bandwidth according to the weight percentages.

    Figure 4-6  WFQ scheduling
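For the priority-based mode, the weight-to-bandwidth relationship can be sketched as follows; the weights reuse the Figure 4-5 values, and the 100 Mbit/s link rate is an illustrative assumption.

```python
# Sketch of weight-proportional bandwidth allocation in priority-based
# WFQ: each of the eight queues receives interface bandwidth in
# proportion to its configured weight.
def wfq_shares(weights, link_rate):
    """Return each queue's bandwidth share for a given link rate."""
    total = sum(weights)
    return [link_rate * w / total for w in weights]

weights = [40, 30, 20, 10, 40, 30, 20, 10]   # queue 7 ... queue 0
shares = wfq_shares(weights, 100)            # Mbit/s, assumed link rate
# queue 7 gets 20 Mbit/s (20% of the link), queue 4 gets 5 Mbit/s
```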

  • PQ+WRR scheduling

    PQ scheduling and WRR scheduling each have advantages and disadvantages. PQ+WRR scheduling offsets the disadvantages of both: queues with lower priorities can obtain bandwidth through WRR scheduling, while short-delay services are scheduled first through PQ scheduling.

    On the device, you can set WRR parameters for queues. The eight queues on each interface are divided into two groups: queues 7, 6, and 5 are scheduled in PQ mode, and queues 4, 3, 2, 1, and 0 are scheduled in WRR mode. Only LAN-side interfaces on the device support PQ+WRR scheduling. Figure 4-7 shows PQ+WRR scheduling.

    Figure 4-7  PQ+WRR scheduling

    During scheduling, the device first serves queues 7, 6, and 5 in PQ mode. Only after traffic in these queues is fully scheduled does the device serve the remaining queues in WRR mode, with queues 4, 3, 2, 1, and 0 weighted individually. Important protocol packets and short-delay service packets should be placed in the PQ queues so that they are scheduled first; other packets are placed in the WRR queues.
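A minimal sketch of this two-group scheduling, with made-up queue contents and weights (the weight values are illustrative assumptions):

```python
# Sketch of PQ+WRR: queues 7-5 are strict-priority; queues 4-0 are
# served by WRR counters only when the PQ group is empty.
from collections import deque

def pq_wrr_dequeue(queues, wrr_counters, wrr_weights):
    """queues: dict queue index -> deque; counters/weights cover 4..0."""
    for idx in (7, 6, 5):                 # PQ group, strict priority
        if queues[idx]:
            return queues[idx].popleft()
    if all(c == 0 for c in wrr_counters.values()):
        wrr_counters.update(wrr_weights)  # reload a new WRR cycle
    for idx in (4, 3, 2, 1, 0):           # WRR group
        if wrr_counters[idx] > 0 and queues[idx]:
            wrr_counters[idx] -= 1
            return queues[idx].popleft()
    return None

queues = {i: deque() for i in range(8)}
queues[7].append("voice")                 # short-delay service -> PQ group
queues[4].append("data-a")                # other traffic -> WRR group
queues[0].append("data-b")
weights = {4: 2, 3: 1, 2: 1, 1: 1, 0: 1}  # assumed WRR weights
counters = {i: 0 for i in weights}
out = [pq_wrr_dequeue(queues, counters, weights) for _ in range(3)]
# "voice" leaves first; the WRR group is served only afterwards
```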

  • PQ+DRR scheduling

    NOTE:
    LAN interfaces support PQ+DRR scheduling.

    Similar to PQ+WRR, PQ+DRR scheduling offsets disadvantages of PQ scheduling and DRR scheduling. If only PQ scheduling is used, packets in queues with lower priorities cannot obtain bandwidth for a long period of time. If only DRR scheduling is used, short-delay services such as voice services cannot be scheduled first. PQ+DRR scheduling has advantages of both PQ and DRR scheduling and offsets their disadvantages.

    Eight queues on the device interface are classified into two groups. You can specify PQ scheduling for certain groups and DRR scheduling for other groups.

    Figure 4-8  PQ+DRR scheduling

    As shown in Figure 4-8, the device first schedules traffic in queues 7, 6, and 5 in PQ mode. After traffic scheduling in queues 7, 6, and 5 is complete, the device schedules traffic in queues 4, 3, 2, 1, and 0 in DRR mode. Queues 4, 3, 2, 1, and 0 have their own weights.

    Important protocol packets or short-delay service packets must be placed in queues using PQ scheduling so that they can be scheduled first. Other packets are placed in queues using DRR scheduling.

  • PQ+WFQ scheduling

    Similar to PQ+WRR, PQ+WFQ scheduling has advantages of PQ scheduling and WFQ scheduling and offsets their disadvantages. If only PQ scheduling is used, packets in queues with lower priorities cannot obtain bandwidth for a long period of time. If only WFQ scheduling is used, short-delay services such as voice services cannot be scheduled first. To solve the problem, configure PQ+WFQ scheduling.

    Eight queues on the device interface are classified into two groups. You can specify PQ scheduling for certain groups and WFQ scheduling for other groups.

    WAN-side interfaces and layer 2 VE interfaces support PQ+WFQ scheduling.
    Figure 4-9  PQ+WFQ scheduling

    As shown in Figure 4-9, the device first schedules traffic in queue 7, queue 6, and queue 5 in PQ mode. After traffic scheduling in queues 7, 6, and 5 is complete, the device schedules traffic in queues 4, 3, 2, 1, and 0 in WFQ mode. Queues 4, 3, 2, 1, and 0 have their own weights.

    Important protocol packets or short-delay service packets must be placed in queues using PQ scheduling so that they can be scheduled first. Other packets are placed in queues using WFQ scheduling.

  • CBQ scheduling

    Class-based queueing (CBQ) is an extension of WFQ and matches packets with traffic classifiers. CBQ classifies packets based on the IP precedence or DSCP priority, inbound interface, or 5-tuple (protocol type, source IP address and mask, destination IP address and mask, source port range, and destination port range). Then CBQ puts packets into different queues. If packets do not match any configured traffic classifiers, CBQ matches packets with the default traffic classifier.

    Figure 4-10  CBQ scheduling

    As shown in Figure 4-10, CBQ provides the following types of queues:
    • Expedited Forwarding (EF) queues are applied to short-delay services.

    • Assured Forwarding (AF) queues are applied to key data services that require assured bandwidth.

    • Best-Effort (BE) queues are applied to best-effort services that require no strict QoS assurance.

    • EF queue

      An EF queue has the highest priority. You can put one or more types of packets into EF queues and set different bandwidth for different types of packets.

      During packet scheduling, packets in EF queues are sent first. When congestion occurs, packets in EF queues are still sent first, but at no more than the configured rate limit, so that packets in AF and BE queues can still be scheduled. When there is no congestion, EF queues can use bandwidth left idle by AF and BE queues; they may borrow this available bandwidth but cannot seize additional bandwidth, which protects the bandwidth available to other services.

      In addition to common EF queues, the device provides a special low-latency EF queue, the LLQ. LLQ queues provide a lower delay than other queues and offer good QoS assurance for delay-sensitive services such as VoIP.

    • AF queue

      Each AF queue corresponds to one type of packets. You can set bandwidth for each type of packets. During scheduling, the system sends packets based on the configured bandwidth. AF implements fair scheduling. If an interface has remaining bandwidth, packets in AF queues obtain the remaining bandwidth based on weights.

      If the length of an AF queue reaches the maximum value, the tail drop method is used by default. You can choose to use WRED.
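How the remaining interface bandwidth could be split among AF queues in proportion to their weights can be sketched as follows; the configured rates and weights are illustrative assumptions, not device defaults.

```python
# Sketch of AF bandwidth sharing: each queue gets its configured rate
# plus a weight-proportional share of whatever the interface has left.
def af_shares(configured, weights, link_rate):
    """Return each AF queue's effective rate on an otherwise idle link."""
    remaining = link_rate - sum(configured)
    total_w = sum(weights)
    return [c + remaining * w / total_w
            for c, w in zip(configured, weights)]

# Three AF queues on an assumed 100 Mbit/s link, 30 Mbit/s left over
rates = af_shares([30, 20, 20], [2, 1, 1], 100)
# -> [45.0, 27.5, 27.5] Mbit/s
```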

    • BE queue

      If packets do not match any configured traffic classifier, they match the default traffic classifier defined by the system. You can configure AF queues and bandwidth for the default traffic classifier, but in most situations BE queues are used. BE queues use WFQ scheduling, so the system schedules packets matching the default traffic classifier on a per-flow basis.

      If the length of a BE queue reaches the maximum value, the tail drop method is used by default. You can choose to use WRED.

Updated: 2019-05-17

Document ID: EDOC1000174115