Feature Description - QoS 01

NE05E and NE08E V300R003C10SPC500

Queues and Congestion Management

Congestion management defines a policy that determines the order in which packets are forwarded and the principles for dropping packets. It is implemented using queuing technology.

Queuing technology orders packets in the buffer. When the packet rate exceeds the interface bandwidth or the bandwidth allocated to the queue, packets are buffered in queues and wait to be forwarded. The queue scheduling algorithm determines the order in which packets leave a queue and the relationships between queues.

NOTE:

The Traffic Manager (TM) on the forwarding plane houses high-speed buffers, for which all interfaces compete. To prevent traffic interruptions caused by an interface failing to obtain buffer space for a long time, the system allocates a small dedicated buffer to each interface and ensures that each queue on each interface can use it.

When traffic is not congested, the TM puts received packets into the buffer and forwards them in time. In this case, packets stay in the buffer for only microseconds, and the delay can be ignored.

When traffic is congested, packets accumulate in the buffer and wait to be forwarded, and the delay increases greatly. The delay is determined by the buffer size of a queue and the output bandwidth allocated to the queue. The formula is as follows:

Delay of a queue = Buffer size for the queue/Output bandwidth for the queue
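
For illustration, the following Python sketch applies this formula; the buffer size and bandwidth used below are assumed example values, not device defaults.

```
def queue_delay_seconds(buffer_bytes, output_bandwidth_bps):
    # Delay of a queue = buffer size for the queue / output bandwidth for the queue
    return buffer_bytes * 8 / output_bandwidth_bps

# Example: a 125 KB buffer drained at 100 Mbit/s adds about 10 ms of delay.
print(queue_delay_seconds(125_000, 100_000_000))  # 0.01 (seconds)
```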

Each interface on an NE maintains eight downstream queues, which are called class queues (CQs) or port queues. The eight queues are BE, AF1, AF2, AF3, AF4, EF, CS6, and CS7.

The first in first out (FIFO) mechanism is used to transfer packets in a queue. Resources used to forward packets are allocated based on the arrival order of packets.

Figure 7-4 Entering and leaving a queue

Scheduling Algorithms

The commonly used scheduling algorithms are as follows:

  • First In First Out (FIFO)
  • Strict Priority (SP)
  • Weighted Fair Queuing (WFQ)

FIFO

FIFO does not require traffic classification. As shown in Figure 7-4, FIFO allows packets that arrive earlier to enter the queue first. At the exit of the queue, packets leave in the same order in which they entered.
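
As a minimal illustration of this behavior (the packet names are hypothetical), a FIFO queue can be modeled in Python as follows:

```
from collections import deque

fifo = deque()
for packet in ["p1", "p2", "p3"]:   # arrival order
    fifo.append(packet)             # packets that arrive earlier enter the queue first
while fifo:
    print(fifo.popleft())           # packets leave in the same order: p1, p2, p3
```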

SP

SP schedules packets strictly based on queue priorities. Packets in queues with a low priority can be scheduled only after all packets in queues with a high priority have been scheduled.

As shown in Figure 7-5, three queues with high, medium, and low priorities are configured with SP scheduling. The numbers indicate the order in which packets arrive.

Figure 7-5 SP scheduling

When packets leave the queues, the device forwards them in descending order of priority. Packets in a higher-priority queue are forwarded preferentially. If packets enter a higher-priority queue while a lower-priority queue is being scheduled, the packets in the higher-priority queue are still scheduled first. This ensures that packets in higher-priority queues are always forwarded preferentially: as long as a higher-priority queue contains packets, no lower-priority queue is served.

The disadvantage of SP is that the packets in lower-priority queues are not processed until all the higher-priority queues are empty. As a result, a congested higher-priority queue causes all lower-priority queues to starve.
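
The following Python sketch illustrates SP behavior as described above: queues are visited from highest to lowest priority, and a lower-priority queue is served only when every higher-priority queue is empty. The queue contents are hypothetical.

```
from collections import deque

def sp_dequeue(queues):
    # Queues are ordered from highest to lowest priority.
    for queue in queues:
        if queue:
            return queue.popleft()
    return None  # all queues are empty

high, medium, low = deque(["h1"]), deque(["m1", "m2"]), deque(["l1"])
served = []
packet = sp_dequeue([high, medium, low])
while packet is not None:
    served.append(packet)
    packet = sp_dequeue([high, medium, low])
print(served)  # ['h1', 'm1', 'm2', 'l1']
```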

WFQ

WFQ allocates bandwidth to flows based on their weights. In addition, to allocate bandwidth fairly among flows, WFQ schedules packets bit by bit. Figure 7-6 shows how bit-by-bit scheduling works.

Figure 7-6 Bit-by-bit scheduling

The bit-by-bit scheduling mode shown in Figure 7-6 allows the device to allocate bandwidth to flows based on their weights. This prevents long packets from preempting the bandwidth of short packets and reduces the delay and jitter when both short and long packets wait to be forwarded.

Bit-by-bit scheduling, however, is an idealized model. An NE performs WFQ scheduling at a certain granularity, such as 256 bytes or 1 KB, and different boards support different granularities.
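
As a rough illustration of weight-based scheduling performed at a byte granularity rather than bit by bit, the following Python sketch uses a generic deficit-round-robin style loop; this is not the NE's actual algorithm, and the 256-byte quantum and packet sizes are assumed values.

```
from collections import deque

def weighted_dequeue(queues, weights, quantum=256):
    # Each round, a queue earns credit proportional to its weight and sends
    # packets (name, size in bytes) while it has enough credit.
    credits = [0] * len(queues)
    sent = []
    while any(queues):
        for i, queue in enumerate(queues):
            if not queue:
                continue
            credits[i] += weights[i] * quantum
            while queue and queue[0][1] <= credits[i]:
                name, size = queue.popleft()
                credits[i] -= size
                sent.append(name)
            if not queue:
                credits[i] = 0  # drop unused credit once the queue empties
    return sent

flow_a = deque([("a1", 500), ("a2", 500)])
flow_b = deque([("b1", 500), ("b2", 500)])
print(weighted_dequeue([flow_a, flow_b], weights=[2, 1]))
```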

Advantages of WFQ:

  • Different queues obtain scheduling opportunities fairly, balancing the delays of flows.
  • Short and long packets obtain scheduling opportunities fairly. If both short and long packets wait in queues to be forwarded, short packets are scheduled preferentially, reducing the jitter of flows.
  • The lower the weight of a flow, the less bandwidth the flow obtains.

Port Queue Scheduling

You can configure SP scheduling or weight-based scheduling for the eight queues on each interface of an NE. Based on the scheduling algorithm, the eight queues can be classified into three groups: priority queuing (PQ) queues, WFQ queues, and low priority queuing (LPQ) queues.

  • PQ queue

    SP scheduling applies to PQ queues. Packets in high-priority queues are scheduled preferentially. Therefore, services that are sensitive to delays (such as VoIP) can be configured with high priorities.

    In PQ queues, however, if the bandwidth of high-priority packets is not restricted, low-priority packets cannot obtain bandwidth and starve.

    Configuring eight queues on an interface to be PQ queues is allowed but not recommended. Generally, services that are sensitive to delays are put into PQ queues.

  • WFQ queue

    Weight-based scheduling, such as WRR, DWRR, and WFQ, applies to WFQ queues.

  • LPQ queue

    LPQ queues are implemented on high-speed interfaces (such as Ethernet interfaces).

    In actual applications, best effort (BE) flows can be put into an LPQ queue. When the network is overloaded, BE flows can be limited so that other services are processed preferentially.

PQ, WFQ, and LPQ scheduling can be used separately or jointly for the eight queues on an interface.

Scheduling Order

SP scheduling is implemented between PQ, WFQ, and LPQ queues. PQ queues are scheduled preferentially, and then WFQ queues and LPQ queues are scheduled in sequence, as shown in Figure 7-7. Figure 7-8 shows the detailed process.

Figure 7-7 Port queue scheduling order

Figure 7-8 Port queue scheduling process

  • Packets in PQ queues are preferentially scheduled, and packets in WFQ queues are scheduled only when no packets are buffered in PQ queues.
  • When all PQ queues are empty, WFQ queues start to be scheduled. If packets are added to PQ queues afterward, packets in PQ queues are still scheduled preferentially.
  • Packets in LPQ queues start to be scheduled only after all PQ and WFQ queues are empty.

Bandwidth is preferentially allocated to PQ queues to guarantee the peak information rate (PIR) of packets in PQ queues. The remaining bandwidth is allocated to WFQ queues based on their weights. If this bandwidth is not fully used, the remainder is allocated, again by weight, to the WFQ queues that have not yet obtained their PIRs, until the PIRs of all WFQ queues are guaranteed. Any bandwidth that still remains is allocated to LPQ queues.
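
The following Python sketch is one possible interpretation of this allocation order, written to match the worked examples below: PQ queues are served first, then the remainder is shared among WFQ queues by weight with leftovers redistributed up to each queue's PIR, and finally LPQ queues receive what is left. It is not the device's actual implementation. All rates are in Mbit/s, and a PIR of None means that no PIR is configured.

```
def allocate(shaping_rate, queues):
    # queues: list of (name, type, weight, input_bandwidth, pir); all rates in Mbit/s.
    def demand(rate_in, pir):
        # A queue can never use more than its input bandwidth or its PIR (if configured).
        return rate_in if pir is None else min(rate_in, pir)

    output = {name: 0.0 for name, *_ in queues}
    remaining = shaping_rate

    # 1. PQ queues are served first, in order, up to min(input bandwidth, PIR).
    for name, qtype, _weight, rate_in, pir in queues:
        if qtype == "PQ":
            grant = min(remaining, demand(rate_in, pir))
            output[name] = grant
            remaining -= grant

    # 2. WFQ queues share the remainder by weight; any leftover is redistributed
    #    among the queues that have not yet obtained min(input bandwidth, PIR).
    wfq = [q for q in queues if q[1] == "WFQ"]
    while remaining > 1e-9:
        unsatisfied = [q for q in wfq if output[q[0]] < demand(q[3], q[4])]
        if not unsatisfied:
            break
        weight_sum = sum(q[2] for q in unsatisfied)
        leftover = 0.0
        for name, _qtype, weight, rate_in, pir in unsatisfied:
            share = remaining * weight / weight_sum
            grant = min(share, demand(rate_in, pir) - output[name])
            output[name] += grant
            leftover += share - grant
        remaining = leftover

    # 3. Whatever is still left goes to LPQ queues.
    for name, qtype, _weight, rate_in, pir in queues:
        if qtype == "LPQ":
            grant = min(remaining, demand(rate_in, pir))
            output[name] = grant
            remaining -= grant
    return output
```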

Bandwidth Allocation Example 1

In this example, the traffic shaping rate is set to 100 Mbit/s on an interface (by default, the traffic shaping rate is the interface bandwidth). The input bandwidth and PIR of each service are configured as follows.

Service Class | Queue          | Input Bandwidth (Mbit/s) | PIR (Mbit/s)
CS7           | PQ             | 65                       | 55
CS6           | PQ             | 30                       | 30
EF            | WFQ (weight 5) | 10                       | 5
AF4           | WFQ (weight 4) | 10                       | 10
AF3           | WFQ (weight 3) | 10                       | 15
AF2           | WFQ (weight 2) | 20                       | 25
AF1           | WFQ (weight 1) | 20                       | 20
BE            | LPQ            | 100                      | Not configured

The bandwidth is allocated as follows:

  • PQ scheduling is performed first. The 100 Mbit/s bandwidth is allocated to the CS7 queue first. The output bandwidth of CS7 equals the minimum of the traffic shaping rate (100 Mbit/s), the input bandwidth of CS7 (65 Mbit/s), and the PIR of CS7 (55 Mbit/s), that is, 55 Mbit/s. The remaining 45 Mbit/s is then allocated to the CS6 queue. The output bandwidth of CS6 equals the minimum of the remaining bandwidth (45 Mbit/s), the input bandwidth of CS6 (30 Mbit/s), and the PIR of CS6 (30 Mbit/s), that is, 30 Mbit/s. After PQ scheduling, the remaining bandwidth is 15 Mbit/s (100 Mbit/s - 55 Mbit/s - 30 Mbit/s).
  • Then the first round of WFQ scheduling starts. The remaining bandwidth after PQ scheduling is allocated to WFQ queues. The bandwidth allocated to a WFQ queue is calculated using this formula: Bandwidth allocated to a WFQ queue = Remaining bandwidth x Weight of the queue/Sum of weights = 15 Mbit/s x Weight/15.
    • Bandwidth allocated to the EF queue = 15 Mbit/s x 5/15 = 5 Mbit/s = PIR. The bandwidth allocated to the EF queue is fully used.
    • Bandwidth allocated to the AF4 queue = 15 Mbit/s x 4/15 = 4 Mbit/s < PIR. The bandwidth allocated to the AF4 queue is exhausted.
    • Bandwidth allocated to the AF3 queue = 15 Mbit/s x 3/15 = 3 Mbit/s < PIR. The bandwidth allocated to the AF3 queue is exhausted.
    • Bandwidth allocated to the AF2 queue = 15 Mbit/s x 2/15 = 2 Mbit/s < PIR. The bandwidth allocated to the AF2 queue is exhausted.
    • Bandwidth allocated to the AF1 queue = 15 Mbit/s x 1/15 = 1 Mbit/s < PIR. The bandwidth allocated to the AF1 queue is exhausted.
  • The bandwidth is exhausted, and BE packets are not scheduled. The output BE bandwidth is 0.

The output bandwidth of each queue is as follows:

Service Class | Queue          | Input Bandwidth (Mbit/s) | PIR (Mbit/s)   | Output Bandwidth (Mbit/s)
CS7           | PQ             | 65                       | 55             | 55
CS6           | PQ             | 30                       | 30             | 30
EF            | WFQ (weight 5) | 10                       | 5              | 5
AF4           | WFQ (weight 4) | 10                       | 10             | 4
AF3           | WFQ (weight 3) | 10                       | 15             | 3
AF2           | WFQ (weight 2) | 20                       | 25             | 2
AF1           | WFQ (weight 1) | 20                       | 20             | 1
BE            | LPQ            | 100                      | Not configured | 0
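
For reference, feeding the parameters of Example 1 into the hedged allocate() sketch from the Scheduling Order section reproduces this table (values in Mbit/s):

```
example1 = [
    ("CS7", "PQ", None, 65, 55), ("CS6", "PQ", None, 30, 30),
    ("EF",  "WFQ", 5, 10, 5),    ("AF4", "WFQ", 4, 10, 10),
    ("AF3", "WFQ", 3, 10, 15),   ("AF2", "WFQ", 2, 20, 25),
    ("AF1", "WFQ", 1, 20, 20),   ("BE",  "LPQ", None, 100, None),
]
print(allocate(100, example1))
# -> CS7 55, CS6 30, EF 5, AF4 4, AF3 3, AF2 2, AF1 1, BE 0
```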

Bandwidth Allocation Example 2

In this example, the traffic shaping rate is set to 100 Mbit/s on an interface. The input bandwidth and PIR of each service are configured as follows.

Service Class | Queue          | Input Bandwidth (Mbit/s) | PIR (Mbit/s)
CS7           | PQ             | 15                       | 25
CS6           | PQ             | 30                       | 10
EF            | WFQ (weight 5) | 90                       | 100
AF4           | WFQ (weight 4) | 10                       | 10
AF3           | WFQ (weight 3) | 10                       | 15
AF2           | WFQ (weight 2) | 20                       | 25
AF1           | WFQ (weight 1) | 20                       | 20
BE            | LPQ            | 100                      | Not configured

The bandwidth is allocated as follows:

  • Packets in the PQ queues are scheduled preferentially to guarantee their PIRs. The CS7 queue obtains 15 Mbit/s (its input bandwidth, which is lower than its PIR), and the CS6 queue obtains 10 Mbit/s (its PIR). After PQ scheduling, the remaining bandwidth is 75 Mbit/s (100 Mbit/s - 15 Mbit/s - 10 Mbit/s).
  • Then the first round of WFQ scheduling starts. The remaining bandwidth after PQ scheduling is allocated to WFQ queues. The bandwidth allocated to a WFQ queue is calculated using this formula: Bandwidth allocated to a WFQ queue = Remaining bandwidth x Weight of the queue/Sum of weights = 75 Mbit/s x Weight/15.
    • Bandwidth allocated to the EF queue = 75 Mbit/s x 5/15 = 25 Mbit/s < PIR. The bandwidth allocated to the EF queue is fully used.
    • Bandwidth allocated to the AF4 queue = 75 Mbit/s x 4/15 = 20 Mbit/s > PIR. The AF4 queue actually obtains the bandwidth 10 Mbit/s (PIR). The remaining bandwidth is 10 Mbit/s.
    • Bandwidth allocated to the AF3 queue = 75 Mbit/s x 3/15 = 15 Mbit/s = PIR. The AF3 queue actually obtains only 10 Mbit/s because its input bandwidth is 10 Mbit/s. The remaining bandwidth is 5 Mbit/s.
    • Bandwidth allocated to the AF2 queue = 75 Mbit/s x 2/15 = 10 Mbit/s < PIR. The bandwidth allocated to the AF2 queue is exhausted.
    • Bandwidth allocated to the AF1 queue = 75 Mbit/s x 1/15 = 5 Mbit/s < PIR. The bandwidth allocated to the AF1 queue is exhausted.
  • The remaining bandwidth is 15 Mbit/s (10 Mbit/s + 5 Mbit/s left over from the AF4 and AF3 queues), which is allocated, based on the weights, to the WFQ queues that have not yet obtained all the bandwidth they can use (EF, AF2, and AF1, whose weights sum to 8).
    • Bandwidth allocated to the EF queue = 15 Mbit/s x 5/8 = 9.375 Mbit/s. The sum of bandwidths allocated to the EF queue is 34.375 Mbit/s, which is also lower than the PIR. Therefore, the bandwidth allocated to the EF queue is exhausted.
    • Bandwidth allocated to the AF2 queue = 15 Mbit/s x 2/8 = 3.75 Mbit/s. The sum of bandwidths allocated to the AF2 queue is 13.75 Mbit/s, which is also lower than the PIR. Therefore, the bandwidth allocated to the AF2 queue is exhausted.
    • Bandwidth allocated to the AF1 queue = 15 Mbit/s x 1/8 = 1.875 Mbit/s. The sum of bandwidths allocated to the AF1 queue is 6.875 Mbit/s, which is also lower than the PIR. Therefore, the bandwidth allocated to the AF1 queue is exhausted.
  • The bandwidth is exhausted, and the BE queue is not scheduled. The output BE bandwidth is 0.

The output bandwidth of each queue is as follows:

Service Class | Queue          | Input Bandwidth (Mbit/s) | PIR (Mbit/s)   | Output Bandwidth (Mbit/s)
CS7           | PQ             | 15                       | 25             | 15
CS6           | PQ             | 30                       | 10             | 10
EF            | WFQ (weight 5) | 90                       | 100            | 34.375
AF4           | WFQ (weight 4) | 10                       | 10             | 10
AF3           | WFQ (weight 3) | 10                       | 15             | 10
AF2           | WFQ (weight 2) | 20                       | 25             | 13.75
AF1           | WFQ (weight 1) | 20                       | 20             | 6.875
BE            | LPQ            | 100                      | Not configured | 0
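
Likewise, the parameters of Example 2 in the same hedged allocate() sketch reproduce the fractional results above (values in Mbit/s):

```
example2 = [
    ("CS7", "PQ", None, 15, 25), ("CS6", "PQ", None, 30, 10),
    ("EF",  "WFQ", 5, 90, 100),  ("AF4", "WFQ", 4, 10, 10),
    ("AF3", "WFQ", 3, 10, 15),   ("AF2", "WFQ", 2, 20, 25),
    ("AF1", "WFQ", 1, 20, 20),   ("BE",  "LPQ", None, 100, None),
]
print(allocate(100, example2))
# -> CS7 15, CS6 10, EF 34.375, AF4 10, AF3 10, AF2 13.75, AF1 6.875, BE 0
```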

Bandwidth Allocation Example 3

In this example, the traffic shaping rate is set to 100 Mbit/s on an interface. The input bandwidth and PIR of each service are configured as follows.

Service Class | Queue          | Input Bandwidth (Mbit/s) | PIR (Mbit/s)
CS7           | PQ             | 15                       | 25
CS6           | PQ             | 30                       | 10
EF            | WFQ (weight 5) | 90                       | 10
AF4           | WFQ (weight 4) | 10                       | 10
AF3           | WFQ (weight 3) | 10                       | 15
AF2           | WFQ (weight 2) | 20                       | 10
AF1           | WFQ (weight 1) | 20                       | 10
BE            | LPQ            | 100                      | Not configured

The bandwidth is allocated as follows:

  • Packets in the PQ queues are scheduled preferentially to guarantee their PIRs. As in Example 2, the CS7 queue obtains 15 Mbit/s and the CS6 queue obtains 10 Mbit/s. After PQ scheduling, the remaining bandwidth is 75 Mbit/s (100 Mbit/s - 15 Mbit/s - 10 Mbit/s).
  • Then the first round of WFQ scheduling starts. The remaining bandwidth after PQ scheduling is allocated to WFQ queues. The bandwidth allocated to a WFQ queue is calculated using this formula: Bandwidth allocated to a WFQ queue = Remaining bandwidth x Weight of the queue/Sum of weights = 75 Mbit/s x Weight/15.
    • Bandwidth allocated to the EF queue = 75 Mbit/s x 5/15 = 25 Mbit/s > PIR. The EF queue actually obtains the bandwidth 10 Mbit/s (PIR). The remaining bandwidth is 15 Mbit/s.
    • Bandwidth allocated to the AF4 queue = 75 Mbit/s x 4/15 = 20 Mbit/s > PIR. The AF4 queue actually obtains the bandwidth 10 Mbit/s (PIR). The remaining bandwidth is 10 Mbit/s.
    • Bandwidth allocated to the AF3 queue = 75 Mbit/s x 3/15 = 15 Mbit/s = PIR. The AF3 queue actually obtains only 10 Mbit/s because its input bandwidth is 10 Mbit/s. The remaining bandwidth is 5 Mbit/s.
    • Bandwidth allocated to the AF2 queue = 75 Mbit/s x 2/15 = 10 Mbit/s = PIR. The bandwidth allocated to the AF2 queue is exhausted.
    • Bandwidth allocated to the AF1 queue = 75 Mbit/s x 1/15 = 5 Mbit/s < PIR. The bandwidth allocated to the AF1 queue is exhausted.
  • The remaining bandwidth is 30 Mbit/s (15 Mbit/s + 10 Mbit/s + 5 Mbit/s left over from the EF, AF4, and AF3 queues). The AF1 queue is the only WFQ queue whose obtained bandwidth is still below its PIR, so it is allocated an additional 5 Mbit/s, bringing its total to 10 Mbit/s (its PIR).
  • The remaining bandwidth is 25 Mbit/s, which is allocated to the BE queue.

The output bandwidth of each queue is as follows:

Service Class | Queue          | Input Bandwidth (Mbit/s) | PIR (Mbit/s)   | Output Bandwidth (Mbit/s)
CS7           | PQ             | 15                       | 25             | 15
CS6           | PQ             | 30                       | 10             | 10
EF            | WFQ (weight 5) | 90                       | 10             | 10
AF4           | WFQ (weight 4) | 10                       | 10             | 10
AF3           | WFQ (weight 3) | 10                       | 15             | 10
AF2           | WFQ (weight 2) | 20                       | 10             | 10
AF1           | WFQ (weight 1) | 20                       | 10             | 10
BE            | LPQ            | 100                      | Not configured | 25
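
Finally, the parameters of Example 3 in the same hedged allocate() sketch reproduce this table, including the 25 Mbit/s that falls through to the BE (LPQ) queue (values in Mbit/s):

```
example3 = [
    ("CS7", "PQ", None, 15, 25), ("CS6", "PQ", None, 30, 10),
    ("EF",  "WFQ", 5, 90, 10),   ("AF4", "WFQ", 4, 10, 10),
    ("AF3", "WFQ", 3, 10, 15),   ("AF2", "WFQ", 2, 20, 10),
    ("AF1", "WFQ", 1, 20, 10),   ("BE",  "LPQ", None, 100, None),
]
print(allocate(100, example3))
# -> CS7 15, CS6 10, EF 10, AF4 10, AF3 10, AF2 10, AF1 10, BE 25
```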