Principles

This section describes the principles of congestion management and congestion avoidance.

Congestion Avoidance

Congestion avoidance is a mechanism used to control service flows. A system configured with congestion avoidance monitors the usage of network resources such as queues and memory buffers. When congestion occurs or worsens, the system starts to discard packets.

Congestion avoidance uses tail drop and WRED to discard packets.

  • Traditional tail drop policy

    The traditional packet drop policy uses the tail drop method. When the length of a queue reaches its maximum value, all subsequently arriving packets are discarded at the tail of the queue.

    This drop policy may cause global TCP synchronization: when packets from multiple TCP connections are discarded at the same time, those connections enter congestion avoidance and slow start simultaneously. Traffic drops sharply, then climbs back to a peak, so the total traffic volume fluctuates widely. In Figure 13-23, the three colors represent three TCP connections.

    Figure 13-23 Tail drop policy
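    As a rough illustration, tail drop can be modeled as a fixed-length buffer that discards any packet arriving while the buffer is full. This is a sketch only; the class name and the packet-count capacity are assumptions, not a product interface.

      from collections import deque

      class TailDropQueue:
          """Minimal tail-drop model: a full queue discards every newly arriving packet."""

          def __init__(self, max_len):
              self.max_len = max_len      # maximum queue length, in packets
              self.queue = deque()

          def enqueue(self, packet):
              if len(self.queue) >= self.max_len:
                  return False            # tail drop: the arriving packet is discarded
              self.queue.append(packet)
              return True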

  • WRED

    To prevent global TCP synchronization, Random Early Detection (RED) is used. The RED mechanism discards packets randomly so that the transmission rates of multiple TCP connections are not reduced at the same time. In this manner, global TCP synchronization is prevented, and the rates of TCP traffic and overall network traffic become stable.

    Figure 13-24 RED

    The device provides Weighted Random Early Detection (WRED) based on RED technology. When a drop profile is applied to an interface or an interface queue, packets are discarded according to the drop profile. The drop profile defines a lower drop threshold, an upper drop threshold, and a maximum drop probability. When the length of a queue is below the lower drop threshold, no packets are discarded. When the length of a queue exceeds the upper drop threshold, all newly arriving packets are discarded. When the length of a queue is between the lower and upper drop thresholds, new packets are discarded randomly: the longer the queue, the higher the drop probability, up to the maximum drop probability. A minimal sketch of this drop decision follows.
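    The following sketch models the drop decision as a function of the current queue length and the drop profile parameters described above. It is a simplified illustration, not the device's implementation; in particular, real WRED typically evaluates an average queue length rather than the instantaneous one.

      import random

      def wred_should_drop(queue_len, low_threshold, high_threshold, max_drop_prob):
          """Decide whether to drop a newly arriving packet.

          Below the lower threshold nothing is dropped; at or above the upper
          threshold every new packet is dropped; in between, the drop
          probability rises linearly up to max_drop_prob (e.g. 0.5 for 50%).
          """
          if queue_len < low_threshold:
              return False
          if queue_len >= high_threshold:
              return True
          drop_prob = max_drop_prob * (queue_len - low_threshold) / (high_threshold - low_threshold)
          return random.random() < drop_prob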

Congestion Management

As network services increase and users demand higher network quality, limited bandwidth cannot meet requirements, and congestion causes delay and packet loss. Congestion management is required when a network is congested intermittently and delay-sensitive services require higher QoS than delay-insensitive services. If congestion persists after congestion management is configured, the bandwidth needs to be increased. Congestion management queues and schedules packet flows before sending them.

A device interface supports PQ, DRR, PQ+DRR, WRR, and PQ+WRR for congestion management.

On the device, each interface has eight queues in the outbound direction, identified by index numbers 0 to 7. Based on the mappings between local priorities and queues, the device places classified packets into these queues and then schedules the packets using queue scheduling mechanisms.
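The classification step can be pictured as a simple lookup from local priority to queue index. The sketch below is illustrative only; a one-to-one mapping is assumed, whereas on the device the mapping is configurable. The same list-of-queues layout is reused in the scheduling sketches later in this section.

    from collections import deque

    # Eight outbound queues per interface, indexed 0-7.
    queues = [deque() for _ in range(8)]

    def enqueue(packet, local_priority):
        """Place a classified packet into the queue mapped to its local priority.

        A one-to-one mapping (priority n -> queue n) is assumed here for brevity.
        """
        queues[local_priority].append(packet)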

  • PQ scheduling

    PQ scheduling is designed for core services and serves queues in descending order of priority. Queues with lower priorities are processed only after all queues with higher priorities are empty. In PQ scheduling, packets of core services are placed into higher-priority queues, and packets of non-core services such as email are placed into lower-priority queues. Core services are processed first, and non-core services are sent in the intervals when no core services are waiting.

    As shown in Figure 13-25, queues 7 to 0 are in descending order of priority. The packets in queue 7 are processed first. The scheduler processes packets in queue 6 only after queue 7 becomes empty: the packets in queue 6 are sent at the link rate when queue 6 has packets to send and queue 7 is empty. The packets in queue 5 are sent at the link rate when queues 7 and 6 are empty, and so on.

    PQ scheduling suits short-delay services. Assume that data flow X is mapped to the highest-priority queue on each node. When packets of data flow X reach a node, they are processed first.

    The PQ scheduling mechanism, however, may starve packets in queues with lower priorities. For example, if data flows mapped to queue 7 arrive at 100% of the link rate for a period, the scheduler does not process flows in queues 0 to 6.

    To prevent starvation of packets in some queues, upstream devices need to accurately define service characteristics of data flows so that service flows mapped to queue 7 do not exceed a certain percentage of the link capacity. By doing this, queue 7 is not full and the scheduler can process packets in queues with lower priorities.

Figure 13-25 PQ scheduling
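    A strict-priority dequeue can be sketched as follows, reusing the list-of-queues layout assumed earlier (an illustration only, not the device's implementation): always serve the highest-indexed non-empty queue.

      def pq_dequeue(queues):
          """Strict priority: serve the highest-priority non-empty queue.

          `queues` is a list of 8 deques, where index 7 is the highest priority.
          Returns the dequeued packet, or None if every queue is empty.
          """
          for index in range(7, -1, -1):
              if queues[index]:
                  return queues[index].popleft()
          return None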

  • WRR scheduling

    Weighted Round Robin (WRR) scheduling is an extension of Round Robin (RR) scheduling. Packets in each queue are scheduled in a round-robin manner based on the queue weight. RR scheduling is equivalent to WRR scheduling with every queue weight set to 1.

    Figure 13-26 shows WRR scheduling.

    Figure 13-26 WRR scheduling

    In WRR scheduling, the device schedules the queues round by round based on the queue weights. After each round of scheduling, the weight of every scheduled queue is decreased by 1; a queue whose weight has reached 0 is no longer scheduled. When the weights of all queues have reached 0, the next cycle of scheduling starts. For example, if the weights of the eight queues on an interface are set to 4, 2, 5, 3, 6, 4, 2, and 1, Table 13-14 lists the WRR scheduling results.

    Table 13-14 WRR scheduling results

    Queue weights: queue 7 = 4, queue 6 = 2, queue 5 = 5, queue 4 = 3, queue 3 = 6, queue 2 = 4, queue 1 = 2, queue 0 = 1

    Round 1: queues 7, 6, 5, 4, 3, 2, 1, 0
    Round 2: queues 7, 6, 5, 4, 3, 2, 1
    Round 3: queues 7, 5, 4, 3, 2
    Round 4: queues 7, 5, 3, 2
    Round 5: queues 5, 3
    Round 6: queue 3
    Round 7 (new cycle): queues 7, 6, 5, 4, 3, 2, 1, 0
    Round 8: queues 7, 6, 5, 4, 3, 2, 1
    Round 9: queues 7, 5, 4, 3, 2
    Round 10: queues 7, 5, 3, 2
    Round 11: queues 5, 3
    Round 12: queue 3

    The results show that the number of times each queue is scheduled corresponds to its weight: the higher the weight, the more often the queue is scheduled. However, WRR schedules in units of packets, so there is no fixed bandwidth per queue; when two queues are scheduled equally often, the queue carrying large packets obtains more bandwidth than the queue carrying small packets.

    WRR scheduling offsets the disadvantage of PQ scheduling, in which packets in lower-priority queues may not be processed for a long time. In addition, WRR uses scheduling time flexibly: if a queue is empty, WRR skips it and schedules the next queue, which improves bandwidth usage. WRR scheduling, however, cannot schedule short-delay services in time.
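    The round-by-round behavior in Table 13-14 can be reproduced with a short simulation (a sketch of the scheduling order only; it ignores packet sizes and bandwidth): each queue starts a cycle with a counter equal to its weight, every queue with a positive counter is scheduled once per round and its counter is decremented, and the cycle restarts when all counters reach 0.

      def wrr_rounds(weights, num_rounds):
          """Yield, for each round, the list of queue indexes scheduled in that round.

          `weights` maps queue index -> weight, e.g. the weights used in Table 13-14.
          """
          counters = dict(weights)
          for _ in range(num_rounds):
              if all(count == 0 for count in counters.values()):
                  counters = dict(weights)      # all weights exhausted: start a new cycle
              scheduled = [q for q in sorted(counters, reverse=True) if counters[q] > 0]
              for q in scheduled:
                  counters[q] -= 1
              yield scheduled

      # Prints the scheduling order of the first six rounds in Table 13-14.
      weights = {7: 4, 6: 2, 5: 5, 4: 3, 3: 6, 2: 4, 1: 2, 0: 1}
      for round_number, scheduled in enumerate(wrr_rounds(weights, 6), start=1):
          print(round_number, scheduled)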

  • DRR scheduling

    Deficit Round Robin (DRR) scheduling is also based on RR. DRR solves the problem of WRR scheduling that bandwidth is allocated per packet, so a queue carrying small packets obtains less bandwidth than a queue carrying large packets. DRR takes the packet length into account when scheduling, ensuring that queues share bandwidth fairly.

    The deficit indicates the bandwidth credit of each queue; its initial value is 0. In each round, the system credits each queue with bandwidth based on its weight and recalculates the deficit. If the deficit of a queue is greater than 0, the queue participates in scheduling: the device sends a packet and subtracts the length of the sent packet from the deficit. If the deficit of a queue is 0 or smaller, the queue does not participate in scheduling, and the current deficit is carried over to the next round.

    Figure 13-27 Queue weights

    In Figure 13-27, the weights of Q7, Q6, Q5, Q4, Q3, Q2, Q1, and Q0 are set to 40, 30, 20, 10, 40, 30, 20, and 10 respectively. During scheduling, Q7, Q6, Q5, Q4, Q3, Q2, Q1, and Q0 therefore obtain 20%, 15%, 10%, 5%, 20%, 15%, 10%, and 5% of the bandwidth respectively. Q7 and Q6 are used as examples to describe DRR scheduling. Assume that Q7 obtains 400 bytes of bandwidth and Q6 obtains 300 bytes of bandwidth in each round.

    • First round of scheduling

      Deficit[7][1] = 0 + 400 = 400

      Deficit[6][1] = 0 + 300 = 300

      After a 900-byte packet in Q7 and a 400-byte packet in Q6 are sent, the values are as follows:

      Deficit[7][1] = 400 - 900 = -500

      Deficit[6][1] = 300 - 400 = -100

    • Second round of scheduling

      Deficit[7][2] = -500 + 400 = -100

      Deficit[6][2] = -100 + 300 = 200

      No packet in Q7 is scheduled because the deficit of Q7 is negative. After a 300-byte packet in Q6 is sent, the value is as follows:

      Deficit[6][2] = 200 - 300 = -100

    • Third round of scheduling

      Deficit[7][3] = -100 + 400 = 300

      Deficit[6][3] = -100 + 300 = 200

      After a 600-byte packet in Q7 and a 500-byte packet in Q6 are sent, the values are as follows:

      Deficit[7][3] = 300 - 600 = -300

      Deficit[6][3] = 200 - 500 = -300

      This process repeats, and over time Q7 and Q6 obtain 20% and 15% of the bandwidth respectively. This shows that you can obtain the required bandwidth allocation by setting the queue weights.

    In DRR scheduling, short-delay services still cannot be scheduled in time.
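    The deficit arithmetic above can be checked with a small simulation. The sketch below follows the simplified rule described in this section (credit the quantum, then send at most one packet per round if the deficit is positive); real DRR implementations typically send as many packets as the accumulated deficit allows. The quanta of 400 and 300 bytes and the packet lengths are those of the Q7/Q6 example.

      def drr_simulate(quanta, packets, num_rounds):
          """Simplified DRR: each round, credit every backlogged queue with its
          quantum; a queue with a positive deficit sends its head packet and the
          packet length is subtracted from the deficit.
          """
          deficits = {queue: 0 for queue in quanta}
          for round_number in range(1, num_rounds + 1):
              for queue, quantum in quanta.items():
                  if not packets[queue]:
                      continue                      # empty queues are skipped
                  deficits[queue] += quantum
                  if deficits[queue] > 0:
                      sent = packets[queue].pop(0)  # send the head packet
                      deficits[queue] -= sent
              print(round_number, deficits)

      # Reproduces the three rounds of the Q7/Q6 example above.
      drr_simulate(
          quanta={"Q7": 400, "Q6": 300},
          packets={"Q7": [900, 600], "Q6": [400, 300, 500]},
          num_rounds=3,
      )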

  • PQ+WRR scheduling

    PQ scheduling and WRR scheduling each have advantages and disadvantages. PQ+WRR scheduling offsets the disadvantages of both: queues with lower priorities can obtain bandwidth through WRR scheduling, and short-delay services can be scheduled first through PQ scheduling.

    On the device, you can set WRR parameters for queues. The eight queues on each interface are classified into two groups: queues 7, 6, and 5 are scheduled in PQ mode, and queues 4, 3, 2, 1, and 0 are scheduled in WRR mode. Figure 13-28 shows PQ+WRR scheduling.

    Figure 13-28 PQ+WRR scheduling

    During scheduling, the device first schedules traffic in queues 7, 6, and 5 in PQ mode. The device schedules traffic in the other queues in WRR mode only after queues 7, 6, and 5 are empty. Queues 4, 3, 2, 1, and 0 have their own weights. Important protocol packets and short-delay service packets should be placed in the queues using PQ scheduling so that they are scheduled first; other packets are placed in the queues using WRR scheduling.
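    The combination can be sketched by composing the two mechanisms, reusing the list-of-queues layout assumed earlier (an illustration only; the counter handling is a simplification): serve queues 7-5 strictly by priority, and only when they are all empty pick a packet from queues 4-0 according to WRR counters.

      def pq_wrr_dequeue(queues, wrr_counters, wrr_weights):
          """Serve queues 7-5 in strict priority; queues 4-0 share the remaining
          bandwidth in WRR fashion using per-queue counters initialized to the weights.
          """
          for index in (7, 6, 5):                   # PQ group: always served first
              if queues[index]:
                  return queues[index].popleft()
          backlogged = [i for i in (4, 3, 2, 1, 0) if queues[i]]
          if backlogged and all(wrr_counters[i] == 0 for i in backlogged):
              for i in backlogged:                  # start a new WRR cycle
                  wrr_counters[i] = wrr_weights[i]
          for index in backlogged:
              if wrr_counters[index] > 0:
                  wrr_counters[index] -= 1
                  return queues[index].popleft()
          return None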

  • PQ+DRR scheduling

    Similar to PQ+WRR, PQ+DRR scheduling offsets disadvantages of PQ scheduling and DRR scheduling. If only PQ scheduling is used, packets in queues with lower priorities cannot obtain bandwidth for a long period of time. If only DRR scheduling is used, short-delay services such as voice services cannot be scheduled first. PQ+DRR scheduling has advantages of both PQ and DRR scheduling and offsets their disadvantages.

    The eight queues on a device interface are classified into two groups. You can specify PQ scheduling for one group and DRR scheduling for the other.

    Figure 13-29 PQ+DRR scheduling

    As shown in Figure 13-29, the device first schedules traffic in queues 7, 6, and 5 in PQ mode. After traffic in queues 7, 6, and 5 has been scheduled, the device schedules traffic in queues 4, 3, 2, 1, and 0 in DRR mode. Queues 4, 3, 2, 1, and 0 have their own weights.

    Important protocol packets or short-delay service packets must be placed in queues using PQ scheduling so that they can be scheduled first. Other packets are placed in queues using DRR scheduling.
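    As with PQ+WRR, the composition can be sketched per round (an illustration only, using the simplified DRR rule above; the quanta and helper names are assumptions): drain queues 7-5 strictly by priority, then give queues 4-0 one round of deficit-based scheduling.

      def pq_drr_round(queues, deficits, quanta, send):
          """One scheduling round: drain queues 7-5 in strict priority, then give
          queues 4-0 one round of the simplified deficit rule.

          `queues` is a list of 8 deques holding packet lengths in bytes;
          `send` is a callback that transmits one packet.
          """
          for index in (7, 6, 5):                   # PQ group: drained first
              while queues[index]:
                  send(queues[index].popleft())
          for index in (4, 3, 2, 1, 0):             # DRR group: one round each
              if not queues[index]:
                  continue
              deficits[index] += quanta[index]
              if deficits[index] > 0:
                  packet = queues[index].popleft()
                  deficits[index] -= packet         # packet length in bytes
                  send(packet)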
