Fat AP and Cloud AP V200R008C00 CLI-based Configuration Guide


Understanding WLAN QoS

WMM

Background

It is vital to understand the 802.11 link layer transport mechanism before learning about WMM.

The 802.11 MAC layer uses the coordination function to determine the data transmitting and receiving methods used between STAs in a BSS. The 802.11 MAC layer consists of two sub-layers:
  • Distributed Coordination Function (DCF): uses the carrier sense multiple access with collision avoidance (CSMA/CA) mechanism. STAs compete for channels to obtain the authority to transmit data frames.
  • Point Coordination Function (PCF): uses centralized control to authorize STAs to transmit data frames in turn. This method prevents conflict.
NOTE:

In the 802.11 protocol, DCF is mandatory, and PCF is optional.

Figure 27-5 shows how CSMA/CA is implemented.
Figure 27-5  CSMA/CA working mechanism

  1. Before sending data to STA B, STA A detects the channel status. When detecting an idle channel, STA A sends a data frame after the Distributed Inter-Frame Space (DIFS) times out and waits for a response from STA B. The data frame carries Network Allocation Vector (NAV) information. Other STAs that receive the data frame update their NAV, which indicates that the channel is busy and that their data transmission must be deferred.
    NOTE:

    According to the 802.11 protocol, the receiver must return an ACK frame each time it receives a data frame.

  2. After receiving the data frame, STA B waits until the Short Interframe Space (SIFS) times out and then sends an ACK frame to STA A. After the ACK frame is transmitted, the channel becomes idle. After the DIFS times out, the STAs use the exponential backoff algorithm to compete for the channel. The STA whose backoff counter first reaches 0 starts to send data frames.

Concepts

  • InterFrame Space (IFS): According to the 802.11 protocol, after sending a data frame, the STA must wait until the IFS times out to send the next data frame. The IFS length depends on the data frame type. High-priority data frames are sent earlier than low-priority data frames. There are three IFS types:
    • Short IFS (SIFS): The time interval between a data frame and its ACK frame. SIFS is used for high priority transmissions, such as ACK and CTS frame transmissions.
    • PCF IFS (PIFS): PIFS length is SIFS plus slot time. PCF-enabled access points wait for the duration of PIFS to occupy the wireless medium. If a STA accesses a channel when the slot time starts, the other STAs in the BSS detect that the channel is busy.
    • DCF IFS (DIFS): DIFS length is PIFS plus slot time. Data frames and management frames are transmitted at the DIFS interval.
  • Contention window: determines the random backoff time. The backoff time is a multiple of the slot time, and the slot time length depends on the physical layer technology. When multiple STAs need to transmit data but detect that the channel is busy, they use the backoff algorithm: each STA waits a random number of slot times before transmitting. A STA detects the channel status at each slot time interval. When it detects an idle channel, the STA starts its backoff timer. If the channel becomes busy, the STA freezes the remaining time in the backoff timer. When the channel becomes idle again, the STA waits until the DIFS times out and then resumes the backoff timer. When the backoff timer reaches 0, the STA starts to send data frames. Figure 27-6 shows the data frame transmission process; a short simulation sketch is provided after this list.

    Figure 27-6  Backoff algorithm diagram

    1. STA C is occupying a channel to send data frames. STA D, STA E, and STA F also need to send data frames. They detect that the channel is busy and wait.
    2. After STA C finishes data frame transmission, the other STAs wait until DIFS times out. When DIFS times out, the STAs generate a random backoff time and start their backoff timers. For example, the backoff time of STA D is t1, the backoff time of STA E is t1+t3, and the backoff time of STA F is t1+t2.
    3. When t1 times out, the backoff timer of STA D is reduced to 0. STA D starts to send data frames.
    4. STA E and STA F detect that the channel is busy, so they freeze their backoff timers and wait. After STA D completes data transmission, STA E and STA F wait until DIFS times out, and continue their backoff timers.
    5. When t2 times out, the backoff timer of STA F is reduced to 0. STA F starts to send data frames.
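
The backoff procedure in steps 1 to 5 can be illustrated with a short simulation. The following is a minimal sketch, assuming abstract slot-time units and ignoring DIFS and frame durations; the STA names and backoff values mirror Figure 27-6 and are purely illustrative.

  # Minimal sketch of the DCF backoff procedure (abstract slot-time units).
  # STAs that lose contention freeze their remaining backoff and resume later.
  def run_backoff(stations):
      """stations: {name: initial backoff in slot times} for STAs waiting to send."""
      order = []
      while stations:
          # The STA whose backoff counter reaches 0 first wins the channel.
          winner = min(stations, key=stations.get)
          elapsed = stations.pop(winner)
          order.append(winner)
          # The other STAs freeze their timers; the elapsed slots are deducted
          # when contention resumes after the next DIFS.
          for name in stations:
              stations[name] -= elapsed
      return order

  # Backoff times generated after STA C releases the channel (t1 < t1+t2 < t1+t3).
  t1, t2, t3 = 2, 3, 5
  print(run_backoff({"STA D": t1, "STA E": t1 + t3, "STA F": t1 + t2}))
  # ['STA D', 'STA F', 'STA E'], matching the order in Figure 27-6
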
Principles

Channel competition is based on DCF. For all STAs, the DIFS is fixed and the backoff time is random, so all STAs compete for the channel fairly. WMM enhances the 802.11 protocol by changing the channel competition mode.

  • EDCA parameters

    WMM defines a set of Enhanced Distributed Channel Access (EDCA) parameters, which differentiate high-priority packets and enable these packets to preempt the channel.

    WMM classifies data packets into four access categories (ACs). Table 27-7 shows the mappings between ACs and 802.11 user priorities (UPs); a brief sketch of this mapping is provided after this list. A larger UP value indicates a higher priority.
    Table 27-7  Mappings between ACs and UPs

    UP      AC
    7, 6    AC_VO (Voice)
    5, 4    AC_VI (Video)
    3, 0    AC_BE (Best Effort)
    2, 1    AC_BK (Background)

    Each AC queue defines a set of EDCA parameters, which determine its capability of occupying the channel. These parameters ensure that high-priority ACs have a higher probability of preempting the channel than low-priority ACs.

    Table 27-8 describes the EDCA parameters.
    Table 27-8  EDCA parameter description

    • Arbitration Interframe Spacing Number (AIFSN): In DCF, the DIFS has a fixed value. WMM provides a different interframe space for each AC through the AIFSN. A larger AIFSN value means that the STA must wait longer and has a lower priority.

    • Exponent form of CWmin (ECWmin) and exponent form of CWmax (ECWmax): ECWmin specifies the minimum backoff time and ECWmax specifies the maximum backoff time. Together, they determine the average backoff time. Larger ECWmin and ECWmax values mean a longer average backoff time for the STA and a lower STA priority.

    • Transmission Opportunity Limit (TXOPLimit): After preempting the channel, a STA can occupy the channel for the TXOPLimit period. A larger TXOPLimit value means that the STA can occupy the channel for a longer time. If the TXOPLimit value is 0, the STA can send only one data frame each time it preempts the channel.

    As shown in Figure 27-7, the AIFSN (AIFSN[6]) and the backoff time of voice packets are shorter than those of Best Effort packets. When both voice packets and Best Effort packets need to be sent, voice packets preempt the channel.

    Figure 27-7  WMM working mechanism

  • ACK policy

    WMM defines two ACK policies: normal ACK and no ACK.

    • Normal ACK: The receiver must return an ACK frame each time it receives a unicast packet.

    • No ACK: The receiver does not need to return ACK frames after receiving packets. This mode is applicable to environments with high communication quality and little interference.

      NOTE:
      • The ACK policy takes effect only on APs.
      • If communication quality is poor, the no ACK policy may cause more packets to be lost.
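
As a concrete illustration of Table 27-7, the following sketch maps an 802.11 user priority to its WMM access category. The helper name and return strings are assumptions for illustration only.

  # Sketch of the UP-to-AC mapping in Table 27-7 (illustrative helper).
  UP_TO_AC = {
      7: "AC_VO", 6: "AC_VO",   # Voice
      5: "AC_VI", 4: "AC_VI",   # Video
      3: "AC_BE", 0: "AC_BE",   # Best Effort
      2: "AC_BK", 1: "AC_BK",   # Background
  }

  def access_category(user_priority):
      """Return the WMM access category for an 802.11 user priority (0-7)."""
      return UP_TO_AC[user_priority]

  assert access_category(6) == "AC_VO"   # UP 6 and 7 map to the voice queue
  assert access_category(0) == "AC_BE"   # UP 0 maps to Best Effort, not Background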

Priority Mapping

Packets of different types have different priorities. For example, 802.11 packets sent by STAs carry user priorities or DSCP priorities, VLAN packets on wired networks carry 802.1p priorities, and IP packets carry DSCP priorities. Priority mapping must be configured on network devices to retain the priorities of packets that traverse different networks.

Figure 27-8  Priority mapping diagram
As shown in Figure 27-8:
  1. In the upstream direction, the AP maps the user priority or DSCP priority of 802.11 packets received from STAs to the DSCP priority of tunnel packets.
  2. In the downstream direction, the AP maps the DSCP priority of 802.3 packets to the user priority of 802.11 packets.
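
A minimal sketch of this mapping logic is shown below. The mapping values are assumptions for illustration only; on the device, the priority mapping table is configurable.

  # Illustrative priority mapping (hypothetical values; the real table is configurable).
  UP_TO_DSCP = {up: up * 8 for up in range(8)}          # e.g. UP 6 -> DSCP 48
  DSCP_TO_UP = {dscp: dscp // 8 for dscp in range(64)}

  def map_upstream(up_80211):
      """Upstream: map the 802.11 user priority to the DSCP of the tunnel packet."""
      return UP_TO_DSCP[up_80211]

  def map_downstream(dscp_8023):
      """Downstream: map the DSCP of the 802.3 packet to the 802.11 user priority."""
      return DSCP_TO_UP[dscp_8023]

  print(map_upstream(6), map_downstream(46))   # 48 5
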
Precedence Field

As defined in RFC 791, the 8-bit ToS field in an IP packet header contains a 3-bit IP precedence field, as shown in Figure 27-9.

Figure 27-9  Precedence/DSCP field

Bits 0 to 2 constitute the Precedence field, representing precedence values 7, 6, 5, 4, 3, 2, 1 and 0 in descending order of priority.

Apart from the Precedence field, a ToS field also contains the following sub-fields:

  • Bit D indicates the delay. The value 0 represents a normal delay and the value 1 represents a short delay.

  • Bit T indicates the throughput. The value 0 represents normal throughput and the value 1 represents high throughput.

  • Bit R indicates the reliability. The value 0 represents normal reliability and the value 1 represents high reliability.

DSCP Field

RFC 1349 later redefined the ToS field in IP packets and added bit C, which indicates the monetary cost. Subsequently, the IETF DiffServ Working Group redefined bits 0 to 5 of the ToS field as the DSCP field in RFC 2474, which also renamed the field from ToS to Differentiated Services (DS). Figure 27-9 shows the DSCP field in packets.

In the DS field, the first six bits (bits 0 to 5) are the Differentiated Services Code Point (DSCP) and the last two bits (bits 6 and 7) are reserved. The first three bits of the DSCP (bits 0 to 2) are the Class Selector Code Point (CSCP), which indicates the DSCP type. A DS node selects a Per-Hop Behavior (PHB) based on the DSCP value.
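
The following sketch shows how the Precedence and DSCP values are read from the ToS/DS byte described above (bit 0 in Figure 27-9 is the most significant bit of the byte):

  # Extract the Precedence and DSCP fields from the ToS/DS byte of an IPv4 header.
  def precedence(tos_byte):
      """IP precedence: bits 0 to 2 (the three most significant bits)."""
      return (tos_byte >> 5) & 0x07

  def dscp(tos_byte):
      """DSCP: bits 0 to 5 (the six most significant bits); bits 6 and 7 are reserved."""
      return (tos_byte >> 2) & 0x3F

  tos = 0xB8                 # DS byte 0xB8 corresponds to DSCP 46 (EF)
  print(precedence(tos))     # 5
  print(dscp(tos))           # 46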

802.1p Field

Layer 2 devices exchange Ethernet frames. As defined in IEEE 802.1Q, the PRI field (802.1p field) in the Ethernet frame header identifies the Class of Service (CoS) requirement. Figure 27-10 shows the PRI field in Ethernet frames.

Figure 27-10  802.1p field in the VLAN frame header

The 802.1Q header contains a 3-bit PRI field, representing eight service priorities 7, 6, 5, 4, 3, 2, 1 and 0 in descending order of priority.
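
Similarly, the PRI value can be read from the 16-bit Tag Control Information (TCI) field of the 802.1Q tag, where the PRI occupies the three most significant bits. A minimal sketch:

  # Extract the 802.1p priority (PRI) from the 16-bit TCI field of an 802.1Q tag.
  def dot1p_priority(tci):
      """PRI: the three most significant bits of the TCI."""
      return (tci >> 13) & 0x07

  tci = 0xA064                 # PRI = 5, VLAN ID = 100
  print(dot1p_priority(tci))   # 5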

Traffic Policing

To limit traffic within a specified range and protect network resources, traffic policing discards excess traffic.

Traffic policing is implemented using the token bucket.

A token bucket has a specified capacity to store tokens. The system places tokens into a token bucket at the configured rate. If the token bucket is full, excess tokens overflow and no token is added.

When traffic is assessed, packets are forwarded based on the number of tokens in the token bucket. A packet can be forwarded only if there are enough tokens in the bucket, which keeps the traffic rate within the rate limit.

The working mechanisms of token buckets include single bucket at a single rate, dual buckets at a single rate, and dual buckets at dual rates.

Single Bucket at a Single Rate

If burst traffic is not allowed, one token bucket is used.

Figure 27-11  Single bucket at a single rate

In Figure 27-11, the bucket is called bucket C. Tc indicates the number of tokens within. A single bucket at a single rate uses the following parameters:
  • Committed Information Rate (CIR): indicates the rate at which tokens are put into bucket C, that is, the average traffic rate permitted by bucket C.
  • Committed burst size (CBS): indicates the capacity of bucket C, that is, maximum volume of burst traffic allowed by bucket C each time.

The system places tokens into the bucket at the CIR. If Tc is smaller than the CBS, Tc increases. If Tc is greater than or equal to the CBS, Tc remains unchanged.

B indicates the size of an arriving packet:
  • If B is smaller than or equal to Tc, the packet is colored green, and Tc decreases by B.
  • If B is greater than Tc, the packet is colored red, and Tc remains unchanged.
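
A minimal sketch of this single-rate, single-bucket algorithm follows. Time units are abstract and the class name is illustrative, not the device implementation.

  # Sketch of a single-rate, single-bucket policer (green/red marking).
  class SingleRateSingleBucket:
      def __init__(self, cir, cbs):
          self.cir, self.cbs = cir, cbs    # token rate (CIR) and bucket capacity (CBS)
          self.tc = cbs                    # bucket C starts full
          self.last = 0.0

      def _refill(self, now):
          # Tokens are added at the CIR; Tc never exceeds the CBS.
          self.tc = min(self.cbs, self.tc + (now - self.last) * self.cir)
          self.last = now

      def color(self, now, size):
          self._refill(now)
          if size <= self.tc:
              self.tc -= size              # enough tokens: the packet is green
              return "green"
          return "red"                     # not enough tokens: red, Tc unchanged

  policer = SingleRateSingleBucket(cir=1000, cbs=1500)
  print(policer.color(0.0, 1500), policer.color(0.0, 100))   # green red
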
Dual Buckets at a Single Rate

Dual buckets at a single rate use the single rate three color marker (srTCM) defined in RFC 2697 to assess traffic and mark packets green, yellow, or red based on the assessment result.

Figure 27-12  Dual buckets at a single rate

As shown in Figure 27-12, the two buckets are called bucket C and bucket E. Tc indicates the number of tokens in bucket C, and Te indicates the number of tokens in bucket E. Dual buckets at a single rate use the following parameters:
  • CIR: indicates the rate at which tokens are put into bucket C, that is, average traffic rate permitted by bucket C.
  • CBS: indicates the capacity of bucket C, that is, maximum volume of burst traffic allowed by bucket C each time.
  • Excess burst size (EBS): indicates the capacity of bucket E, that is, maximum volume of excess burst traffic allowed by bucket E each time.
The system places tokens into the bucket at the CIR:
  • If Tc is smaller than the CBS, Tc increases.
  • If Tc is equal to the CBS and Te is smaller than the EBS, Te increases.
  • If Tc is equal to the CBS and Te is equal to the EBS, Tc and Te do not increase.
B indicates the size of an arriving packet:
  • If B is smaller than or equal to Tc, the packet is colored green, and Tc decreases by B.
  • If B is larger than Tc and smaller than or equal to Te, the packet is colored yellow and Te decreases by B.
  • If B is larger than Te, the packet is colored red, and Tc and Te remain unchanged.
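
A minimal sketch of the srTCM marking logic described above (color-blind mode, abstract time units; the class name is illustrative):

  # Sketch of srTCM (RFC 2697, color-blind mode): one rate (CIR), buckets C and E.
  class SrTcm:
      def __init__(self, cir, cbs, ebs):
          self.cir, self.cbs, self.ebs = cir, cbs, ebs
          self.tc, self.te = cbs, ebs      # both buckets start full
          self.last = 0.0

      def _refill(self, now):
          tokens = (now - self.last) * self.cir
          self.last = now
          # Bucket C is filled first; overflow spills into bucket E.
          room_c = self.cbs - self.tc
          self.tc += min(tokens, room_c)
          self.te = min(self.ebs, self.te + max(0.0, tokens - room_c))

      def color(self, now, size):
          self._refill(now)
          if size <= self.tc:
              self.tc -= size
              return "green"
          if size <= self.te:
              self.te -= size
              return "yellow"
          return "red"                     # Tc and Te remain unchanged
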
Dual Buckets at Dual Rates

Dual buckets at dual rates use the two rate three color marker (trTCM) defined in RFC 2698 to assess traffic and mark packets green, yellow, or red based on the assessment result.

Figure 27-13  Dual buckets at dual rates

As shown in Figure 27-13, the two buckets are called bucket P and bucket C. Tp indicates the number of tokens in bucket P, and Tc indicates the number of tokens in bucket C. Dual buckets at dual rates use the following parameters:
  • Peak information rate (PIR): indicates the rate at which tokens are put into bucket P, that is, maximum traffic rate permitted by bucket P. The PIR must be greater than the CIR.
  • CIR: indicates the rate at which tokens are put into bucket C, that is, average traffic rate permitted by bucket C.
  • Peak burst size (PBS): indicates the capacity of bucket P, that is, maximum volume of burst traffic allowed by bucket P each time.
  • CBS: indicates the capacity of bucket C, that is, maximum volume of burst traffic allowed by bucket C each time.
The system places tokens into bucket P at the PIR and places tokens into bucket C at the CIR:
  • If Tp is smaller than the PBS, Tp increases. If Tp is larger than or equal to the PBS, Tp remains unchanged.
  • If Tc is smaller than the CBS, Tc increases. If Tc is larger than or equal to the CBS, Tc remains unchanged.
B indicates the size of an arriving packet:
  • If B is larger than Tp, the packet is colored red.
  • If B is larger than Tc and smaller than or equal to Tp, the packet is colored yellow and Tp decreases by B.
  • If B is smaller than or equal to Tc, the packet is colored green, and Tp and Tc decrease by B.
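
A minimal sketch of the trTCM marking logic (color-blind mode, abstract time units; the class name is illustrative):

  # Sketch of trTCM (RFC 2698, color-blind mode): rates PIR/CIR, buckets P and C.
  class TrTcm:
      def __init__(self, pir, cir, pbs, cbs):
          self.pir, self.cir = pir, cir
          self.pbs, self.cbs = pbs, cbs
          self.tp, self.tc = pbs, cbs      # both buckets start full
          self.last = 0.0

      def _refill(self, now):
          dt = now - self.last
          self.last = now
          self.tp = min(self.pbs, self.tp + dt * self.pir)   # bucket P filled at the PIR
          self.tc = min(self.cbs, self.tc + dt * self.cir)   # bucket C filled at the CIR

      def color(self, now, size):
          self._refill(now)
          if size > self.tp:
              return "red"                 # exceeds the peak rate: Tp and Tc unchanged
          if size > self.tc:
              self.tp -= size
              return "yellow"              # within the PIR but above the CIR
          self.tp -= size
          self.tc -= size
          return "green"                   # within both rates
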
Implementation of Traffic Policing
Figure 27-14  Traffic policing components

As shown in Figure 27-14, traffic policing involves the following components:

  • Meter: measures the network traffic using the token bucket mechanism and sends the measurement result to the marker.

  • Marker: colors packets in green, yellow, or red based on the measurement result received from the meter.

  • Action: performs actions based on packet coloring results received from the marker. The following actions are defined:

    • Pass: forwards packets that meet network requirements.

    • Remark + pass: changes the local priorities of packets and forwards them.

    • Discard: drops packets that do not meet network requirements.

    By default, green and yellow packets are forwarded, while red packets are discarded.

If the rate of a type of traffic exceeds the threshold, the device either re-marks the excess packets with a lower priority and forwards them or directly discards them, depending on the traffic policing configuration. By default, the excess packets are discarded.
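
The default mapping from packet color to action can be summarized in a small sketch (the action names mirror the list above; the actual behavior depends on the traffic policing configuration):

  # Default color-to-action mapping for traffic policing (illustrative).
  DEFAULT_ACTIONS = {
      "green": "pass",       # conforming traffic is forwarded
      "yellow": "pass",      # excess traffic is forwarded by default (may be re-marked)
      "red": "discard",      # violating traffic is dropped by default
  }

  def police_action(color):
      return DEFAULT_ACTIONS[color]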

Airtime Scheduling

Overview

Airtime scheduling schedules channel resources based on the channel occupation time of users connected to the same radio. Each user is assigned equal time to occupy the channel, ensuring fairness in channel usage.

On a WLAN, the physical layer rates of users differ greatly because of the different radio modes supported by the terminals and the different radio environments where the terminals reside. If users with lower physical layer rates occupy the wireless channel for a long time, the experience of all users on the WLAN is affected. When airtime scheduling is enabled, all users occupy the wireless channel for an equal time, which improves the overall user experience when high-speed and low-speed users are connected at the same time.

Principles
After airtime scheduling is enabled, the device does the following:
  • Collects statistics on the time within which each user occupies a wireless channel to send packets on the same radio.
  • Calculates the total sum of time that each user occupies the wireless channel.
  • Sequences the STAs in ascending order of channel occupation time.
Compared with traditional scheduling modes, airtime scheduling provides the following additional functions:
  • Inserts new users to specified positions according to their wireless channel occupation time. In traditional scheduling modes, new users are placed at the end of the user queue.
  • Checks whether a user still has data to send after it finishes sending the first queue of data. If so, the user is re-inserted into the queue according to its wireless channel occupation time, and the device preferentially schedules channel resources for the user with the shortest channel occupation time. If the user has no more data to send, the device directly schedules channel resources for the next user.
Figure 27-15 shows the airtime scheduling process.
Figure 27-15  Airtime scheduling process
There are four users on a radio waiting to transmit data. They have occupied the channel for a time of 3, 4, 6, and 7 respectively, and require a corresponding time of 2, 4, 6, and 7 for a round of data transmission.
  1. After airtime scheduling is enabled, the device collects the channel occupation time of the four users. The channel occupation times of User1, User2, User3, and User4 are 3, 4, 6, and 7 respectively. User1 has occupied the channel for the shortest time. Therefore, the device allocates channel resources to User1 first.
  2. It takes a time of 2 for User1 to finish a round of data transmission. The channel occupation time of User1 increases to 5. The channel occupation times of User1, User2, User3, and User4 become 5, 4, 6, and 7 respectively. User2 occupies the channel for the shortest time. Therefore, the data of User2 is preferentially transmitted.
  3. It takes a time of 4 for User2 to finish a round of data transmission. The channel occupation time of User2 increases to 8. The channel occupation times of User1, User2, User3, and User4 become 5, 8, 6, and 7 respectively. User1 occupies the channel for the shortest time. Therefore, the device preferentially schedules channel resources for User1.
  4. If User1 finishes all data transmissions, the device collects the channel occupation time of only the remaining users. The channel occupation times of User2, User3, and User4 are 8, 6, and 7 respectively. User3 has occupied the channel for the shortest time. Therefore, the data of User3 is preferentially transmitted.
  5. It takes a time of 6 for User3 to finish a round of data transmission. The channel occupation time of User3 increases to 12. Channel occupation time of User2, User3, and User4 becomes 8, 12, and 7 respectively. User4 occupies the channel for the shortest time. Therefore, channel resources are preferentially scheduled for User4.
The device preferentially schedules channel resources for the user that occupies the channel for the shortest time. In this way, each user is assigned equal time to occupy the channel, ensuring fairness in channel usage.

To prevent users that accessed the network earlier from being unable to occupy the wireless channel to transmit data, the device periodically clears the wireless channel occupation time of all users. In this way, all access users have the same occupation weight.
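
The scheduling decision in the example above can be sketched as follows, using the occupation times from Figure 27-15 and abstract time units (the data structures are illustrative only):

  import heapq

  # Sketch of airtime scheduling: always serve the user with the least
  # accumulated channel occupation time (abstract time units).
  def airtime_schedule(occupied, tx_rounds):
      """occupied: {user: accumulated time}; tx_rounds: {user: [round durations]}."""
      heap = [(t, user) for user, t in occupied.items()]
      heapq.heapify(heap)
      order = []
      while heap:
          t, user = heapq.heappop(heap)
          if not tx_rounds[user]:            # no data left: stop scheduling this user
              continue
          duration = tx_rounds[user].pop(0)
          order.append(user)
          heapq.heappush(heap, (t + duration, user))
      return order

  occupied = {"User1": 3, "User2": 4, "User3": 6, "User4": 7}
  tx_rounds = {"User1": [2], "User2": [4], "User3": [6], "User4": [7]}
  print(airtime_schedule(occupied, tx_rounds))
  # ['User1', 'User2', 'User3', 'User4'], matching the order in the example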

After WMM is enabled on the device and terminals, user packets are scheduled based on different types (service types include VI, VO, BE, and BK). For example, voice packets are only scheduled with other voice packets, and video packets with other video packets.
NOTE:
If the packets of multiple users are of different types, airtime scheduling does not take effect. For example, if one user transmits voice packets and the other transmits video packets, airtime scheduling is not performed.

ACL-based Simplified Traffic Policy Configuration

A device configured with an ACL-based simplified traffic policy matches packet characteristics against ACLs and provides the same QoS treatment for all packets that match the same ACL rule, implementing differentiated services.

To control traffic entering a network, configure an ACL to match information such as the source IP address, fragment flag, destination IP address, source port number, and source MAC address. Then, configure an ACL-based simplified traffic policy so that the device can filter the packets that match ACL rules or re-mark their priorities.

Compared with a traffic policy based on traffic classifiers, an ACL-based simplified traffic policy is easy to configure because there is no need to separately configure a traffic classifier, traffic behavior, or traffic policy. However, an ACL-based simplified traffic policy supports fewer matching rules than a traffic policy based on traffic classifiers.
