Feature Description - QoS 01

NE05E and NE08E V300R003C10SPC500

HQoS

Hierarchical Quality of Service (HQoS) is a technology that uses a queue scheduling mechanism to guarantee the bandwidth of multiple services of multiple users in the DiffServ model.

Traditional QoS performs single-level traffic scheduling. The device can distinguish services on an interface but cannot identify users. Packets of the same priority are placed into the same queue on an interface and compete for the same queue resources.

HQoS uses multi-level scheduling to distinguish user-specific or service-specific traffic and provide differentiated bandwidth management.

Basic Scheduling Model

The scheduling model consists of two components: scheduler and scheduled object.



  • Scheduler: schedules multiple queues. The scheduler performs a specific scheduling algorithm to determine the order in which packets are forwarded. The scheduling algorithm can be Strict Priority (SP) or weight-based scheduling. The weight-based scheduling algorithms include Deficit Round Robin (DRR), Weighted Round Robin (WRR), Weighted Deficit Round Robin (WDRR), and Weighted Fair Queuing (WFQ). For details about scheduling algorithms, see Queues and Congestion Management.

    The scheduler performs one action: selecting a queue. After a queue is selected by a scheduler, the packets in the front of the queue are forwarded.

  • Scheduled object: refers to a queue. Packets are sequenced in queues in the buffer.

    A queue has three configurable attributes:

    (1) Priority or weight

    (2) Peak information rate (PIR)

    (3) Drop policy, including tail drop and Weighted Random Early Detection (WRED)

    Packets may enter or leave a queue:

    (1) Entering a queue: The device determines whether to drop a received packet based on the drop policy. If the packet is not dropped, it enters the tail of the queue.

    (2) Leaving a queue: After a queue is selected by a scheduler, the packets in the front of the queue are shaped and then forwarded out of the queue.
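The enqueue and dequeue behavior above can be sketched in a few lines of Python. This is a minimal illustration only: the class names (`Queue`, `SpScheduler`) are invented for the sketch, and it uses the simplest combination of drop policy and algorithm (tail drop plus SP), not the full set of options described here.

```python
from collections import deque

class Queue:
    """A scheduled object: a FIFO with a priority and a tail-drop limit."""
    def __init__(self, priority, max_depth):
        self.priority = priority      # higher value = served first under SP
        self.max_depth = max_depth    # buffer limit enforced by tail drop
        self.packets = deque()

    def enqueue(self, pkt):
        # Drop policy (tail drop): discard the packet when the buffer is full;
        # otherwise the packet enters the tail of the queue.
        if len(self.packets) >= self.max_depth:
            return False              # packet dropped
        self.packets.append(pkt)
        return True

class SpScheduler:
    """A scheduler: Strict Priority (SP) selection among its queues."""
    def __init__(self, queues):
        self.queues = queues

    def dequeue(self):
        # The scheduler's one action: select a queue. The packet at the
        # front of the selected queue is then forwarded.
        for q in sorted(self.queues, key=lambda q: q.priority, reverse=True):
            if q.packets:
                return q.packets.popleft()
        return None                   # all queues empty

hi = Queue(priority=7, max_depth=2)
lo = Queue(priority=0, max_depth=2)
sched = SpScheduler([hi, lo])
lo.enqueue("bulk-1"); hi.enqueue("voice-1"); lo.enqueue("bulk-2")
assert lo.enqueue("bulk-3") is False  # tail drop: low-priority buffer full
print(sched.dequeue())                # "voice-1": SP serves the high queue first
```

A real device would substitute WRED for tail drop and WRR/DRR/WFQ for SP where configured; the scheduler/queue split stays the same.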

Hierarchical Scheduling Model

HQoS uses a tree-shaped hierarchical scheduling model. As shown in Figure 7-14, the hierarchical scheduling model consists of three types of nodes:

  • Leaf node: is located at the bottom layer and identifies a queue. The leaf node is a scheduled object and can only be scheduled.
  • Transit node: is located at the middle layer and functions as both a scheduler and a scheduled object. When a transit node functions as a scheduled object, it can be considered a virtual queue, which is only a layer in the scheduling architecture rather than an actual queue that consumes buffers.
  • Root node: is located at the top layer and identifies the top-level scheduler. The root node is only a scheduler but not a scheduled object. The PIR is delivered to the root node to restrict the output bandwidth.
Figure 7-14 Hierarchical scheduling model

A scheduler can schedule multiple queues or schedulers. The scheduler can be considered a parent node, and the scheduled queue or scheduler can be considered a child node. The parent node is the traffic aggregation point of multiple child nodes.

Traffic classification rules and control parameters can be specified on each node to classify and control traffic. Traffic classification rules based on different user or service requirements can be configured on nodes at different layers. In addition, different control actions can be performed for traffic on different nodes. This ensures multi-layer/user/service traffic management.
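The parent/child relationship can be sketched as a small tree in Python. This is a toy model: greatest-weight selection stands in for the real SP/WRR/DRR/WFQ algorithms, and all class and queue names are illustrative.

```python
class Leaf:
    """Leaf node: an actual queue that buffers packets; can only be scheduled."""
    def __init__(self, name, weight, packets):
        self.name, self.weight = name, weight
        self.packets = list(packets)
    def empty(self):
        return not self.packets
    def pop(self):
        return self.packets.pop(0)

class Node:
    """Transit or root node: a scheduler over its children. A transit node
    is itself a virtual queue seen by its parent (no buffer of its own)."""
    def __init__(self, name, weight, children):
        self.name, self.weight = name, weight
        self.children = children
    def empty(self):
        return all(c.empty() for c in self.children)
    def pop(self):
        # Pick the non-empty child with the greatest weight. The recursion
        # walks root -> transit -> leaf, so selecting one packet at the
        # root traverses every scheduling layer.
        busy = [c for c in self.children if not c.empty()]
        return max(busy, key=lambda c: c.weight).pop()

# Two "users" (transit nodes), each with service-specific leaf queues.
root = Node("port", 1, [
    Node("user-A", 2, [Leaf("A-voice", 5, ["a1"]), Leaf("A-data", 1, ["a2"])]),
    Node("user-B", 1, [Leaf("B-voice", 5, ["b1"])]),
])
print(root.pop())   # "a1": user-A wins at the transit layer, A-voice at the leaf
```

Each node is the traffic aggregation point of its children, which is what lets per-user and per-service rules be attached at different layers of the same tree.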

Scheduling Architecture of NEs

Figure 7-15 shows class queues (CQs) and port schedulers. NEs not configured with HQoS have only CQs and port schedulers.

Figure 7-15 Scheduling architecture without HQoS

NOTE:

For the NE, a CQ is a virtual queue, which has no actual buffer units and cannot temporarily store data. A CQ is used only for final scheduling. In application, a CQ corresponds to a physical or logical interface.

A CQ has the following configurable attributes:

  • Queue priority and weight
  • PIR
  • Drop policy, including tail drop and WRED

As shown in Figure 7-16, when HQoS is configured, a router specifies a buffer for flow queues that require hierarchical scheduling and performs a round of multi-layer scheduling for these flow queues. After that, the router puts HQoS traffic and non-HQoS traffic together into the CQ for unified scheduling.

Figure 7-16 HQoS scheduling

  • Leaf node: flow queue (FQ)

    A leaf node is used to buffer data flows of one priority for a user. Data flows of each user can be classified into one to eight priorities. Each user can use one to eight FQs. Different users cannot share FQs. A traffic shaping value can be configured for each FQ to restrict the maximum bandwidth.

    FQs and CQs share the following configurable attributes:

    • Queue priority and weight
    • PIR
    • Drop policy, including tail drop and WRED
  • Transit node: subscriber queue (SQ)

    An SQ indicates a user (for example, a VLAN, LSP, or PVC). You can configure the committed information rate (CIR) and PIR for each SQ.

    Each SQ supports eight FQ priorities, and one to eight FQs can be configured. If an FQ is idle, the other FQs can consume its bandwidth, but the bandwidth used by any FQ cannot exceed that FQ's PIR.

    An SQ functions as both a scheduler and a virtual queue to be scheduled.

    • As a scheduler: schedules multiple FQs. Priority queuing (PQ), Weighted Fair Queuing (WFQ), or low priority queuing (LPQ) applies to FQs. The FQs with the service classes EF, CS6, and CS7 use SP scheduling by default. The FQs with the service classes BE, AF1, AF2, AF3, and AF4 use WFQ scheduling by default, with the weights 10:10:10:15:15.
    • As a virtual queue to be scheduled: is allocated two attributes, the CIR and PIR. Using metering, the SQ traffic is divided into two parts: the part within the CIR and the burst part within the PIR. The former is the bandwidth users have paid for; the latter is also called the excess information rate (EIR), calculated as EIR = PIR - CIR. The EIR is the maximum burst rate beyond the CIR, so the total SQ rate can reach a maximum of the PIR.
  • Root node: group queue (GQ)

      To simplify operation, you can define multiple users as a GQ, which is similar to a BGP peer group that comprises multiple BGP peers. For example, all users that require the same bandwidth or all premium users can be configured as a GQ.

      A GQ can be bound to multiple SQs, but an SQ can be bound only to one GQ.

      A GQ schedules SQs. DRR is used to schedule the traffic within the CIR among SQs first. If any bandwidth remains after this round, DRR is used to schedule the EIR traffic. The CIR bandwidth is provided preferentially, and burst traffic exceeding the PIR is dropped. Therefore, if a GQ obtains bandwidth equal to its PIR, each SQ in the GQ can obtain at least its CIR and at most its PIR.

      In addition, a GQ, as a root node, can be configured with a PIR attribute to restrict the total rate of the GQ's member users. All users in the GQ are restricted by this PIR. The PIR of a GQ limits the rate but does not provide a bandwidth guarantee. It is recommended that the PIR of a GQ be greater than the sum of the CIRs of all its member SQs; otherwise, some users (SQs) cannot obtain sufficient bandwidth.

    The following example illustrates the relationship between an FQ, SQ, and GQ.

    In this example, 20 residential users live in a building. Each residential user purchases a bandwidth of 20 Mbit/s. To guarantee this bandwidth, an SQ with both the CIR and PIR set to 20 Mbit/s is created for each residential user. The PIR also restricts the maximum bandwidth of each residential user. As subscribers take up VoIP and IPTV services in addition to HSI services, the carrier promotes a new bandwidth package that adds the value-added services (VoIP and IPTV) while keeping the bandwidth at 20 Mbit/s. Each residential user can then use VoIP, IPTV, and HSI services.

    To meet such bandwidth requirements, HQoS is configured as follows:

    • Three FQs are configured for the three services (VoIP, IPTV, and HSI).
    • Altogether 20 SQs are configured for 20 residential users. The CIR and PIR are configured for each SQ.
    • One GQ is configured for the whole building and corresponds to the 20 residential users. The PIR of the GQ is the total bandwidth of the 20 residential users. Each residential user uses services individually, but their total bandwidth is restricted by the PIR of the GQ.

    The hierarchy model is as follows:

    • FQs are used to distinguish services of a user and control bandwidth allocation among services.
    • SQs are used to distinguish users and restrict the bandwidth of each user.
    • GQs are used to distinguish user groups and control the traffic rate of twenty SQs.

    FQs enable bandwidth allocation among services. SQs distinguish each user. GQs enable the CIR of each user to be guaranteed and all member users to share the bandwidth.

    Bandwidth exceeding the CIR is not guaranteed because users have not paid for it. The CIR must be guaranteed because users have purchased it. As shown in Figure 7-16, the CIR of each user is marked, and bandwidth is preferentially allocated to guarantee the CIR. Therefore, CIR bandwidth is not preempted by burst traffic exceeding the committed rates.
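The CIR/PIR split described above (EIR = PIR - CIR) can be illustrated with a small, hypothetical helper that divides an SQ's measured rate into its committed, burst, and dropped parts; `split_sq_traffic` is invented for this sketch, not a device function.

```python
def split_sq_traffic(rate_mbps, cir, pir):
    """Split an SQ's measured rate (Mbit/s) into the committed part
    (within the CIR), the burst part (within the EIR = PIR - CIR),
    and the excess beyond the PIR, which is dropped."""
    committed = min(rate_mbps, cir)
    burst = min(max(rate_mbps - cir, 0), pir - cir)
    dropped = max(rate_mbps - pir, 0)
    return committed, burst, dropped

# A residential user with CIR = PIR = 20 Mbit/s, as in the example above:
print(split_sq_traffic(25, cir=20, pir=20))     # (20, 0, 5): 5 Mbit/s dropped
# An SQ with CIR 100 Mbit/s and PIR 120 Mbit/s:
print(split_sq_traffic(110, cir=100, pir=120))  # (100, 10, 0): 10 Mbit/s of EIR burst
```

The committed part is what the GQ's first DRR round guarantees; the burst part competes in the second (EIR) round; anything beyond the PIR never enters the queue.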

    On NEs, HQoS uses the following architecture to schedule downstream queues.

HQoS Scheduling for Downstream Queues

  • Downstream scheduling

    Figure 7-17 Scheduling architecture for downstream queues

    Downstream eTM scheduling is a five-level scheduling architecture. Downstream scheduling uses FQs instead of CQs. In addition to the scheduling path FQ -> SQ -> GQ, a parent GQ scheduler, also called a virtual interface (VI) scheduler, is implemented.

    NOTE:
    The VI is only the name of a scheduler, not a real virtual interface. In actual applications, a VI corresponds to a sub-interface or a physical interface; it refers to different objects in different applications.
    Table 7-1 Parameters for downstream scheduling

    FQ
      Queue attributes:
      • Queue priority and weight, which can be configured.
      • PIR, which can be configured. The PIR is not configured by default.
      • Drop policy, which can be configured as WRED. The drop policy is tail drop by default.
      Scheduler attributes: none.

    SQ
      Queue attributes:
      • CIR, which can be configured.
      • PIR, which can be configured.
      Scheduler attributes, which can be configured:
      • PQ and WFQ apply to FQs. By default, PQ applies to EF, CS6, and CS7 services; WFQ applies to BE, AF1, AF2, AF3, and AF4 services.

    GQ
      Queue attributes:
      • PIR, which can be configured. The PIR restricts the bandwidth but does not provide any bandwidth guarantee.
      Scheduler attributes, which cannot be configured:
      • DRR is used to schedule the traffic within the CIR among SQs. If any bandwidth remains after the first round, DRR is used to schedule the EIR traffic. The CIR bandwidth is provided preferentially, and burst traffic exceeding the PIR is dropped.

    Parent GQ/VI
      Queue attributes:
      • PIR, which can be configured. The PIR restricts the bandwidth but does not provide any bandwidth guarantee.
      Scheduler attributes:
      • Different scheduling algorithms, such as DRR and WFQ, are used on different boards.

    Port
      Queue attributes:
      • PIR, which can be configured.
      Scheduler attributes, which can be configured:
      • PQ and WFQ apply to FQs. By default, PQ applies to EF, CS6, and CS7 services; WFQ applies to BE, AF1, AF2, AF3, and AF4 services.

HQoS Applications and Classification

Use the following HQoS applications as examples:

  • No QoS configuration: If QoS is not configured on an outbound interface and the default configuration is used, the scheduling path is 1 port queue -> 1 VI queue -> 1 GQ -> 1 SQ -> 1 FQ. Only one queue is scheduled at each layer, so the scheduling path can be considered a FIFO queue.
  • Distinguishing only service priorities: If an outbound interface is configured only to distinguish service priorities, the scheduling path is 1 port queue -> 1 VI queue -> 1 GQ -> 1 SQ -> 8 FQs. The scheduling path can therefore be considered port -> FQ.
  • Distinguishing service priorities + users: As shown in the following figure, an L3 gateway is connected to an RTN that has three base stations attached. To distinguish the three base stations on GE 1/0/0 of the L3 gateway and services of different priorities from the three base stations, configure the hierarchy architecture as port -> base station -> base station service, corresponding to the scheduling path: port -> 1 VI queue -> 1 GQ -> 3 SQs -> 8 FQs.



  • Distinguishing priorities + users + aggregation devices: As shown in the following figure, an L3 gateway is connected to two RTNs (aggregation devices) that each has three base stations attached. To distinguish the two RTNs on GE 1/0/0 of the L3 gateway, three base stations on each RTN, and services of different priorities on the three base stations, configure the hierarchy architecture as port -> RTN -> base station -> base station services, corresponding to the scheduling path: 1 port -> 1 VI queue -> 2 GQs -> 3 SQs -> 8 FQs.



HQoS is configured using QoS profiles.

  • Profile-based HQoS

    Traffic that enters through different interfaces can be scheduled in the same SQ.

    Profile-based HQoS implements QoS scheduling management for access users by defining various QoS profiles and applying the QoS profiles to interfaces. A QoS profile is a set of QoS parameters (such as the queue bandwidth and flow queues) for a specific user queue.

    Profile-based HQoS supports downstream scheduling.

    As shown in Figure 7-18, the router, as an edge device on an ISP network, accesses a local area network (LAN) through Eth-Trunk 1. The LAN houses 1000 users that have VoIP, IPTV, and common Internet (HSI) services. Eth-Trunk 1.1000 accesses VoIP services, Eth-Trunk 1.2000 accesses IPTV services, and Eth-Trunk 1.3000 accesses other services. The 802.1p value in the outer VLAN tag identifies the service type (802.1p value 5 for VoIP services and 802.1p value 4 for IPTV services). The VID in the inner VLAN tag, which ranges from 1 to 1000, identifies the user. The outer VIDs of Eth-Trunk 1.1000 and Eth-Trunk 1.2000 are 1000 and 2000, respectively. It is required that the total bandwidth of each user be restricted to 120 Mbit/s, the CIR be 100 Mbit/s, and the bandwidth allocated to the VoIP and IPTV services of each user be 60 Mbit/s and 40 Mbit/s, respectively. Other services are not provided with any bandwidth guarantee.

    Figure 7-18 Profile-based HQoS

    You can configure profile-based HQoS to meet the preceding requirements. Only traffic with the same inner VLAN ID enters the same SQ; therefore, 1000 SQs are created. Traffic with the same inner VLAN ID but different outer VLAN IDs enters different FQs in the same SQ.
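The mapping from VLAN tags to queues in this example can be sketched as follows. `classify` is a hypothetical helper written for this illustration, not a device API; the priority-to-service map follows the figures above (802.1p 5 = VoIP, 4 = IPTV, others HSI).

```python
def classify(inner_vid, outer_8021p):
    """Map a double-tagged frame to its (SQ, FQ) pair: the inner VLAN ID
    identifies the user and therefore the SQ; the 802.1p value in the
    outer tag identifies the service and therefore the flow queue."""
    sq = inner_vid                          # one SQ per user, VIDs 1..1000
    fq = {5: "VoIP", 4: "IPTV"}.get(outer_8021p, "HSI")
    return sq, fq

print(classify(42, 5))   # (42, 'VoIP'): user 42's VoIP flow queue
print(classify(42, 0))   # (42, 'HSI'): same SQ, different FQ
```

Because the SQ is keyed only on the inner VID, all three services of one user aggregate under one SQ, which is what lets the per-user 120 Mbit/s PIR and 100 Mbit/s CIR apply to the user's combined traffic.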

HQoS Implementation on Low-Speed Link Interfaces

Interface-based HQoS allows an interface to function as a user. That is, all packets on an interface belong to one SQ.

Interface-based HQoS supports upstream and downstream scheduling.

Updated: 2019-01-14

Document ID: EDOC1100058936
