Overall QoS Process
QoS Implementation During Packet Forwarding
As shown in Figure 5-1, the QoS implementation during packet forwarding is as follows (a conceptual sketch of the per-packet logic appears after this list):
- On the upstream PFE:
- The upstream PFE initializes the internal priority of packets (service class as BE and color as green).
- The upstream PFE implements BA traffic classification based on the inbound interface configuration. BA traffic classification requires the upstream PFE to obtain the priority field value (802.1p, DSCP, or MPLS EXP) for traffic classification and modify the internal priority of packets (service class and color).
- The upstream PFE implements MF traffic classification based on the inbound interface configuration. MF traffic classification requires the upstream PFE to obtain information about multiple fields for traffic classification. After that, the upstream PFE implements the related behaviors (such as filter, re-mark, or redirect). If the behavior is re-mark, the upstream PFE modifies the internal priority of packets (service class and color).
- The upstream PFE searches the routing table for an outbound interface of a packet based on its destination IP address.
- The upstream PFE implements CAR for packets based on the inbound interface configuration or MF traffic classification profile. If both interface-based CAR and MF traffic classification-based CAR are configured, MF traffic classification-based CAR takes effect. In a CAR operation, a pass, drop, or pass+re-mark behavior can be performed for incoming traffic. If the behavior is pass+re-mark, the upstream PFE modifies the internal priority of packets (service class and color).
- Then, packets are sent to the upstream TM.
- On the upstream TM:
- The upstream TM processes flow queues based on the inbound interface configuration or MF traffic classification configuration. If both interface-based user-queue and MF traffic classification-based user-queue are configured, MF traffic classification-based user-queue takes effect. Packets are put into different flow queues based on the service class, and WRED drop policy is implemented for flow queues based on the color if needed.
- The upstream TM processes VOQs. VOQs are classified based on the destination board. The information about the destination board is obtained based on the outbound interface of packets. Then, packets are put into different VOQs based on the service class.
- After being scheduled in VOQs, packets are sent to the switched network and then forwarded to the destination board on which the outbound interface is located.
- Then, packets are sent to the downstream TM.
- On the downstream TM:
- (This step is skipped when the downstream PIC is equipped with an eTM subcard) The downstream TM processes flow queues based on the user-queue configuration on the outbound interface. Packets are put into different flow queues based on the service class, and WRED drop policy is implemented for flow queues based on the color if needed.
- (This step is skipped when the downstream PIC is equipped with an eTM subcard) The downstream TM processes port queues (CQs). Packets are put into different CQs based on the service class, and WRED drop policy is implemented for CQs based on the color if WRED is configured.
- Then, packets are sent to the downstream PFE.
- On the downstream PFE:
- The downstream PFE implements MF traffic classification based on the outbound interface configuration. MF traffic classification requires the downstream PFE to obtain information about multiple fields for traffic classification. Behaviors, such as filter and re-mark, are performed based on traffic classification results. If the behavior is re-mark, the downstream PFE modifies the internal priority of packets (service class and color).
- The downstream PFE implements CAR for packets based on the outbound interface configuration or MF traffic classification configuration. If both interface-based CAR and MF traffic classification-based CAR are configured, MF traffic classification-based CAR takes effect. In a CAR operation, a pass, drop, or pass+re-mark behavior can be performed for the traffic. If the behavior is pass+re-mark, the downstream PFE modifies the internal priority of packets (service class and color).
- Based on the service class and color, the priorities of outgoing packets are set in newly added packet headers and modified in existing packet headers.
- Then, packets are sent to the downstream PIC.
- When the PIC is not equipped with an eTM subcard, the PIC adds the link-layer CRC to the packets before sending them to the physical link.
- When the PIC is equipped with an eTM subcard, the PIC adds the link-layer CRC to the packets and performs a round of flow queue scheduling before sending the packets to the physical link. Downstream flow queues are processed based on the user-queue configuration on the outbound interface. Packets are put into different FQs based on the service class, and WRED drop policy is implemented for FQs based on the color if WRED is configured. When the PIC is equipped with an eTM subcard, downstream packets are not scheduled on the downstream TM.
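The decision flow above can be condensed into a short conceptual sketch. The following Python code is illustrative only, not the device implementation: the Packet fields, the ba_classify rule, the Car class, and the queue-depth stand-in for WRED are all hypothetical, but the order of operations (initialize the internal priority, BA classification, CAR, enqueue by service class, drop by color) follows the steps above.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dscp: int
    length: int
    service_class: str = "be"   # internal priority initialized by the upstream PFE
    color: str = "green"

def ba_classify(pkt: Packet) -> None:
    """Hypothetical BA rule: map DSCP EF (46) to service class ef, color green."""
    if pkt.dscp == 46:
        pkt.service_class, pkt.color = "ef", "green"

class Car:
    """Toy single-bucket CAR: pass while tokens remain, then drop."""
    def __init__(self, cir_bytes: int):
        self.tokens = cir_bytes
    def measure(self, pkt: Packet) -> str:
        if self.tokens >= pkt.length:
            self.tokens -= pkt.length
            return "pass"
        return "drop"           # a real CAR can also pass+re-mark

def tm_enqueue(pkt: Packet, queues: dict, depth_limit: int) -> None:
    """Queue selection by service class; color-based drop as a WRED stand-in."""
    q = queues.setdefault(pkt.service_class, [])
    if len(q) >= depth_limit and pkt.color != "green":
        return                  # non-green packets are dropped first
    q.append(pkt)

queues: dict = {}
car = Car(cir_bytes=3000)
for pkt in (Packet(dscp=46, length=1500),
            Packet(dscp=0, length=1500),
            Packet(dscp=0, length=1500)):
    ba_classify(pkt)                       # BA classification on the PFE
    if car.measure(pkt) == "pass":         # CAR on the PFE
        tm_enqueue(pkt, queues, depth_limit=10)
print({cls: len(q) for cls, q in queues.items()})   # {'ef': 1, 'be': 1}
```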
Packet Field Changes During Packet Forwarding
When CAR and traffic shaping are performed for packets, the bandwidth calculation is closely related to the packet length. Therefore, the packet field changes that occur during packet forwarding require attention.
Packet field changes in some common scenarios are described in the following part.
CAR calculates the bandwidth of packets based on the entire packet. For example, for an Ethernet frame, CAR counts the length of the frame header and CRC field in the bandwidth, but not the preamble, SFD, or inter-frame gap (IFG). The following figure illustrates a complete Ethernet frame (bytes).
The bandwidth calculation covers the CRC field but not the IFG field.
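As a small worked example, assuming the standard Ethernet overhead sizes (7-byte preamble, 1-byte SFD, 14-byte header, 4-byte CRC, 12-byte IFG), the bytes CAR counts versus the bytes on the wire can be computed as follows; the function names are illustrative:

```python
# Standard Ethernet overhead sizes in bytes.
PREAMBLE, SFD, L2_HEADER, CRC, IFG = 7, 1, 14, 4, 12

def car_counted_bytes(payload: int) -> int:
    # CAR counts the frame header, payload, and CRC.
    return L2_HEADER + payload + CRC

def wire_bytes(payload: int) -> int:
    # The wire additionally carries the preamble, SFD, and IFG.
    return PREAMBLE + SFD + car_counted_bytes(payload) + IFG

# A 1500-byte payload: CAR sees 1518 bytes; the wire carries 1538.
print(car_counted_bytes(1500), wire_bytes(1500))   # 1518 1538
```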
- The upstream PFE adds a Frame Header, which is removed by the downstream PFE. The Frame Header is used to transfer information between chips. The NPtoTM and TMtoNP fields are used to transfer information between the NP and the TM.
- When the PIC is not equipped with an eTM subcard, the length of a packet scheduled on the downstream TM differs from that of the packet sent to the link. To perform traffic shaping accurately, you must run the network-header-length command to compensate the packet length by a specific value.
- When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the IFG, L2 header (14 bytes), two MPLS labels, or CRC fields. A +26-byte compensation (covering the L2 header, two MPLS labels, and CRC field, but not the IFG field) or a +46-byte compensation (also covering the IFG field) can be performed for the packet, as worked out in the sketch after this list.
- When the PIC is equipped with an eTM subcard, no packet length compensation is required.
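The compensation arithmetic used in the scenarios below can be expressed with a small hypothetical helper. Note one assumption: the +46 versus +26 options in the text differ by 20 bytes, which presumably covers the 12-byte IFG plus the 8-byte preamble/SFD:

```python
# Assumed: the "+IFG" compensation variants add 20 bytes
# (12-byte IFG plus 8-byte preamble/SFD).
IFG_OVERHEAD = 20

def compensation(missing_header_bytes: int, include_ifg: bool = False) -> int:
    """Bytes to add so the TM-scheduled length matches the on-link length."""
    return missing_header_bytes + (IFG_OVERHEAD if include_ifg else 0)

# Network-side MPLS case above: L2 header (14) + two MPLS labels (2 x 4) + CRC (4).
print(compensation(14 + 2 * 4 + 4))                    # 26
print(compensation(14 + 2 * 4 + 4, include_ifg=True))  # 46
```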
On the downstream interface on the user side:
When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the IFG, L2 header (14 bytes), VLAN tag, or CRC fields. A +22-byte compensation (covering the L2 header, VLAN tag, and CRC field, but not the IFG field) or a +42-byte compensation (also covering the IFG field) can be performed for the packet.
For more details, see Incoming packet in sub-interface accessing L3VPN networking.
On the downstream interface on the network side:
- When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the IFG, L2 header (14 bytes), two MPLS labels, or CRC fields. A +26-byte compensation (covering the L2 header, two MPLS labels, and CRC field, but not the IFG field) or a +46-byte compensation (also covering the IFG field) can be performed for the packet.
For more details, see Incoming packet in sub-interface accessing L3VPN networking.
On the downstream interface on the network side:
When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the IFG, VLAN tag, or CRC fields. A +8-byte compensation (covering the VLAN tag (4 bytes) and CRC (4 bytes) fields, but not the IFG field) or a +28-byte compensation (also covering the IFG field) can be performed for the packet.
For more details, see Incoming packet in sub-interface accessing L3VPN networking.
In Layer 2 Ethernet forwarding scenarios, a data frame can be a VLAN-tagged, QinQ-tagged, or untagged frame. Use a VLAN-tagged frame as an example. In Layer 2 forwarding, both the Layer 2 Ethernet frame header and the VLAN tag of a packet are forwarded to the downstream TM, and only the CRC field is removed. When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the CRC field. A +4-byte compensation (not covering the IFG field) or a +24-byte compensation (covering the IFG field) can be performed for the packet, as shown in the usage example below.
For more details, see Incoming packet in sub-interface accessing L3VPN networking.
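With the hypothetical compensation() helper from the earlier sketch, this Layer 2 forwarding case works out as follows:

```python
# Layer 2 Ethernet forwarding: only the 4-byte CRC is missing on the TM.
print(compensation(4))                    # 4
print(compensation(4, include_ifg=True))  # 24
```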
On the downstream interface on the user side:
When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the IFG, L2 header, VLAN tag, or CRC fields. A +22-byte compensation (covering the L2 header, VLAN tag, and CRC field, but not the IFG field) or a +42-byte compensation (also covering the IFG field) can be performed for the packet.
For more details, see Incoming packet in sub-interface accessing L3VPN networking.
On the downstream interface on the user side:
When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the IFG, L2 header, two VLAN tags, or CRC fields. A +26-byte compensation (covering the L2 header, two VLAN tags, and CRC field, but not the IFG field) or a +46-byte compensation (also covering the IFG field) can be performed for the packet.
For more details, see Incoming packet in sub-interface accessing L3VPN networking.
On the downstream interface:
- When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the PPP header. A +8-byte compensation can be performed for the packet.
- When the PIC is equipped with an eTM subcard, no packet length compensation is required.
On the downstream interface on the user side:
When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the IFG, L2 header (14 bytes), two VLAN tags, or CRC fields. A +26-byte compensation (covering the L2 header, two VLAN tags, and CRC field, but not the IFG field) or a +46-byte compensation (also covering the IFG field) can be performed for the packet.
For the , the packet scheduled on the downstream TM does not contain a Frame Header. Therefore, a +26-byte compensation (not including the 20-byte IFG field) or a +46-byte compensation (including the IFG field) can be performed for the packet.
On the downstream interface on the user side:
- When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the PPP header (8 bytes). A +8-byte compensation can be performed for the packet.
- When the PIC is equipped with an eTM subcard, no packet length compensation is required.
In VLAN mapping scenarios, both the Layer 2 Ethernet frame header and the VLAN tag of a packet are forwarded to the downstream TM, and only the CRC field is removed. The VLAN tag value is replaced with a new VLAN tag value.
When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the CRC field. A +4-byte compensation (not covering the IFG field) or a +24-byte compensation (covering the IFG field) can be performed for the packet.
For more details, see Incoming packet in sub-interface accessing L3VPN networking.
In VLL heterogeneous interworking scenarios, both the L2 header and the MPLS label of a packet are removed on the upstream TM.
- When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the PPP header. A +8-byte compensation can be performed for the packet.
- When the PIC is equipped with an eTM subcard, no packet length compensation is required.
In VLL heterogeneous interworking scenarios, both the L2 header and the MPLS label of a packet are removed on the upstream TM.
When the downstream TM implements traffic shaping, the TMtoNP and Frame Header fields of the packets are not counted. Therefore, compared with the packet sent to the link, the packet scheduled on the downstream TM does not contain the PPP header. A +8-byte compensation can be performed for the packet.
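The POS/PPP cases reduce to the same arithmetic with the hypothetical helper; POS links carry no Ethernet IFG, so there is no +IFG variant:

```python
# VLL over POS: the 8-byte PPP header is missing on the downstream TM.
print(compensation(8))   # 8
```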
Supplement to Packet Length Compensation
The network-header-length command used to configure packet length compensation is configured in a service template. Certain service templates have been predefined on the NE20E. When you use these predefined service templates, you do not need to calculate the required compensation length.
To manually configure the compensation length, run the display interface command to view statistics on the outbound interface and calculate the average length of a packet sent to the link (L1), and run the display port-queue command to view queue statistics and calculate the average length of a packet scheduled on the downstream TM (L2). The compensation length is obtained using this formula: compensation length = L1 - L2. For example:
<HUAWEI> display interface gigabitethernet 0/1/0
……
Statistics last cleared: 2017-06-08 11:25:55
Last 300 seconds input rate: 13848728 bits/sec, 10856 packets/sec
Last 300 seconds output rate: 13183454 bits/sec, 9111 packets/sec
Input peak rate 14347000 bits/sec, Record time: 2017-06-08 11:27:31
Output peak rate 13271131 bits/sec, Record time: 2017-06-08 11:30:05
Input: 1304984264 bytes, 9341140 packets
Output: 1256201849 bytes, 7740964 packets
……
<HUAWEI> display port-queue statistics interface gigabitethernet 0/1/0 outbound
GigabitEthernet0/1/0 outbound traffic statistics:
[be]
Current usage percentage of queue: 0
Total pass: 411,963 packets, 335,927,398 bytes
……
[af1]
Current usage percentage of queue: 0
Total pass: 172,616 packets, 141,875,765 bytes
……
[af2]
Current usage percentage of queue: 0
Total pass: 54,516 packets, 45,592,293 bytes
……
[af3]
Current usage percentage of queue: 0
Total pass: 53,650 packets, 44,916,566 bytes
……
[af4]
Current usage percentage of queue: 0
Total pass: 53,650 packets, 44,915,912 bytes
……
[ef]
Current usage percentage of queue: 0
Total pass: 54,516 packets, 45,598,519 bytes
……
[cs6]
Current usage percentage of queue: 0
Total pass: 63,288 packets, 47,061,713 bytes
……
[cs7]
Current usage percentage of queue: 0
Total pass: 6,895,327 packets, 551,385,377 bytes
……
In the preceding information, L1 = 1256201849 bytes/7740964 packets = 162 bytes/packet.
L2 = sum of forwarded bytes in the eight queues/sum of forwarded packets in the eight queues = 1257273543 bytes/7759526 packets = 162 bytes/packet.
Therefore, the compensation value can be calculated using the formula: compensation value = L1 - L2 = 0.
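The same result can be reproduced programmatically; the figures below are copied from the command output above:

```python
# From display interface: outbound bytes and packets.
out_bytes, out_packets = 1256201849, 7740964

# From display port-queue: per-queue forwarded bytes and packets
# (be, af1, af2, af3, af4, ef, cs6, cs7).
queue_bytes = [335927398, 141875765, 45592293, 44916566,
               44915912, 45598519, 47061713, 551385377]
queue_packets = [411963, 172616, 54516, 53650,
                 53650, 54516, 63288, 6895327]

L1 = out_bytes // out_packets                 # 162 bytes/packet on the link
L2 = sum(queue_bytes) // sum(queue_packets)   # 162 bytes/packet on the TM
print(L1, L2, L1 - L2)                        # 162 162 0
```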
Template Name | Conversion Type |
---|---|
bridge-outbound | Bridge packet conversion in the outbound direction of the tunnel. To be specific, this profile applies to the scenario where traffic shaping is implemented on the outbound interface in Layer 2 Ethernet forwarding scenarios. For details about the scenarios and compensation values, see Packet in Layer 2 Ethernet forwarding scenarios. |
ip-outbound | IP packet conversion or IP-to-802.1Q packet conversion on the outbound interface. To be specific, this profile applies to the scenario where traffic shaping is implemented on an 802.1Q outbound interface of an egress PE in IP forwarding or L3VPN/GRE scenarios. For details about the scenarios and compensation values, see IP-to-802.1Q packet conversion on the outbound interface in IP forwarding scenarios. |
ip-outbound1 | IP-to-QinQ packet conversion on the outbound interface. To be specific, this profile applies to the scenario where traffic shaping is implemented on a QinQ outbound interface of an egress PE in IP forwarding or L3VPN/GRE scenarios. For details about the scenarios and compensation values, see IP-to-QinQ packet conversion on the outbound interface in IP forwarding scenarios. |
ip-outbound2 | IP packet conversion on the outbound POS interface. To be specific, this profile applies to the scenario where traffic shaping is implemented on a POS outbound interface of an egress PE in IP forwarding or L3VPN/GRE scenarios. For details about the scenarios and compensation values, see Outgoing IP packet on the POS interface in IP forwarding scenarios. |
l3vpn-outbound1 | L3VPN-to-QinQ packet conversion on the outbound interface. To be specific, this profile applies to the scenario where traffic shaping is implemented on a QinQ outbound interface of an egress PE in L3VPN scenarios. For details about the scenarios and compensation values, see Outgoing L3VPN packet on the user side of the PE in QinQ interface accessing L3VPN networking. |
l3vpn-outbound2 | L3VPN packet conversion on the outbound POS interface. To be specific, this profile applies to the scenario where traffic shaping is implemented on a POS outbound interface of an egress PE in L3VPN scenarios. For details about the scenarios and compensation values, see Outgoing L3VPN packet on the user side of the PE in POS interface accessing L3VPN networking. |
pbt-outbound | PBT packet conversion on the outbound interface. This profile is reserved for future use. |
vlan-mapping-outbound | VLAN mapping packet conversion on the outbound interface. To be specific, this profile applies to the scenario where traffic shaping is implemented for outgoing packets in VLAN mapping scenarios. For details about the scenarios and compensation values, see Outgoing packet in VLAN mapping scenarios. |
vll-outbound | VLL packet conversion on heterogeneous medium on the outbound POS interface or VLL-to-QinQ packet conversion in the outbound direction of the tunnel. To be specific, this profile applies to the scenario where traffic shaping is implemented on a POS outbound interface on the AC side of the egress PE in VLL heterogeneous interworking scenarios, or the scenario where traffic shaping is implemented on a QinQ outbound interface on the AC side of the egress PE in common VLL scenarios. For details about the scenarios and compensation values, see Outgoing packet in POS interface accessing VLL heterogeneous interworking scenarios. |
vll-outbound1 | VLL packet conversion on homogeneous medium on the outbound POS interface or VLL-to-Dot1Q packet conversion in the outbound direction of the tunnel. To be specific, this profile applies to the scenario where traffic shaping is implemented on a POS outbound interface on the AC side of the egress PE in VLL homogeneous interworking scenarios, or the scenario where traffic shaping is implemented on an 802.1Q outbound interface on the AC side of the egress PE in common VLL scenarios. For details about the scenarios and compensation values, see Outgoing packet in POS interface accessing VLL homogeneous interworking scenarios. |
vpls-outbound | VPLS-to-802.1Q packet conversion in the outbound direction of the tunnel. To be specific, this profile applies to the scenario where traffic shaping is implemented on an 802.1Q outbound interface on the AC side of the egress PE in VPLS scenarios. For details about the scenarios and compensation values, see Outgoing packet in sub-interface accessing VPLS networking. |
vpls-outbound1 | VPLS-to-QinQ packet conversion in the outbound direction of the tunnel. To be specific, this profile applies to the scenario where traffic shaping is implemented on a QinQ outbound interface on the AC side of the egress PE in VPLS scenarios. For details about the scenarios and compensation values, see Outgoing packet in sub-interface accessing VPLS networking. |