CX11x, CX31x, CX710 (Earlier Than V6.03), and CX91x Series Switch Modules V100R001C10 Configuration Guide 12

This document describes the configuration of the various services supported by the CX11x, CX31x, and CX91x series switch modules. The description covers function configurations and configuration examples.
QoS Overview

QoS Background

Diversified services enrich people's lives but also increase the risk of traffic congestion on the Internet. When congestion occurs, services encounter long delays or even packet loss; as a result, they deteriorate or become unavailable. A solution to traffic congestion on IP networks is therefore urgently needed. The most direct way to prevent congestion is to increase network bandwidth, but this is often not feasible because of high operation and maintenance costs. The most cost-effective approach is to apply a "guarantee" policy to manage traffic congestion.

QoS provides end-to-end service guarantees for differentiated services. It uses network resources efficiently and allows traffic of different types to preempt network resources in order of importance: voice, video, and important data applications are the first to obtain resources on network devices. QoS therefore plays an important role in improving the performance of Internet services.

QoS Service Models

  • Best-Effort

    Best-Effort is the default service model for the Internet and applies to various network applications, such as FTP and email. It is the simplest service model. Without network notification, an application can send any number of packets at any time. The network then makes its best attempt to send the packets but can provide no guarantee of performance in terms of delay and reliability.

    The Best-Effort model applies to services that do not require low delay and high reliability.

  • IntServ

    In the IntServ model, an application uses signaling to request a specific level of service from the network before sending packets. The application first notifies the network of its traffic parameters and sends packets only after receiving confirmation that sufficient resources have been reserved. The network maintains a state for each data flow and performs QoS behaviors based on this state to guarantee application performance. The application must keep its traffic within the range described by its traffic parameters.

    IntServ uses the Resource Reservation Protocol (RSVP) for signaling, which is similar to Multiprotocol Label Switching Traffic Engineering (MPLS TE). RSVP reserves resources such as bandwidth and priority on a known path, and each network element along the path must reserve required resources for data flows requiring QoS guarantee. That is, each network element maintains a soft state for each data flow. A soft state is a temporary state and is periodically updated using RSVP messages. Each network element checks whether sufficient resources can be reserved based on these RSVP messages. The path is available only when all involved network elements can provide sufficient resources.

  • DiffServ

    DiffServ classifies packets on the network into multiple classes for differentiated processing. When traffic congestion occurs, classes with a higher priority are given preference. This function allows packets to be differentiated and to have different packet loss rates, delays, and jitters. Packets of the same class are aggregated and sent as a whole to ensure the same delay, jitter, and packet loss rate.

    In the DiffServ model, edge nodes classify and aggregate traffic. Edge nodes classify packets based on a combination of fields, such as the source and destination addresses of packets, precedence in the ToS field, and protocol type. Edge nodes also re-mark packets with different priorities, which other nodes can identify for resource allocation and traffic control. Therefore, DiffServ is a class-based QoS model: services are applied per traffic class rather than per flow.

    Compared with IntServ, DiffServ requires no signaling. In the DiffServ model, an application does not need to apply for network resources before transmitting packets. Instead, it notifies the network nodes of its QoS requirements by setting QoS parameters in its packets. The network does not need to maintain a state for each data flow; it provides differentiated services based on the QoS parameters carried in the packets. DiffServ takes full advantage of the flexibility and extensibility of IP networks and transforms the information in packets into per-hop behaviors, greatly reducing signaling operations.
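In the DiffServ model, a packet carries its own QoS parameters in the Differentiated Services Code Point (DSCP), the upper 6 bits of the former ToS byte in the IP header. The following is a minimal, language-neutral sketch of that encoding (Python is used here purely for illustration; it is not part of the switch configuration, and the function names are assumptions):

```python
# Sketch: how DiffServ encodes per-hop behavior in the IP header.
# The 6-bit DSCP occupies the upper bits of the former ToS byte;
# the values used below are standard DSCPs (EF = 46, best-effort = 0).

def dscp_from_tos(tos: int) -> int:
    """Extract the 6-bit DSCP from the 8-bit ToS/Traffic Class byte."""
    return tos >> 2

def tos_from_dscp(dscp: int, ecn: int = 0) -> int:
    """Rebuild the ToS byte from a DSCP value and the 2-bit ECN field."""
    return (dscp << 2) | (ecn & 0x3)

# An edge node re-marking voice traffic to EF (46) writes 0xB8 into the byte.
assert tos_from_dscp(46) == 0xB8
assert dscp_from_tos(0xB8) == 46
assert tos_from_dscp(0) == 0
```

Because every core node can read this field directly from the packet, no per-flow state or signaling is required, which is the key difference from IntServ.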

Components in the DiffServ Model

The DiffServ model consists of the following QoS components:

  • Traffic classification and marking

    Traffic classification classifies data packets into different classes or sets different priorities, which can be implemented through traffic classifiers in the MQC configuration. Traffic marking sets different priorities for packets, which can be implemented through priority mapping and priority re-marking.

  • Traffic policing, traffic shaping, and interface-based rate limiting

    Traffic policing and traffic shaping limit the traffic rate. When traffic exceeds the specified rate, traffic policing drops excess traffic, and traffic shaping buffers excess traffic. Interface-based rate limiting is classified into interface-based traffic policing and traffic shaping.

  • Congestion management and congestion avoidance

    Congestion management buffers packets in queues when traffic congestion occurs and determines the forwarding order based on a specific scheduling algorithm. Congestion avoidance monitors network resources. When the network becomes congested, the device drops packets to regulate traffic so that the network does not overload.
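The difference between policing (drop excess traffic) and shaping (buffer excess traffic) described above can be sketched with a token-bucket simulation. This is an illustrative model of the mechanism, not the switch's actual implementation; the function names, the one-packet-per-tick arrival model, and the initial bucket fill are simplifying assumptions:

```python
from collections import deque

def police(packets, rate, burst):
    """Traffic policing: refill the token bucket by `rate` bytes per tick
    and DROP any packet that does not fit in the bucket."""
    tokens, passed, dropped = burst, [], []
    for size in packets:                  # one packet arrival per tick
        tokens = min(burst, tokens + rate)
        if size <= tokens:
            tokens -= size
            passed.append(size)
        else:
            dropped.append(size)
    return passed, dropped

def shape(packets, rate, burst):
    """Traffic shaping: same bucket, but excess packets WAIT in a buffer
    and are sent on a later tick instead of being dropped.
    Assumes every packet fits in the bucket (size <= burst)."""
    queue, sent, tokens = deque(packets), [], burst
    while queue:
        tokens = min(burst, tokens + rate)
        while queue and queue[0] <= tokens:
            size = queue.popleft()
            tokens -= size
            sent.append(size)
    return sent

# Four 100-byte packets against a rate of 50 bytes/tick and bucket depth 100:
# the policer drops every other packet; the shaper delays but keeps them all.
assert police([100, 100, 100, 100], rate=50, burst=100) == ([100, 100], [100, 100])
assert shape([100, 100, 100, 100], rate=50, burst=100) == [100, 100, 100, 100]
```

The same bucket parameters thus yield the same long-term rate in both cases; the policies differ only in what happens to nonconforming traffic, which is why shaping smooths bursts at the cost of delay while policing preserves delay at the cost of loss.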

Traffic classification and marking are the basis for implementing differentiated services. Traffic policing, traffic shaping, interface-based rate limiting, congestion management, and congestion avoidance control network traffic and allocate resources.
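Congestion avoidance is commonly implemented with Weighted Random Early Detection (WRED), which begins dropping packets probabilistically before a queue fills so that the network does not overload. The sketch below shows the characteristic WRED drop curve; the thresholds and maximum probability are illustrative values, not product defaults:

```python
def wred_drop_probability(queue_len, low, high, max_p):
    """WRED drop curve: never drop below the low threshold, always drop
    at or above the high threshold, and ramp linearly in between.
    Higher-priority classes get higher thresholds, so their packets
    are the last to be dropped as congestion builds."""
    if queue_len < low:
        return 0.0
    if queue_len >= high:
        return 1.0
    return max_p * (queue_len - low) / (high - low)

# Midway between the thresholds, the drop probability is half of max_p.
assert wred_drop_probability(10, low=20, high=40, max_p=0.5) == 0.0
assert wred_drop_probability(30, low=20, high=40, max_p=0.5) == 0.25
assert wred_drop_probability(40, low=20, high=40, max_p=0.5) == 1.0
```

Applying different (low, high, max_p) triples per priority class is what makes the mechanism "weighted": low-priority traffic is discarded early while high-priority traffic keeps its buffer space.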

Packets are processed by the components in the sequence outlined in Figure 13-1. Figure 13-2 shows the processing details of the QoS components.

Figure 13-1 Processing sequence of QoS components

Figure 13-2 Processing of QoS components
Updated: 2019-08-09

Document ID: EDOC1000041694
