
Configuration Guide - QoS

CloudEngine 12800 and 12800E V200R003C00

This document describes the configurations of QoS functions, including MQC, priority mapping, traffic policing, traffic shaping, interface-based rate limiting, congestion avoidance, congestion management, packet filtering, redirection, traffic statistics, and ACL-based simplified traffic policy.
Overview of QoS

QoS Background

Diversified services result in a sharp increase in network traffic, which may cause network congestion, longer forwarding delays, or even packet loss. Any of these degrades service quality or interrupts services entirely, so real-time services require a solution that prevents network congestion. Increasing network bandwidth is the most direct remedy, but it is not cost-effective. A more economical approach is to apply a "guarantee" policy to manage traffic during congestion.

Quality of service (QoS) technology provides end-to-end service quality guarantee based on the requirements of different services. It is a tool that helps improve utilization of network resources and allows different types of traffic to preempt network resources based on their priorities. Traffic of voice, video, and other important data applications is processed preferentially on network devices. QoS is now widely used and plays an important role in Internet applications.

QoS Service Models

  • Best-Effort

    Best-Effort is the default service model for the Internet and applies to various network applications, such as the File Transfer Protocol (FTP) and email. It is the simplest service model, in which an application can send any number of packets at any time without notifying the network. The network then makes the best effort to transmit the packets but provides no guarantee of performance in terms of delay and packet loss rate.

    The Best-Effort model is suitable for services that do not have high requirements for delay and packet loss rate.

  • Integrated Services (IntServ)

    In the IntServ model, an application uses a signaling protocol to notify the network of its traffic parameters and apply for a specific level of QoS before sending packets. The network reserves resources for the application based on the traffic parameters. After the application receives an acknowledgement message and confirms that sufficient resources have been reserved, it starts to send packets within the range specified by the traffic parameters. The network maintains a state for each packet flow and performs QoS behaviors based on this state to ensure a guaranteed application performance.

    The IntServ model uses the Resource Reservation Protocol (RSVP) as its signaling protocol. RSVP reserves resources such as bandwidth and priority along a known path, and each network element on the path must reserve the required resources for data flows requiring QoS guarantees. That is, each network element maintains a soft state for each data flow: a temporary state that is periodically refreshed through RSVP messages. Each network element checks whether sufficient resources can be reserved based on these messages, and the path is available only when every network element along it can provide sufficient resources.

  • Differentiated Services (DiffServ)

    The DiffServ model classifies packets on a network into multiple classes and takes different actions for the classes. When network congestion occurs, packets of different classes are processed based on their priorities to obtain different packet loss rates, delays, and jitters. Packets of the same class are aggregated and sent together to ensure the same delay, jitter, and packet loss rate.

    In the DiffServ model, traffic classification and aggregation are completed on edge nodes. Edge nodes classify packets based on a combination of fields in packets, such as the source and destination addresses, precedence in the Type of Service (ToS) field, and protocol type, and then mark packets with different priorities. Other nodes only need to identify the marked priorities for resource allocation and traffic control.

    Unlike the IntServ model, the DiffServ model does not require a signaling protocol. In this model, an application does not need to apply for network resources before sending packets. Instead, the application sets QoS parameters in the packets, through which the network can learn the QoS requirements of the application. The network provides differentiated services based on the QoS parameters of each data flow and does not need to maintain a state for each data flow. DiffServ takes full advantage of IP networks' flexibility and extensibility and transforms information in packets into per-hop behaviors (PHBs), greatly reducing signaling operations. DiffServ is the most commonly used QoS model on current networks. QoS implementation described in the subsequent sections is based on this model.
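
    As a rough illustration of the edge-node classification described above, the following Python sketch extracts the DSCP value from the IPv4 ToS/DS byte and maps it to a service class. The EF and AF code points are standard DiffServ values, but the class-to-queue mapping and function names are illustrative assumptions, not the behavior of any particular switch:

    ```python
    # Sketch: classify a packet by the DSCP carried in its ToS/DS byte.
    # The DSCP-to-class mapping below is an illustrative assumption.

    def dscp_from_tos(tos: int) -> int:
        """The DSCP is the upper 6 bits of the 8-bit ToS/DS field."""
        return tos >> 2

    DSCP_CLASS = {
        46: "EF",    # Expedited Forwarding, e.g. voice
        34: "AF41",  # Assured Forwarding class 4, e.g. video
        0:  "BE",    # Best-Effort default
    }

    def classify(tos: int) -> str:
        """Unrecognized code points fall back to Best-Effort."""
        return DSCP_CLASS.get(dscp_from_tos(tos), "BE")

    print(classify(0xB8))  # ToS 0xB8 -> DSCP 46 -> EF
    ```

    Downstream nodes need only this marked value to select a per-hop behavior, which is what lets DiffServ avoid per-flow state.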

Components in the DiffServ Model

The DiffServ model involves the following QoS mechanisms:

  • Traffic classification and marking

    Traffic classification and marking are prerequisites for differentiated services. Traffic classification divides data packets into different classes or assigns them different priorities, and can be implemented using traffic classifiers configured through the modular QoS command-line interface (MQC). Traffic marking sets different priorities for packets and can be implemented through priority mapping and re-marking.

  • Traffic policing, traffic shaping, and interface-based rate limiting

    Traffic policing and traffic shaping control the traffic rate within a bandwidth limit. Traffic policing drops excess traffic when the traffic rate exceeds the limit, whereas traffic shaping buffers excess traffic and sends it later at an even rate. Traffic policing and traffic shaping can be performed on an interface to implement interface-based rate limiting.

  • Congestion management and congestion avoidance

    Congestion management buffers packets in queues upon network congestion and determines the forwarding order using a specific scheduling algorithm. Congestion avoidance monitors network resource usage and drops packets to mitigate network overloading when congestion worsens.

Traffic classification and marking are the basis of differentiated services. Traffic policing, traffic shaping, interface-based rate limiting, congestion management, and congestion avoidance control network traffic and resource allocation to implement differentiated services.
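
The classifier/behavior/policy structure that MQC uses for traffic classification and marking can be modeled roughly as match rules paired with actions, applied first-match-wins. The following Python sketch is a toy model only; the match condition, DSCP values, and names are illustrative assumptions:

```python
# Toy model of an MQC-style traffic policy: each entry pairs a
# classifier (match predicate) with a behavior (action on the packet).
# All fields and values here are illustrative assumptions.

class Packet:
    def __init__(self, src_ip: str, dscp: int = 0):
        self.src_ip = src_ip
        self.dscp = dscp

policy = [
    # classifier: voice subnet        behavior: re-mark as EF (DSCP 46)
    (lambda p: p.src_ip.startswith("10.1."), lambda p: setattr(p, "dscp", 46)),
    # classifier: everything else     behavior: re-mark as Best-Effort
    (lambda p: True,                         lambda p: setattr(p, "dscp", 0)),
]

def apply_policy(pkt: Packet) -> Packet:
    for match, action in policy:
        if match(pkt):
            action(pkt)
            break  # first matching classifier wins
    return pkt

print(apply_policy(Packet("10.1.2.3")).dscp)  # 46
```

Binding such a policy to an interface direction is what turns classification rules into per-packet marking on ingress.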
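
The contrast between policing (drop excess) and shaping (buffer excess) can be sketched with a single token bucket. The rate, burst size, and function names below are illustrative assumptions, not the device's actual algorithm:

```python
# Sketch: token-bucket rate limiting, used two ways.
# Policing drops non-conforming packets; shaping queues them instead.
from collections import deque

class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst   # tokens/sec, bucket depth
        self.tokens, self.last = burst, 0.0   # start with a full bucket

    def conform(self, size: float, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

def police(bucket: TokenBucket, pkt_size: float, now: float) -> str:
    """Policing: non-conforming packets are dropped."""
    return "forward" if bucket.conform(pkt_size, now) else "drop"

shaper_queue = deque()

def shape(bucket: TokenBucket, pkt_size: float, now: float) -> str:
    """Shaping: non-conforming packets are buffered for later sending."""
    if bucket.conform(pkt_size, now):
        return "forward"
    shaper_queue.append(pkt_size)
    return "buffered"

b = TokenBucket(rate=1000, burst=1500)   # 1000 bytes/s, 1500-byte burst
print(police(b, 1500, 0.0))  # forward (full burst available)
print(police(b, 1500, 0.1))  # drop (only ~100 tokens refilled)
```

The same conformance test drives both mechanisms; only the treatment of excess traffic differs, which is why shaping smooths bursts at the cost of added delay while policing does not.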
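
Congestion management's queue scheduling can be illustrated with weighted round robin (WRR), one common scheduling algorithm: each queue may send up to its weight's worth of packets per round. The queue names, weights, and packet labels below are illustrative assumptions:

```python
# Sketch: weighted round robin (WRR) scheduling across two queues.
# Per round, each queue sends at most `weight` packets.
from collections import deque

queues = {
    "voice": (deque(["v1", "v2"]), 3),        # (buffered packets, weight)
    "data":  (deque(["d1", "d2", "d3"]), 1),
}

def wrr_round() -> list:
    """Run one WRR round and return the packets sent, in order."""
    sent = []
    for name, (q, weight) in queues.items():
        for _ in range(weight):
            if q:
                sent.append(q.popleft())
    return sent

print(wrr_round())  # ['v1', 'v2', 'd1']
```

In the first round the voice queue may send up to three packets but holds only two, so it empties, while the data queue sends one; higher-weight queues thus get a larger share of bandwidth without starving the others.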

Figure 1-1 shows the order in which different QoS mechanisms process packets.
Figure 1-1 QoS processing order

Figure 1-2 shows where the QoS mechanisms are implemented.

Figure 1-2 Location of QoS mechanisms
Updated: 2019-05-05

Document ID: EDOC1100004202
