Feature Description - QoS 01

NE05E and NE08E V300R003C10SPC500

End-to-End QoS Service Models

Network applications require successful end-to-end communication. Traffic may traverse multiple routers on one network or even multiple networks before reaching the destination host. Therefore, to provide an end-to-end QoS guarantee, an overall network deployment is required. Service models are used to provide an end-to-end QoS guarantee based on specific requirements.

QoS provides the following types of service models:

  • Best-Effort
  • Integrated service (IntServ)
  • Differentiated service (DiffServ)

Best-Effort

Best-Effort is the default service model on the Internet and applies to various network applications, such as FTP and email. It is the simplest service model. Without requesting approval from or notifying the network, an application can send any number of packets at any time. The network then makes its best attempt to deliver the packets but provides no guarantee of performance.

The Best-Effort model applies to services that have low requirements for delay and reliability.

IntServ

Before sending packets, an IntServ application uses signaling to request a specific level of service from the network. The application first notifies the network of its traffic parameters and service quality requirements, such as bandwidth and delay. Only after receiving confirmation that sufficient resources have been reserved does the application send its packets, and it must keep its traffic within the range described by the traffic parameters. The network maintains a state for each packet flow and performs QoS behaviors based on this state to fulfill the commitment made to the application.
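The admission decision described above can be sketched as follows. This is an illustrative model only, not Huawei code or the RSVP protocol itself: a flow is admitted only if every hop on the path can reserve the requested bandwidth, and a partial reservation is rolled back if any hop refuses. The `Link` class and capacity figures are hypothetical.

```python
class Link:
    """One hop on the path, with a fixed capacity in kbit/s (hypothetical model)."""
    def __init__(self, capacity_kbps):
        self.capacity_kbps = capacity_kbps
        self.reserved_kbps = 0

    def try_reserve(self, kbps):
        """Reserve bandwidth on this link if enough capacity remains."""
        if self.reserved_kbps + kbps <= self.capacity_kbps:
            self.reserved_kbps += kbps
            return True
        return False

def admit_flow(path, kbps):
    """Admit a flow only if every link on the path can reserve the bandwidth."""
    reserved = []
    for link in path:
        if link.try_reserve(kbps):
            reserved.append(link)
        else:
            # The whole path must succeed, so undo any partial reservations.
            for r in reserved:
                r.reserved_kbps -= kbps
            return False
    return True
```

For example, on a two-hop path with 1000 kbit/s and 500 kbit/s links, a 400 kbit/s flow is admitted, but a second 200 kbit/s flow is rejected because the 500 kbit/s link cannot hold 600 kbit/s.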

IntServ uses the Resource Reservation Protocol (RSVP) as its signaling protocol, which is similar to an Asynchronous Transfer Mode switched virtual circuit (ATM SVC), and adopts connection-oriented transmission. RSVP is a transport layer protocol but does not transmit application data. Like ICMP, RSVP functions as a network control protocol, transmitting resource reservation messages between nodes.

When RSVP is used for end-to-end communication, all routers on the path, including core routers, maintain a soft state for each data flow. A soft state is a temporary state that is periodically refreshed by RSVP messages. Routers check whether sufficient resources can be reserved based on these RSVP messages. The path is available only when all involved routers can provide sufficient resources.
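The soft-state idea can be sketched as a table of per-flow timestamps: state survives only as long as refresh messages keep arriving, and stale entries are dropped automatically. This is a minimal illustration, not the RSVP state machine; the timeout value is hypothetical.

```python
REFRESH_TIMEOUT = 90.0  # hypothetical: seconds without a refresh before state is dropped

class SoftStateTable:
    """Per-flow soft state, kept alive only by periodic refresh messages."""
    def __init__(self):
        self._flows = {}  # flow id -> time of last refresh

    def refresh(self, flow_id, now):
        """Record that an RSVP-style refresh message arrived for a flow."""
        self._flows[flow_id] = now

    def expire(self, now):
        """Drop flows whose state has not been refreshed recently; return them."""
        stale = [f for f, t in self._flows.items() if now - t > REFRESH_TIMEOUT]
        for f in stale:
            del self._flows[f]
        return stale

    def active(self):
        """Return the set of flows that still hold a reservation."""
        return set(self._flows)
```

A flow last refreshed 100 seconds ago would be expired, while one refreshed 40 seconds ago keeps its reservation; no explicit teardown message is needed.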

Figure 4-1 IntServ model

IntServ uses RSVP to apply for resources across the entire network, which requires that every node on the end-to-end path support RSVP. In addition, each node periodically exchanges state information with its neighbors, consuming considerable resources. More importantly, every node on the network must maintain a state for each data flow, and a backbone network carries millions of data flows. Therefore, the IntServ model suits edge networks but is not widely deployed on backbone networks.

DiffServ

DiffServ classifies packets on the network into multiple classes for differentiated processing. When traffic congestion occurs, higher-priority classes are given precedence, so different classes experience different packet loss rates, delays, and jitter. Packets of the same class are aggregated and forwarded as a whole to ensure a consistent delay, jitter, and packet loss rate within the class.

In the DiffServ model, edge routers classify and aggregate traffic. Edge routers classify packets based on a combination of fields, such as the source and destination addresses, the precedence in the ToS field, and the protocol type. Edge routers also re-mark packets with different priorities, which other routers can identify when allocating resources and controlling traffic. Therefore, DiffServ is a class-based QoS model.

Figure 4-2 DiffServ model

Compared with IntServ, DiffServ requires no signaling. In the DiffServ model, an application does not need to apply for network resources before transmitting packets. Instead, the application notifies the network nodes of its QoS requirements by setting QoS parameters in the IP packet header. The network does not maintain a state for each data flow but provides differentiated services based on the QoS parameters of each packet.
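As a concrete example of "setting QoS parameters in the IP packet header": on Linux, an application can ask the kernel to write a DSCP into the ToS/DS field of every outgoing packet on a socket via the standard `IP_TOS` socket option. This is a general illustration of the mechanism, not a configuration step for the product described here.

```python
import socket

EF_DSCP = 46               # Expedited Forwarding code point
tos = EF_DSCP << 2         # DSCP sits in the top 6 bits of the ToS byte

# Every datagram sent on this socket will carry DSCP 46 in its IP header.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
```

Routers along the path can then read this marking and apply the per-hop behavior configured for that class, with no per-flow signaling involved.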

DiffServ takes full advantage of IP networks' flexibility and extensibility by transforming information carried in packets into per-hop behaviors, greatly reducing signaling overhead. Therefore, DiffServ not only suits Internet service provider (ISP) networks but also promotes the deployment of IP QoS on live networks.

Updated: 2019-01-14

Document ID: EDOC1100058936
