CX11x, CX31x, CX710 (Earlier Than V6.03), and CX91x Series Switch Modules V100R001C10 Configuration Guide 12

This document describes the configuration of various services supported by the CX11x&CX31x&CX91x series switch modules, covering configuration examples and function configurations.
DCB Principles

This section describes the implementation of DCB.

Data Center Bridging (DCB), defined by the IEEE 802.1 working group, is a set of extensions to Ethernet for use in data center environments. DCB is used to build lossless Ethernet, meeting QoS requirements on a converged data center network.

DCB protocols include PFC, ETS, and DCBX.

PFC

Background

SAN traffic is sensitive to packet loss on a converged network.

The Ethernet Pause mechanism ensures lossless transmission. When a downstream device detects that its receive capability is lower than the transmit capability of its upstream device, it sends Pause frames to the upstream device, requesting that the upstream device stop sending traffic for a period of time. However, the Ethernet Pause mechanism stops all traffic on a link, whereas FCoE requires link sharing:
  • Burst traffic of one type cannot affect forwarding of traffic of other types.
  • A large amount of traffic of one type in a queue cannot occupy buffer resources of traffic of other types.

Priority-based Flow Control (PFC) addresses the contradiction between the Ethernet Pause mechanism and link sharing.

Principles

PFC is also called Per Priority Pause or Class Based Flow Control (CBFC). It enhances the Ethernet Pause mechanism by applying flow control per priority. As shown in Figure 10-50, the eight priority queues on the transmit interface of DeviceA correspond to eight buffers on the receive interface of DeviceB. When a receive buffer on DeviceB is congested, DeviceB sends a backpressure signal to DeviceA, requesting DeviceA to stop sending packets in the corresponding priority queue.

Figure 10-50 PFC working mechanism
Table 10-10 describes the mappings between packet priorities and interface queues.
Table 10-10 Mappings between packet priorities and interface queues

Packet Type     | Priority | Queue
----------------|----------|------
Unicast         | 0        | 0
                | 1        | 1
                | 2        | 2
                | 3        | 3
                | 4        | 4
                | 5        | 5
                | 6        | 6
                | 7        | 7
Unknown unicast | 0        | 0
                | 1, 2, 3  | 1
                | 4 and 5  | 2
                | 6 and 7  | 6
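The mappings above amount to a simple lookup. The following Python sketch illustrates them; the function name is illustrative, not part of the product:

```python
# Map a packet priority to an interface queue, following Table 10-10.
# Unicast: priority n maps one-to-one to queue n.
# Unknown unicast: priorities are collapsed into queues 0, 1, 2, and 6.
UNKNOWN_UNICAST_QUEUE = {0: 0, 1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 6, 7: 6}

def queue_for(priority: int, unknown_unicast: bool = False) -> int:
    if not 0 <= priority <= 7:
        raise ValueError("priority must be in 0-7")
    if unknown_unicast:
        return UNKNOWN_UNICAST_QUEUE[priority]
    return priority  # unicast: direct mapping

print(queue_for(3))                        # unicast priority 3 -> queue 3
print(queue_for(5, unknown_unicast=True))  # unknown unicast priority 5 -> queue 2
```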

A backpressure signal is an Ethernet frame. Figure 10-51 shows the PFC frame format.
Figure 10-51 PFC frame format
Table 10-11 Fields in a PFC frame

  • Destination address: destination MAC address, which has a fixed value of 01-80-c2-00-00-01.
  • Source address: source MAC address.
  • Ethertype: Ethernet frame type. The value is 88-08.
  • Control opcode: control code. The value is 01-01.
  • Priority enable vector: E(n) corresponds to queue n and determines whether backpressure is enabled for queue n. When E(n) is 1, backpressure is enabled for queue n and the backpressure time is Time(n). When E(n) is 0, backpressure is disabled for queue n.
  • Time(0) to Time(7): backpressure timers. If Time(n) is 0, backpressure is canceled for queue n.
  • Pad (transmit as zero): reserved. The value is 0 during PFC frame transmission.
  • CRC: Cyclic Redundancy Check.
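The frame layout described above can be illustrated by packing a PFC frame in Python. The field values follow Table 10-11; the helper name is illustrative, and the CRC is omitted because it is normally appended by MAC hardware:

```python
import struct

PFC_DST_MAC = bytes.fromhex("0180c2000001")  # fixed destination MAC
PFC_ETHERTYPE = 0x8808                       # MAC control frame Ethertype
PFC_OPCODE = 0x0101                          # PFC control opcode

def build_pfc_frame(src_mac: bytes, pause_times: dict) -> bytes:
    """Build a PFC frame that pauses the given queues.
    pause_times maps queue number (0-7) to Time(n) in pause quanta."""
    vector = 0
    times = [0] * 8
    for queue, quanta in pause_times.items():
        vector |= 1 << queue        # set E(n) for this queue
        times[queue] = quanta       # Time(n): backpressure duration
    payload = struct.pack("!HH8H", PFC_OPCODE, vector, *times)
    pad = bytes(46 - len(payload))  # pad to minimum payload, transmitted as zero
    return PFC_DST_MAC + src_mac + struct.pack("!H", PFC_ETHERTYPE) + payload + pad

# Pause queue 3 (FCoE) for the maximum duration.
frame = build_pfc_frame(bytes.fromhex("aabbccddeeff"), {3: 0xFFFF})
print(frame[:6].hex())   # 0180c2000001
```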

When receiving backpressure signals, a device stops traffic in only one or several priority queues rather than on the entire interface. PFC can pause or restart any queue without interrupting traffic in other queues, which enables traffic of various types to share one FCoE link. The system does not apply the backpressure mechanism to priority queues with PFC disabled and directly discards packets in these queues when congestion occurs.

In an FCoE environment, an administrator can apply PFC to queues of FCoE traffic to ensure lossless transmission of FCoE service.
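The per-queue behavior under congestion can be sketched as a toy model. This is illustrative only, not the switch module's implementation: a PFC-enabled queue signals backpressure instead of dropping, while a PFC-disabled queue discards packets when full.

```python
from collections import deque

class EgressQueue:
    """Toy model of one priority queue's congestion behavior (illustrative)."""
    def __init__(self, pfc_enabled: bool, capacity: int):
        self.pfc_enabled = pfc_enabled
        self.capacity = capacity
        self.paused = False          # True once backpressure is signaled upstream
        self.buffer = deque()
        self.dropped = 0

    def enqueue(self, pkt) -> bool:
        if len(self.buffer) >= self.capacity:
            if self.pfc_enabled:
                self.paused = True   # would send a PFC frame; upstream stops, no loss
            else:
                self.dropped += 1    # PFC disabled: packet discarded on congestion
            return False
        self.buffer.append(pkt)
        return True

fcoe = EgressQueue(pfc_enabled=True, capacity=2)   # e.g. queue 3, FCoE traffic
lan = EgressQueue(pfc_enabled=False, capacity=2)   # best-effort LAN queue
for pkt in range(3):
    fcoe.enqueue(pkt)
    lan.enqueue(pkt)
print(fcoe.paused, fcoe.dropped)  # True 0  -> paused, nothing lost
print(lan.paused, lan.dropped)    # False 1 -> one packet discarded
```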

ETS

Background

A converged data center network carries LAN traffic, SAN traffic, and IPC traffic, and has high QoS requirements. For example, SAN traffic is sensitive to packet loss and relies on in-order delivery. IPC traffic is exchanged between servers and requires low latency. LAN traffic allows packet loss and is delivered on a best-effort (BE) basis. Traditional QoS cannot meet the requirements of the converged network, whereas ETS uses a hierarchical flow control mechanism to implement QoS on the lossless Ethernet.

Principles
ETS provides two-level scheduling: scheduling based on priority groups and scheduling based on queues, as shown in Figure 10-52. An interface first schedules priority groups, and then schedules priority queues.
Figure 10-52 ETS Process

Compared with common QoS, ETS provides scheduling based on priority groups. ETS adds traffic of the same type to a priority group so that traffic of the same type obtains the same CoS.

Scheduling Based on Priority Groups

A priority group is a group of priority queues using the same scheduling mode. You can add queues with different priorities to a priority group. Scheduling based on the priority group is called level-1 scheduling.

ETS defines three priority groups: PG0, PG1, and PG15. PG0, PG1, and PG15 process LAN traffic, SAN traffic, and IPC traffic respectively.

Table 10-12 describes the default attributes of priority groups.
Table 10-12 Scheduling based on priority groups

Priority Group ID | Priority Queue | Scheduling Mode | Bandwidth Usage | Traffic Type
------------------|----------------|-----------------|-----------------|-------------
PG0               | 0, 1, 2, 4, 5  | DRR             | 50%             | LAN flow
PG1               | 3              | DRR             | 50%             | SAN flow
PG15              | 6, 7           | PQ              | -               | IPC flow

As defined by ETS, PG15 uses Priority Queuing (PQ) to schedule delay-sensitive IPC traffic, while PG0 and PG1 use Deficit Round Robin (DRR). Bandwidth can be allocated to priority groups based on the actual networking.

As shown in Table 10-12, the queue with priority 3 carries FCoE traffic, so this queue is added to the SAN group (PG1). Queues with priorities 0, 1, 2, 4, and 5 carry LAN traffic, so these queues are added to the LAN group (PG0). Queues with priorities 6 and 7 carry IPC traffic, so these queues are added to the IPC group (PG15). The total bandwidth of the interface is 10 Gbit/s. PG15 obtains 2 Gbit/s, and PG1 and PG0 each obtain 50% of the remaining bandwidth, that is, 4 Gbit/s.
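The bandwidth split in this example can be reproduced with simple arithmetic. The following sketch assumes, as above, that the PQ group (PG15) is provisioned at 2 Gbit/s and the DRR groups share the remainder by weight; the function name is illustrative:

```python
def ets_bandwidth(total_gbps: float, pq_gbps: float, drr_weights: dict) -> dict:
    """Split interface bandwidth: the PQ group (PG15) is served first;
    the remainder is shared among DRR groups in proportion to their weights."""
    remaining = total_gbps - pq_gbps
    weight_sum = sum(drr_weights.values())
    alloc = {group: remaining * w / weight_sum for group, w in drr_weights.items()}
    alloc["PG15"] = pq_gbps
    return alloc

# 10 Gbit/s interface, PG15 takes 2 Gbit/s, PG0 and PG1 split the rest 50:50.
print(ets_bandwidth(10, 2, {"PG0": 50, "PG1": 50}))
# PG0 and PG1 each get 4 Gbit/s of the remaining 8 Gbit/s; PG15 keeps 2 Gbit/s
```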

Figure 10-53 Congestion management based on priority groups

As shown in Figure 10-53, all traffic can be forwarded at t1 and t2 because the total traffic on the interface is within the interface bandwidth. At t3, the total traffic exceeds the interface bandwidth and LAN traffic exceeds its allocated bandwidth. LAN traffic is then scheduled based on ETS parameters, and 1 Gbit/s of LAN traffic is discarded.

ETS also provides traffic shaping based on priority groups. This traffic shaping mechanism limits traffic bursts in a priority group to ensure that traffic in this group is sent out at an even rate. For details, see Traffic Shaping in CX11x&CX31x&CX91x Series Switch Modules Configuration Guide - QoS Configuration.

Priority-based Scheduling

ETS also provides priority-based queue scheduling, which is level-2 scheduling.

In addition, ETS provides priority-based queue congestion management, queue shaping, and queue congestion avoidance. For details, see CX11x&CX31x&CX91x Series Switch Modules Configuration Guide - QoS Configuration.

DCBX

Background

To implement lossless Ethernet on a converged data center network, both ends of an FCoE link must have the same PFC and ETS parameter settings. If PFC and ETS parameters are manually configured, the administrator's workload is heavy and configuration errors may occur. DCBX, as a link discovery protocol, enables DCB devices at both ends of a link to discover and exchange DCB configurations, reducing workloads of administrators.

Principles
DCBX provides the following functions:
  • Discovers the DCB configuration of the remote device.
  • Detects the DCB configuration errors of the remote device.
  • Configures the remote device if permitted.
DCBX enables DCB devices at both ends to exchange the following DCB configurations:
  • ETS priority group
  • PFC

DCBX encapsulates DCB configurations into Link Layer Discovery Protocol (LLDP) TLVs so that devices at both ends of an FCoE virtual link can exchange DCB configurations. For details about LLDP, see LLDP Configuration in CX11x&CX31x&CX91x Series Switch Modules Configuration Guide - Network Management Configuration.

In Figure 10-54, PFC is used as an example to describe DCBX implementation through LLDP.
Figure 10-54 DCBX implementation through LLDP

As shown in Figure 10-54, LLDP is enabled globally and on PortA and PortB, and PortA is configured to send DCBX TLVs. The implementation is as follows:
  1. Set PFC parameters on PortA and PortB, and enable DCBX. The DCBX module instructs PortA and PortB to encapsulate their PFC parameters into LLDPDUs and send the LLDPDUs to each other.
  2. The LLDP module of PortA sends LLDPDUs with DCBX TLVs to PortB at intervals.
  3. PortB parses the DCBX TLVs in the received LLDPDUs and sends the PFC parameters of PortA to the DCBX module. The DCBX module compares the PFC parameters of PortA with its own PFC parameters. Through negotiation, the PFC parameters on the two ends become consistent, and a configuration file is then generated.
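The negotiation step above can be sketched as follows. This is a toy model assuming the IEEE DCBX "Willing" semantics, where a willing port adopts the remote configuration and an unwilling port keeps its own; the function name is illustrative:

```python
def negotiate_pfc(local: dict, remote: dict, willing: bool) -> dict:
    """Toy model of DCBX PFC negotiation: a 'willing' port adopts the
    remote PFC settings; otherwise it keeps its local settings."""
    return dict(remote) if willing else dict(local)

local = {q: q == 3 for q in range(8)}        # PFC enabled only on queue 3 (FCoE)
remote = {q: q in (3, 4) for q in range(8)}  # peer also enables PFC on queue 4

agreed = negotiate_pfc(local, remote, willing=True)
print(agreed[4])   # True: the willing port adopted the remote configuration
```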
DCBX TLV

As shown in Figure 10-55, the DCB configuration is encapsulated into specified TLVs. The Type field has a fixed value of 127, and the OUI field varies depending on the protocol type. The OUI field of the IEEE DCBX is 0x0080c2, and the OUI field of the INTEL DCBX is 0x001b21.

Figure 10-55 DCBX TLV format
DCBX TLVs include the ETS Configuration TLV, ETS Recommendation TLV, PFC Configuration TLV, and App TLV. Table 10-13 describes the IEEE DCBX TLVs.
Table 10-13 IEEE DCBX TLVs

  • ETS Configuration TLV (subtype 09, length 25): local ETS configuration.
    • Priority group configuration: priority group ID and bandwidth usage of a priority group
    • Priority queue configuration: priority queue ID and its priority group ID
  • ETS Recommendation TLV (subtype 0A, length 25): recommended ETS configuration, used for ETS configuration negotiation between both ends of an FCoE virtual link.
    • Priority group configuration: priority group ID and bandwidth usage of a priority group
    • Priority queue configuration: priority queue ID and its priority group ID
  • PFC Configuration TLV (subtype 0B, length 6): local PFC configuration.
    • Priority queue ID
    • Whether PFC is applied to a queue
  • App TLV (subtype 0C, variable length): carried only when PFC is configured to work in auto mode for interconnection between products and between NICs.
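The encapsulation described above (LLDP type 127, OUI, subtype) can be illustrated by packing a PFC Configuration TLV in Python. The TLV header and OUI follow the text; the flags-byte layout (Willing bit plus PFC capability) follows IEEE 802.1Qaz conventions, and the helper names are illustrative:

```python
import struct

def org_tlv(oui: bytes, subtype: int, body: bytes) -> bytes:
    """Pack an LLDP organizationally specific TLV (type 127):
    2-byte header (7-bit type, 9-bit length), then OUI, subtype, body."""
    info = oui + bytes([subtype]) + body
    header = (127 << 9) | len(info)
    return struct.pack("!H", header) + info

def pfc_config_tlv(willing: bool, cap: int, enabled_queues: list) -> bytes:
    """IEEE DCBX PFC Configuration TLV (OUI 00-80-c2, subtype 0B, length 6)."""
    flags = (0x80 if willing else 0) | (cap & 0x0F)  # Willing bit + PFC capability
    enable = 0
    for q in enabled_queues:
        enable |= 1 << q                              # per-queue PFC enable bitmap
    return org_tlv(bytes.fromhex("0080c2"), 0x0B, bytes([flags, enable]))

# PFC enabled on queue 3 only, 8 traffic classes supported, willing to negotiate.
tlv = pfc_config_tlv(willing=True, cap=8, enabled_queues=[3])
print(tlv.hex())
```

The information-string length here is 6 bytes (3-byte OUI, 1-byte subtype, 2-byte body), matching the PFC Configuration TLV length in Table 10-13.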

Similarly, Table 10-14 and Table 10-15 describe the INTEL DCBX TLVs.
Table 10-14 INTEL DCBX v1.00 TLVs

  • DCBX Control Sub-TLV (subtype 01, length 10): information about DCBX packets.
  • Priority Group Sub-TLV (subtype 02, length 28): recommended ETS configuration, used for ETS configuration negotiation between both ends of an FCoE virtual link.
    • Bandwidth usage of a priority group
    • Priority group ID
  • Priority Flow Control Sub-TLV (subtype 03, length 5): local PFC configuration.
    • Priority queue ID
    • Whether PFC is applied to a queue
  • App Sub-TLV (subtype 05, variable length): carried only when PFC is configured to work in auto mode for interconnection between products and between NICs.

Table 10-15 INTEL DCBX v1.01 TLVs

  • DCBX Control Sub-TLV (subtype 01, length 10): information about DCBX packets.
  • Priority Group Sub-TLV (subtype 02, length 17): recommended ETS configuration, used for ETS configuration negotiation between both ends of an FCoE virtual link.
    • Priority group configuration: priority group ID and bandwidth usage of a priority group
    • Priority queue configuration: priority queue ID and its priority group ID
  • Priority Flow Control Sub-TLV (subtype 03, length 6): local PFC configuration.
    • Priority queue ID
    • Whether PFC is applied to a queue
  • App Sub-TLV (subtype 04, variable length): carried only when PFC is configured to work in auto mode for interconnection between products and between NICs.

Updated: 2019-08-09

Document ID: EDOC1000041694