Configuration Guide - DCN and Server Management

CloudEngine 8800, 7800, 6800, and 5800 V200R003C00

This document describes the configurations of TRILL, FCoE, DCB, and NLB Server Cluster Association.

TRILL Mechanism

On a TRILL network, RBs must complete the following steps to communicate with each other:
  1. Establishing TRILL Neighbor Relationships
  2. Synchronizing LSDBs
  3. Calculating Routes

Establishing TRILL Neighbor Relationships

TRILL devices send Hello packets (TRILL Hello PDUs) to establish neighbor relationships. The Hello packets sent on broadcast and P2P links differ because the port types differ; however, the process of establishing a neighbor relationship is similar on both link types. Figure 1-9 illustrates the process of establishing a neighbor relationship between two RBs.
Figure 1-9 Process of establishing a TRILL neighbor relationship

The process of establishing a neighbor relationship between two RBs on a TRILL network is as follows:
  1. RB1 sends a TRILL Hello packet. After receiving the packet, RB2 detects that the neighbor field does not contain the local MAC address and sets the status of neighbor RB1 to Detect.
  2. RB2 replies with a TRILL Hello packet. After receiving the TRILL Hello packet, RB1 detects that the neighbor field contains the local MAC address and sets the status of neighbor RB2 to Report.
  3. RB1 sends another TRILL Hello packet to RB2. After detecting that the neighbor field contains the local MAC address, RB2 sets the status of neighbor RB1 to Report. The neighbor relationship has been set up between the two RBs.
  4. The two RBs periodically send Hello packets to each other to maintain the neighbor relationship. If an RB sends three consecutive Hello packets but receives no response, it considers the neighbor relationship invalid.
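
The handshake above can be summarized as a small state machine: a neighbor is set to Detect when its Hello does not yet list the local MAC address, and to Report once it does. The following Python sketch models only this logic; the names (NeighborState, process_hello) and the MAC values are illustrative and are not part of any product API.

```python
from enum import Enum

class NeighborState(Enum):
    DOWN = "Down"
    DETECT = "Detect"   # peer's Hello received, but it does not list our MAC yet
    REPORT = "Report"   # peer's Hello lists our MAC: the adjacency is two-way

def process_hello(local_mac: str, hello_neighbor_macs: set) -> NeighborState:
    """Return the state to record for the neighbor that sent this Hello."""
    if local_mac in hello_neighbor_macs:
        # The peer has already heard us, so the relationship is confirmed.
        return NeighborState.REPORT
    # The peer exists but has not yet listed us in its neighbor field.
    return NeighborState.DETECT

# RB2 receives RB1's first Hello: RB2's own MAC is not listed yet.
print(process_hello("00e0-fc00-0002", set()))                 # NeighborState.DETECT
# RB2 receives RB1's next Hello, which now lists RB2's MAC.
print(process_hello("00e0-fc00-0002", {"00e0-fc00-0002"}))    # NeighborState.REPORT
```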
To improve the convergence rate and communication efficiency on broadcast networks, TRILL introduces the following mechanisms:
  • Electing a DRB

    On a broadcast network, every two RBs need to exchange information. If there are n RBs on the network, n x (n-1)/2 adjacencies need to be established. Whenever the status of any RB changes, the same information must be transmitted many times, wasting bandwidth. To address this problem, TRILL defines a DRB. All the RBs send information only to the DRB, which then broadcasts the link status to the network. DRB election begins after the TRILL neighbor state changes to Detect.

    A DRB is elected according to the following rules (a sketch of the DRB and AF tie-breaking comparisons follows this list of mechanisms):
    1. The RB whose interface has a higher DRB priority is elected as the DRB.
    2. If the interfaces on the two ends have the same DRB priority, the RB with a larger MAC address is elected as the DRB.
  • Specifying an AF

    When unknown unicast or multicast traffic passes through a TRILL network, a loop may occur because traffic is broadcast in a VLAN. As shown in Figure 1-10, multicast traffic from user A is forwarded to the TRILL network by a Layer 2 switch. If RB1 and RB3 belong to the same VLAN, the multicast traffic is forwarded to both RBs, and therefore a loop occurs. An AF can be specified to solve this problem. The DRB elects an AF for each CE VLAN. Only AFs can function as ingress and egress RBs; non-AFs can only be transit RBs. If RB1 in Figure 1-10 is specified as the AF, user traffic is forwarded by RB1 and does not pass through RB3, so no loop occurs.
    Figure 1-10 Networking for AF selection
    AFs are specified by the DRB. The DRB checks the CE VLANs enabled on the two ingress RBs on the TRILL network. The RB with the same VLAN ID as the user traffic is specified as the AF. If multiple RBs have the same VLAN ID as the user traffic, the AF is elected according to the following rules:
    1. The RB with the highest DRB priority is elected as the AF.
    2. If DRB priorities are the same, the RB with the largest MAC address is elected as the AF.
    3. If the MAC addresses are the same, the RB with the largest port ID is elected as the AF.
    4. If the port IDs are the same, the RB with the largest system ID is elected as the AF.
    NOTE:
    • An RB can be specified as an AF only when its connected TRILL ports are access or hybrid ports.
    • On a broadcast network, if two RBs have the same nickname, neither of them can be the AF.
    • If the DRB is changed, all AF information is deleted and a new AF is elected.
  • Specifying a DVLAN

    When multiple carrier VLANs exist on a TRILL network, a DVLAN must be specified on the network interfaces of RBs to forward traffic. When sending TRILL protocol packets or forwarding TRILL data packets, an RB sets the VLAN ID in the Ethernet frame header to the DVLAN of the transmission link. A DVLAN can be manually configured or specified by the DRB.
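
The DRB and AF elections above both reduce to choosing the candidate with the largest ordered key. The Python sketch below expresses the tie-breaking rules as sort keys; the Candidate fields and the example values are illustrative only and do not reflect an actual product data model.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    drb_priority: int   # rule 1: higher priority wins
    mac: str            # rule 2: larger MAC address wins
    port_id: int        # AF rule 3: larger port ID wins
    system_id: str      # AF rule 4: larger system ID wins

def mac_value(mac: str) -> int:
    """Interpret a MAC address such as '00e0-fc00-0001' as an integer."""
    return int(mac.replace("-", "").replace(":", ""), 16)

def elect_drb(candidates):
    # DRB: interface DRB priority, then MAC address.
    return max(candidates, key=lambda c: (c.drb_priority, mac_value(c.mac)))

def elect_af(candidates):
    # AF: DRB priority, then MAC address, then port ID, then system ID.
    return max(candidates, key=lambda c: (c.drb_priority, mac_value(c.mac),
                                          c.port_id, c.system_id))

rb1 = Candidate(64, "00e0-fc00-0001", 1, "0000.0000.0001")
rb3 = Candidate(64, "00e0-fc00-0003", 2, "0000.0000.0003")
print(elect_drb([rb1, rb3]).mac)   # 00e0-fc00-0003: equal priority, larger MAC wins
```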

Synchronizing LSDBs

After a DRB is elected, the LSDBs maintained by all RBs on the network are synchronized. An LSDB is the basis for generating a forwarding table. Therefore, LSDB synchronization is essential to correct data traffic forwarding on the network. The LSDB synchronization process varies depending on the network type.

  • Figure 1-11 shows the LSDB update process on a broadcast link.

    Figure 1-11 LSDB update on a broadcast link

    1. A newly added switch RB3 sends Hello packets to establish neighbor relationships with the other switches in the broadcast domain.
    2. After neighbor relationships are set up, RB3 sends its own LSP to the multicast address 01-80-C2-00-00-41. All neighbors on the network receive the LSP.
    3. The DRB in the network segment adds the LSP received from RB3 to its LSDB. After the CSNP timer expires, the DRB sends CSNPs to synchronize the LSDBs on the network. By default, the CSNP timer is 10 seconds.
    4. After receiving a CSNP from the DRB, RB3 checks its LSDB and sends a PSNP to request the LSPs it does not have.
    5. After receiving the PSNP, the DRB sends the requested LSPs to RB3, and RB3 synchronizes its LSDB with these LSPs. When the DRB receives an LSP during LSDB update, it performs the following operations (a comparison sketch follows these procedures):
      1. The DRB receives the LSP and searches for the matching record in the LSDB. If no matching record exists, the DRB adds the LSP to the LSDB and broadcasts the new LSDB.
      2. If the sequence number of the received LSP is greater than the sequence number of the corresponding LSP in the LSDB, the DRB replaces the local LSP with the received LSP, and broadcasts the new LSDB.
      3. If the sequence number of the received LSP is smaller than the sequence number of the corresponding LSP in the LSDB, the DRB sends the local LSP to the inbound interface.
      4. If the sequence number of the received LSP is the same as the sequence number of the corresponding LSP in the LSDB, the DRB compares the remaining lifetime of the two LSPs. If the remaining lifetime of the received LSP is smaller than the remaining lifetime of the corresponding LSP in the LSDB, the DRB replaces the local LSP with the received LSP and broadcasts the new LSDB. If the remaining lifetime of the received LSP is larger than the remaining lifetime of the LSP in the LSDB, the DRB sends the local LSP to the inbound interface.
      5. If the sequence number and the remaining lifetime of the received LSP are the same as those of the corresponding LSP in the LSDB, the DRB compares the checksum values of the two LSPs. If the checksum of the received LSP is larger than the checksum of the LSP in the LSDB, the DRB replaces the local LSP with the received LSP and broadcasts the new LSDB. If the checksum of the received LSP is smaller than the checksum of the LSP in the LSDB, the DRB sends the local LSP to the inbound interface.
      6. If the sequence number, remaining lifetime, and checksum of the received LSP are the same as those of the corresponding LSP in the LSDB, the DRB discards the LSP.
  • Figure 1-12 shows the LSDB update process on a P2P link.

    Figure 1-12 LSDB update on a P2P link

    1. After a P2P neighbor relationship is set up, RB1 and RB2 exchange CSNPs to synchronize their LSDBs. In the following example, RB1 sends a CSNP to RB2. If the LSDB on RB2 does not contain all the LSPs summarized in the CSNP, RB2 sends a PSNP to request the missing LSPs.
    2. RB1 sends the requested LSP to the neighbor. Meanwhile, it starts the LSP retransmission timer and waits for a PSNP from the neighbor as an acknowledgement of LSP reception. If RB1 does not receive the PSNP from the neighbor before the LSP retransmission timer expires, it resends the LSP.
    3. After receiving an LSP from the neighbor, RB1 performs the following operations:
      1. If the sequence number of the received LSP is greater than the sequence number of the corresponding LSP in the LSDB, RB1 adds the LSP to its LSDB, and then sends a PSNP to acknowledge the received LSP. After that, RB1 sends the LSP to all its neighbors except the neighbor that sent the LSP.
      2. If the sequence number of the received LSP is smaller than the sequence number of the corresponding LSP in the LSDB, RB1 directly sends its LSP to the neighbor and waits for a PSNP from the neighbor.
      3. If the sequence number of the received LSP is the same as the sequence number of the corresponding LSP in the LSDB, RB1 compares the remaining lifetime of the two LSPs. If the remaining lifetime of the received LSP is smaller than the remaining lifetime of the LSP in the LSDB, RB1 replaces the local LSP with the received LSP, sends a PSNP, and sends the LSP to all neighbors except the neighbor that sent the LSP. If the remaining lifetime of the received LSP is larger than the remaining lifetime of the LSP in the LSDB, RB1 sends the local LSP to the neighbor and waits for a PSNP.
      4. If the sequence number and the remaining lifetime of the received LSP are the same as those of the corresponding LSP in the LSDB, RB1 compares the checksums of the two LSPs. If the checksum of the received LSP is larger than the checksum of the LSP in the LSDB, RB1 replaces the local LSP with the received LSP, sends a PSNP, and sends the LSP to all neighbors except the neighbor that sent the LSP. If the checksum of the received LSP is smaller than the checksum of the LSP in the LSDB, RB1 sends the local LSP to the neighbor and waits for a PSNP.
      5. If the sequence number, remaining lifetime, and checksum of the received LSP are the same as those of the corresponding LSP in the LSDB, RB1 discards the LSP.
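
On both broadcast and P2P links, the decision to accept a received LSP, return the local copy, or discard the packet reduces to comparing the two copies by sequence number, then remaining lifetime, then checksum. The sketch below captures only that decision; the tuple layout and the returned action names are illustrative stand-ins for the operations described above (replace and flood, send the local LSP back, discard).

```python
def compare_lsp(received, local):
    """
    Each LSP copy is a (seq_num, remaining_life, checksum) tuple.
    Returns which copy prevails, following the rules in the text:
      'received' -> replace the local copy (then flood it or acknowledge it)
      'local'    -> send the local copy back towards the sender
      'same'     -> discard the received copy
    """
    r_seq, r_life, r_cksum = received
    l_seq, l_life, l_cksum = local

    if r_seq != l_seq:                      # rule: higher sequence number wins
        return "received" if r_seq > l_seq else "local"
    if r_life != l_life:                    # rule: smaller remaining lifetime wins
        return "received" if r_life < l_life else "local"
    if r_cksum != l_cksum:                  # rule: larger checksum wins
        return "received" if r_cksum > l_cksum else "local"
    return "same"                           # identical copies are discarded

print(compare_lsp((13, 1200, 0x8F3A), (12, 900, 0x77AA)))    # received: higher sequence number
print(compare_lsp((12, 1200, 0x8F3A), (12, 1200, 0x8F3A)))   # same: discard
```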

Calculating Routes

When the LSDBs maintained by all the RBs on a TRILL network are synchronized (that is, the network has converged), each RB uses the shortest path first (SPF) algorithm to calculate the unicast and multicast forwarding tables based on the LSDB. The calculation process is as follows (an SPF sketch follows the note below):
  • Generating a unicast routing table: Each RB uses itself as the root to generate the shortest paths to other nodes. Based on neighbor information, the RB obtains the outbound interface and next hop to each neighboring node, and generates a nickname unicast forwarding table according to the nickname information advertised by the neighbors.

  • Generating a multicast routing table: To facilitate multicast traffic transmission, more than one multicast distribution tree is generated on a TRILL network. The generation process is as follows:
    1. Electing a root RB: Based on the root priorities of the nicknames advertised by all the devices on the entire network and the number of distribution trees supported, each device obtains the nickname with the highest root priority and the smallest number of distribution trees. The RB whose nickname has the highest root priority is elected as the root RB. If RBs have the same root priority, the RB with a larger system ID is elected as the root RB.
    2. Electing a distribution tree root: The root RB can specify roots of multicast distribution trees. If no root is specified, N RBs with the highest nickname root priorities are selected as the roots.
    3. Calculating a shortest path tree (SPT): N roots are used as source nodes to calculate the shortest path tree to all the other nodes on the entire network.
    4. Generating a reverse path forwarding (RPF) check table: The RPF check table is created based on the spanning tree information advertised by each ingress RB. The RPF check table is used to prevent loops.
    5. Pruning the SPT: The SPT is pruned based on information advertised by each ingress RB.

NOTE:

Other nodes must have reachable unicast routes to the node with the highest nickname root priority. Therefore, unicast route calculation must be completed before multicast route calculation.
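
The unicast calculation described above is a shortest path first (Dijkstra) computation rooted at the local RB, whose result is then mapped to an outbound interface and next hop per destination nickname. The sketch below shows a generic SPF over a link-cost map and records the first hop from the root, which is what a nickname unicast forwarding entry ultimately needs; the topology and costs are invented for illustration and are not taken from the product.

```python
import heapq
import itertools

def spf(graph, root):
    """
    Dijkstra over a cost map {node: {neighbor: cost}}.
    Returns {destination: (total_cost, first_hop_from_root)}.
    """
    counter = itertools.count()                 # tie-breaker so the heap never compares hop names
    heap = [(0, next(counter), root, root)]     # (cost, tie, node, first hop from the root)
    result = {}
    while heap:
        cost, _, node, first_hop = heapq.heappop(heap)
        if node in result:
            continue                            # already settled with an equal or shorter path
        result[node] = (cost, first_hop)
        for neighbor, link_cost in graph.get(node, {}).items():
            if neighbor not in result:
                # When leaving the root, the first hop is the neighbor itself.
                hop = neighbor if node == root else first_hop
                heapq.heappush(heap, (cost + link_cost, next(counter), neighbor, hop))
    return result

# Illustrative topology: RB1 reaches RB3 directly (cost 20) or via RB2 (cost 10 + 10).
graph = {"RB1": {"RB2": 10, "RB3": 20},
         "RB2": {"RB1": 10, "RB3": 10},
         "RB3": {"RB1": 20, "RB2": 10}}
print(spf(graph, "RB1"))
# {'RB1': (0, 'RB1'), 'RB2': (10, 'RB2'), 'RB3': (20, 'RB3')} -- the equal-cost paths to RB3 tie;
# a real implementation would keep both for load balancing.
```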
