CX11x, CX31x, CX710 (Earlier Than V6.03), and CX91x Series Switch Modules V100R001C10 Configuration Guide 12

These documents describe the configuration of various services supported by the CX11x&CX31x&CX91x series switch modules, covering configuration examples and function configurations.
Principles

This section describes the principles of TRILL.

Basic Concepts

This section introduces basic concepts of Transparent Interconnection of Lots of Links (TRILL). Figure 10-1 shows the basic roles in typical TRILL networking.
Figure 10-1 Large-scale Layer 2 TRILL networking

Devices in TRILL Networking

RB

A routing bridge (RB) is a Layer 2 switch running TRILL. RBs are classified as ingress, transit, or egress RBs according to their locations on a TRILL network. An ingress RB is the node through which packets enter the TRILL network, a transit RB is an intermediate node through which packets pass, and an egress RB is the node through which packets leave the TRILL network.

DRB

A designated routing bridge (DRB) is an RB that functions as a transit device and performs special tasks on TRILL networks. On a TRILL broadcast network, when two RBs in the same virtual local area network (VLAN) establish a neighbor relationship, the RB whose interface has the higher DRB priority (or, if the priorities are equal, the larger MAC address) is elected as the DRB. The DRB communicates with each device on the network to synchronize all the link state databases (LSDBs) in the VLAN, so that devices do not need to synchronize LSDBs pairwise. DRBs perform the following tasks:
  • Generate pseudonode link state protocol data units (LSPs) when more than two RBs exist on the network.
  • Send complete sequence number protocol data units (CSNPs) to synchronize LSDBs.
  • Select a carrier VLAN as the designated VLAN (DVLAN), which transmits user packets and TRILL control packets.
  • Select the appointed forwarder (AF). Only one RB can function as the AF for a customer edge (CE) VLAN.

AF

An AF is an RB elected by the DRB to forward user traffic; non-AF RBs cannot forward user traffic. As shown in Figure 10-1, loops may occur if a server is dual-homed to a TRILL network but its two network adapters do not work in load balancing mode. Therefore, a single RB must be elected to forward user traffic.

VLANs on a TRILL Network
Table 10-2 VLANs on a TRILL network

CE VLAN

Description: A CE VLAN connects to the TRILL network and is usually configured on the edge devices of a TRILL network to generate multicast routes.

Packets supported: native Ethernet packets

Carrier VLAN

Description: A carrier VLAN transmits TRILL control packets and TRILL data packets. A maximum of three carrier VLANs can be configured on an RB. In the inbound direction, native Ethernet packets are encapsulated into TRILL packets in carrier VLANs. In the outbound direction, TRILL packets are decapsulated and restored to native Ethernet packets.

Packets supported: TRILL control packets and data packets

Designated VLAN

Description: To combine or separate TRILL networks, multiple carrier VLANs can be configured on a TRILL network. However, only one carrier VLAN is selected to forward TRILL control and data packets. The selected VLAN is called the designated VLAN.

Packets supported: TRILL control packets and data packets

Nickname

Each RB on a TRILL network has a unique nickname. The nickname is similar to an IP address in terms of function.

A nickname has one priority and one root priority.
  • When a nickname conflict occurs on a TRILL network, the priority determines which RB's nickname is to be advertised to other RBs.
    1. The RB with the highest priority advertises its nickname.
    2. If the RBs with the same nickname have the same priority, the RB with the largest system ID advertises its nickname.
  • An RB uses its root priority to compete to become the root of a multicast tree. The RBs with the highest and second-highest root priorities are selected as the roots of two multicast trees.
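The nickname-conflict tie-breaking above (priority first, then system ID) can be sketched as follows. This is an illustrative model, not a real TRILL implementation; the dictionary field names are assumptions.

```python
# Sketch of nickname-conflict resolution: higher priority wins;
# on a priority tie, the larger system ID wins. Field names are illustrative.

def nickname_conflict_winner(rb_a, rb_b):
    """Return the RB that keeps (advertises) the contested nickname."""
    return max(rb_a, rb_b, key=lambda rb: (rb["priority"], rb["system_id"]))

rb1 = {"name": "RB1", "priority": 192, "system_id": 0x00E0FC123456}
rb2 = {"name": "RB2", "priority": 128, "system_id": 0x00E0FC654321}

winner = nickname_conflict_winner(rb1, rb2)
# RB1 keeps the nickname because its priority (192) is higher.
```

With equal priorities, the comparison falls through to the system ID, so the RB with the larger system ID would advertise the nickname instead.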
Interface Roles
Interfaces of switches on TRILL networks are classified into the following types:
  • Trunk interfaces: connect switches and transmit only TRILL data packets and protocol packets.
  • Access interfaces: transmit only native Ethernet packets and protocol packets.
  • Hybrid interfaces: transmit both TRILL data and protocol packets and native Ethernet packets by default.
  • P2P ports: On a P2P network, the ports between two RBs are P2P ports. P2P ports are special trunk ports, and switches connected using P2P ports do not participate in DRB election.

By default, TRILL interfaces are P2P interfaces.

NET
Similar to IS-IS, TRILL uses network entity titles (NETs) to identify network layer information about switches. A NET includes the following elements:
  • Area ID: An area ID identifies an area. An IS-IS network has multiple areas, while a TRILL network has only one area. The TRILL area ID is 00.

  • System ID: identifies a host or switch and has a fixed length of 48 bits.

    In actual applications, a system ID can be automatically generated or manually configured. You can specify the system ID (unique on the entire network) when using the network-entity (TRILL) command to configure a NET. If this command is not configured, the system generates a system ID that is the same as the bridge MAC address of the RB.

  • SEL (also referred to as NSAP Selector or N-SEL): The role of a SEL is similar to that of the protocol identifier of IP. Each transport protocol has one unique SEL. The SEL of TRILL is 00.
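Putting the three elements together, a NET can be assembled from the bridge MAC address. The sketch below assumes the common dotted notation (area, then the 48-bit system ID as three 4-hex-digit groups, then the SEL); the helper name is hypothetical.

```python
# Illustrative sketch: building a TRILL NET string from a bridge MAC address.
# Layout assumption: area ID (00) . system ID in three 4-hex-digit groups . SEL (00).

def net_from_mac(mac: str) -> str:
    hexdigits = mac.replace(":", "").replace("-", "").lower()
    if len(hexdigits) != 12:
        raise ValueError("expected a 48-bit MAC address")
    # Split the 12 hex digits into three dotted groups of four.
    system_id = ".".join(hexdigits[i:i + 4] for i in range(0, 12, 4))
    return f"00.{system_id}.00"   # TRILL area ID is 00; TRILL SEL is 00

print(net_from_mac("00:E0:FC:12:34:56"))  # -> 00.00e0.fc12.3456.00
```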

TRILL Packet Formats

TRILL packets include TRILL control packets and TRILL data packets.

TRILL Control Packets

TRILL switches exchange control packets to communicate with each other. This section describes the major TRILL control packets used on a TRILL network. TRILL uses IS-IS as its control plane protocol and carries control information in extended IS-IS PDUs.

TRILL PDUs use the same format as IS-IS PDUs, except that TRILL PDUs use extended IS-IS TLVs. For details on fields in a TRILL PDU, see "IS-IS Configuration" in the CX11x&CX31x&CX91x Series switch modules Configuration Guide - IP Routing.

TRILL PDU Format
All PDUs used by TRILL can be classified into three types: Hello, LSP, and SNP. The first eight bytes are fixed in all TRILL PDUs, as shown in Figure 10-2.
Figure 10-2 TRILL PDU structure
The PDU fields are described as follows:
  • Intradomain Routing Protocol Discriminator: identifies a network-layer PDU.
  • Length Indicator: indicates the length of the fixed header.
  • ID Length: indicates the length of the intra-domain system ID.
  • PDU Type: indicates the PDU type.
  • Maximum Area Addresses: indicates the maximum number of area addresses allowed by TRILL. Currently, the TRILL area address can only be 00.
  • PDU Exclusive: PDU-specific fields that vary depending on the PDU type; they are described in the following PDU formats.
  • TLV: indicates the type/length/value, which varies depending on the PDU type.
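The eight fixed bytes described above can be decoded with a simple unpack. The byte layout below follows the standard IS-IS common header (which TRILL reuses); treat this as an illustration of the listed fields, not a complete PDU parser.

```python
import struct

# Sketch: decode the 8-byte fixed header shared by all TRILL (IS-IS) PDUs.
# Byte layout per the standard IS-IS common header.

def parse_fixed_header(pdu: bytes) -> dict:
    (irpd, length_indicator, _version_ext, id_length,
     pdu_type, _version, _reserved, max_areas) = struct.unpack("!8B", pdu[:8])
    return {
        "intradomain_routing_protocol_discriminator": irpd,  # 0x83 for IS-IS
        "length_indicator": length_indicator,  # length of the fixed header
        "id_length": id_length,                # 0 means the default 6 bytes
        "pdu_type": pdu_type & 0x1F,           # low 5 bits carry the type
        "maximum_area_addresses": max_areas,   # for TRILL, one area (00)
    }

# Example bytes for a Level-1 LAN Hello (PDU type 15).
hdr = parse_fixed_header(bytes([0x83, 27, 1, 0, 15, 1, 0, 1]))
```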
TRILL Hello PDU Format

Hello PDUs are used to establish and maintain neighbor relationships. LAN Hello PDUs are used on broadcast networks, and P2P Hello PDUs are used on non-broadcast networks. The two types of Hello PDUs have different formats.

Figure 10-3 shows the LAN Hello PDU format.

Figure 10-3 LAN Hello PDU format

Figure 10-4 shows the P2P Hello PDU format.

Figure 10-4 P2P Hello PDU format

As shown in Figure 10-4, most fields in a P2P Hello PDU are the same as those in a LAN Hello PDU. The P2P Hello PDU does not have the priority and LAN ID fields, but has a local circuit ID field indicating the local link ID.

TRILL LSP PDU Format

Link state PDUs (LSPs) are used to exchange link-state information. Figure 10-5 shows the LSP PDU format.

Figure 10-5 TRILL LSP PDU format

TRILL SNP PDU Format
Sequence number PDUs (SNPs) describe all or some of the LSPs in an LSDB and are used to synchronize and maintain LSDBs. SNPs are classified into the following types:
  • Complete SNP (CSNP): carries a summary of all LSPs in an LSDB, ensuring LSDB synchronization between neighboring switches. On a broadcast network, the DRB periodically sends CSNPs; the default interval is 10 seconds. On a point-to-point link, CSNPs are sent only when a neighbor relationship is established for the first time.

    Figure 10-6 shows the CSNP format.
    Figure 10-6 TRILL CSNP format

  • Partial SNP (PSNP): lists only the sequence numbers of recently received LSPs. A PSNP can acknowledge multiple LSPs at a time. If a device finds that its LSDB is not up to date, it sends a PSNP to request the missing LSPs from a neighbor.

    Figure 10-7 shows the PSNP format.
    Figure 10-7 TRILL PSNP format

TRILL Data Packets
Figure 10-8 shows the TRILL data packet format.
Figure 10-8 TRILL data packet format
A TRILL data packet is generated by adding a TRILL header and an outer Ethernet header to the original Ethernet packet. The fields in a TRILL header are described as follows:
  • Ethertype: fixed as TRILL.
  • V: version number, which is 0 currently. Each RB must check the version number when receiving a TRILL packet. If the version is incorrect, the RB discards the packet.
  • R: reserved for extension. This field is set to 0 on an ingress RB and ignored on transit and egress RBs.
  • M: multi-destination attribute. The value 0 indicates known unicast packets and the value 1 indicates multicast, broadcast, and unknown unicast packets.
  • Op-Length: length of the Options field. The value 0 indicates that the Options field is unavailable.
  • Hop-Count: used to prevent loops. Each RB decrements the hop count; when the Hop-Count field of a TRILL packet reaches 0, the RB discards the packet.
  • Egress RB Nickname: In a unicast packet, the field indicates the nickname of the egress RB. In a multicast packet, the field indicates the nickname of the multicast tree root used for forwarding.
  • Ingress RB Nickname: nickname of the ingress RB.
  • Options: This field is available only when the value of Op-Length is not 0.
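The TRILL header fields above pack into six bytes after the Ethertype: V (2 bits), R (2 bits), M (1 bit), Op-Length (5 bits), Hop-Count (6 bits), then the two 16-bit nicknames. The sketch below shows one plausible encode/decode under that layout; it is illustrative, not production code.

```python
import struct

TRILL_ETHERTYPE = 0x22F3  # Ethertype value assigned to TRILL

# Sketch: pack/unpack the 6-byte TRILL header that follows the Ethertype.
# Bit layout: V(2) R(2) M(1) Op-Length(5) Hop-Count(6), then two nicknames.

def pack_trill_header(m, hop_count, egress_nick, ingress_nick,
                      version=0, op_length=0):
    flags = (version << 14) | (m << 11) | (op_length << 6) | hop_count
    return struct.pack("!HHH", flags, egress_nick, ingress_nick)

def unpack_trill_header(data):
    flags, egress, ingress = struct.unpack("!HHH", data[:6])
    return {
        "version": flags >> 14,           # must be 0, else the RB drops it
        "multi_destination": (flags >> 11) & 0x1,
        "op_length": (flags >> 6) & 0x1F,
        "hop_count": flags & 0x3F,        # packet dropped when this hits 0
        "egress_nickname": egress,
        "ingress_nickname": ingress,
    }

hdr = unpack_trill_header(
    pack_trill_header(m=0, hop_count=20, egress_nick=0x1234, ingress_nick=0x5678))
```

A transit RB would decrement `hop_count` and rewrite the header before forwarding; a known unicast packet keeps `multi_destination` at 0.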

For details on how TRILL data packets are forwarded on a TRILL network, see TRILL Forwarding Process.

TRILL Mechanism

On a TRILL network, RBs must complete the following steps to communicate with each other:
  1. Establishing TRILL Neighbor Relationships
  2. Synchronizing LSDBs
  3. Calculating Routes
Establishing TRILL Neighbor Relationships
TRILL devices send Hello packets (TRILL Hello PDUs) to establish neighbor relationships. Because of different port types, the Hello packets sent on broadcast and P2P links are different; however, the processes of establishing a neighbor relationship over these links are similar. Figure 10-9 illustrates the process of establishing a neighbor relationship between two RBs.
Figure 10-9 Process of establishing a TRILL neighbor relationship

The process of establishing a neighbor relationship between two RBs on a TRILL network is as follows:
  1. RB1 sends a TRILL Hello packet. After receiving the packet, RB2 detects that the neighbor field does not contain its local MAC address and sets the status of neighbor RB1 to Detect.
  2. RB2 replies with a TRILL Hello packet. After receiving the packet, RB1 detects that the neighbor field contains its local MAC address and sets the status of neighbor RB2 to Report.
  3. RB1 sends another TRILL Hello packet to RB2. After detecting that the neighbor field contains its local MAC address, RB2 sets the status of neighbor RB1 to Report. The neighbor relationship between the two RBs is now established.
  4. The two RBs periodically send Hello packets to each other to maintain the neighbor relationship. If an RB fails to receive a response from the other after sending three Hello packets consecutively, the RB considers the neighbor relationship invalid.
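The handshake above can be sketched as a tiny state machine: a neighbor moves to Detect on one-way contact and to Report once its own MAC address appears in the peer's Hello. The class and method names are illustrative.

```python
# Minimal sketch of the three-way TRILL Hello handshake. The state names
# (Down / Detect / Report) follow the text; the structure is an assumption.

class TrillNeighbor:
    def __init__(self, local_mac):
        self.local_mac = local_mac
        self.state = "Down"

    def receive_hello(self, neighbor_macs):
        if self.local_mac in neighbor_macs:
            # The peer has listed us as a neighbor: two-way connectivity.
            self.state = "Report"
        elif self.state == "Down":
            # Peer seen, but it has not seen us yet: one-way connectivity.
            self.state = "Detect"

nbr = TrillNeighbor(local_mac="00e0-fc12-3456")
nbr.receive_hello(neighbor_macs=[])                   # -> Detect
nbr.receive_hello(neighbor_macs=["00e0-fc12-3456"])   # -> Report
```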
To improve the convergence rate and communication efficiency on broadcast networks, TRILL introduces the following mechanisms:
  • Electing a DRB

    On a broadcast network, every pair of RBs needs to exchange information. If there are n RBs on the network, n x (n-1)/2 adjacencies need to be established. Whenever the status of any RB changes, information must be transmitted many times, wasting bandwidth. To address this problem, TRILL defines a DRB. All RBs send information only to the DRB, which then broadcasts the link status to the network. DRB election begins after the TRILL neighbor state changes to Detect.

    A DRB is elected according to the following rules:
    1. The RB whose interface has a higher DRB priority is elected as the DRB.
    2. If the interfaces on the two ends have the same DRB priority, the RB with a larger MAC address is elected as the DRB.
  • Specifying an AF

    When unknown unicast or multicast traffic crosses a TRILL network, a loop may occur because traffic is broadcast within a VLAN. As shown in Figure 10-10, multicast traffic from user A is forwarded to the TRILL network by a Layer 2 switch. If RB1 and RB3 belong to the same VLAN, the multicast traffic is forwarded to both RBs, and a loop occurs. An AF can be specified to solve this problem. The DRB elects an AF for each CE VLAN. Only AFs can function as ingress and egress RBs; non-AFs can only be transit RBs. If RB1 in Figure 10-10 is specified as the AF, user traffic is forwarded by RB1 and does not pass through RB3, so no loop occurs.
    Figure 10-10 Networking for AF selection
    AFs are specified by the DRB. The DRB checks the CE VLANs enabled on the ingress RBs of the TRILL network. An RB whose CE VLAN matches the VLAN ID of the user traffic is specified as the AF. If multiple RBs match the VLAN ID of the user traffic, the AF is elected according to the following rules:
    1. The RB with the highest DRB priority is elected as the AF.
    2. If DRB priorities are the same, the RB with the largest MAC address is elected as the AF.
    3. If the MAC addresses are the same, the RB with the largest port ID is elected as the AF.
    4. If the port IDs are the same, the RB with the largest system ID is elected as the AF.
    NOTE:
    • An RB can be specified as an AF only when its connected TRILL ports are access or hybrid ports.
    • On a broadcast network, if two RBs have the same nickname, neither of them can be the AF.
    • If the DRB is changed, all AF information is deleted and a new AF is elected.
  • Specifying a DVLAN

    When multiple carrier VLANs exist on a TRILL network, a DVLAN must be specified on the network-side interfaces of RBs to forward traffic. When sending TRILL protocol packets or forwarding TRILL data packets, an RB sets the VLAN ID in the Ethernet frame header to the DVLAN of the transmission link. A DVLAN can be manually configured or specified by a DRB.
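The DRB and AF election orders described above reduce to ordered tuple comparisons. The sketch below models them directly; the dictionary field names are hypothetical.

```python
# Illustrative sketch of the DRB and AF tie-breaking rules described above.
# Fields are compared in the listed order, "largest wins" at each step.

def elect_drb(rbs):
    # Higher interface DRB priority wins; tie -> larger MAC address.
    return max(rbs, key=lambda rb: (rb["drb_priority"], rb["mac"]))

def elect_af(candidates):
    # DRB priority, then MAC address, then port ID, then system ID.
    return max(candidates, key=lambda rb: (rb["drb_priority"], rb["mac"],
                                           rb["port_id"], rb["system_id"]))

rb1 = {"name": "RB1", "drb_priority": 64, "mac": 0x00E0FC000001,
       "port_id": 10, "system_id": 1}
rb3 = {"name": "RB3", "drb_priority": 64, "mac": 0x00E0FC000003,
       "port_id": 11, "system_id": 3}
# Equal DRB priorities, so the larger MAC address (RB3) wins the election.
```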

Synchronizing LSDBs

After a DRB is elected, the LSDBs maintained by all RBs on the network are synchronized. An LSDB is the basis for generating a forwarding table. Therefore, LSDB synchronization is essential to correct data traffic forwarding on the network. The LSDB synchronization process varies depending on the network type.

  • Figure 10-11 shows the LSDB update process on a broadcast link.

    Figure 10-11 LSDB update on a broadcast link

    1. A newly added switch RB3 sends Hello packets to establish neighbor relationships with the other switches in the broadcast domain.
    2. After neighbor relationships are set up, RB3 sends its own LSP to the multicast address 01-80-C2-00-00-41. All neighbors on the network receive the LSP.
    3. The DRB in the network segment adds the LSP received from RB3 to its LSDB. After the CSNP timer expires, the DRB sends CSNPs to synchronize the LSDBs on the network. By default, the CSNP timer is 10 seconds.
    4. After receiving a CSNP from the DRB, RB3 checks its LSDB and sends a PSNP to request the LSPs it does not have.
    5. After receiving the PSNP, the DRB sends the requested LSPs to RB3. RB3 then synchronizes its LSDB with the LSPs. During LSDB update, the DRB performs the following operations:
      1. The DRB receives the LSP and searches for the matching record in the LSDB. If no matching record exists, the DRB adds the LSP to the LSDB and broadcasts the new LSDB.
      2. If the sequence number of the received LSP is greater than the sequence number of the corresponding LSP in the LSDB, the DRB replaces the local LSP with the received LSP, and broadcasts the new LSDB.
      3. If the sequence number of the received LSP is smaller than the sequence number of the corresponding LSP in the LSDB, the DRB sends the local LSP to the inbound interface.
      4. If the sequence number of the received LSP is the same as the sequence number of the corresponding LSP in the LSDB, the DRB compares the remaining lifetime of the two LSPs. If the remaining lifetime of the received LSP is smaller than the remaining lifetime of the corresponding LSP in the LSDB, the DRB replaces the local LSP with the received LSP and broadcasts the new LSDB. If the remaining lifetime of the received LSP is larger than the remaining lifetime of the LSP in the LSDB, the DRB sends the local LSP to the inbound interface.
      5. If the sequence number and the remaining lifetime of the received LSP are the same as those of the corresponding LSP in the LSDB, the DRB compares the checksum values of the two LSPs. If the checksum of the received LSP is larger than the checksum of the LSP in the LSDB, the DRB replaces the local LSP with the received LSP and broadcasts the new LSDB. If the checksum of the received LSP is smaller than the checksum of the LSP in the LSDB, the DRB sends the local LSP to the inbound interface.
      6. If the sequence number, remaining lifetime, and checksum of the received LSP are the same as those of the corresponding LSP in the LSDB, the DRB discards the LSP.
  • Figure 10-12 shows the LSDB update process on a P2P link.

    Figure 10-12 LSDB update on a P2P link

    1. After a P2P neighbor relationship is set up, RB1 and RB2 exchange CSNPs to synchronize their LSDBs. In the following example, RB1 sends a CSNP to RB2. If the LSDB on RB2 is not synchronized with the CSNP, RB2 sends a PSNP to request the corresponding LSPs.
    2. RB1 sends the required LSP to the neighbor. Meanwhile, it starts the LSP retransmission timer and waits for a PSNP from the neighbor as an acknowledgement of LSP reception. If RB1 does not receive the PSNP from the neighbor when the LSP retransmission timer expires, it resends the LSP.
    3. After receiving the PSNP from the neighbor, RB1 performs the following operations:
      1. If the sequence number of the received LSP is greater than the sequence number of the corresponding LSP in the LSDB, RB1 adds the LSP to its LSDB and sends a PSNP to acknowledge the received LSP. After that, RB1 sends the LSP to all its neighbors except the neighbor that sent the LSP.
      2. If the sequence number of the received LSP is smaller than the sequence number of the corresponding LSP in the LSDB, RB1 directly sends its LSP to the neighbor and waits for a PSNP from the neighbor.
      3. If the sequence numbers are the same, RB1 compares the remaining lifetimes of the two LSPs. If the remaining lifetime of the received LSP is smaller, RB1 replaces the local LSP with the received LSP, sends a PSNP, and sends the LSP to all neighbors except the neighbor that sent the LSP. If the remaining lifetime of the received LSP is larger, RB1 sends the local LSP to the neighbor and waits for a PSNP.
      4. If the sequence numbers and remaining lifetimes are the same, RB1 compares the checksums of the two LSPs. If the checksum of the received LSP is larger, RB1 replaces the local LSP with the received LSP, sends a PSNP, and sends the LSP to all neighbors except the neighbor that sent the LSP. If the checksum of the received LSP is smaller, RB1 sends the local LSP to the neighbor and waits for a PSNP.
      5. If the sequence number, remaining lifetime, and checksum of the received LSP are the same as those of the corresponding LSP in the LSDB, RB1 discards the LSP.
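Both the broadcast and P2P procedures above apply the same three-stage comparison (sequence number, then remaining lifetime, then checksum) to decide what an RB does with a received LSP. A compact sketch, with illustrative action names:

```python
# Sketch of the LSP comparison rules shared by the broadcast and P2P
# LSDB-update procedures. Returns the action the receiving RB takes.

def lsp_update_action(received, local):
    """Each LSP is a dict with 'seq', 'lifetime', and 'checksum' keys."""
    if local is None:
        return "add-and-flood"          # no matching record in the LSDB
    if received["seq"] != local["seq"]:
        return ("replace-and-flood" if received["seq"] > local["seq"]
                else "send-local")       # reply with the newer local copy
    if received["lifetime"] != local["lifetime"]:
        # Same sequence number: the copy closer to expiry is treated as newer.
        return ("replace-and-flood" if received["lifetime"] < local["lifetime"]
                else "send-local")
    if received["checksum"] != local["checksum"]:
        return ("replace-and-flood" if received["checksum"] > local["checksum"]
                else "send-local")
    return "discard"                     # identical LSPs
```

On a broadcast link "flood" means the DRB broadcasts the new LSDB; on a P2P link it means sending the LSP to all neighbors except the one that sent it, plus a PSNP acknowledgement.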
Calculating Routes
When the LSDBs maintained by all the RBs on a TRILL network are synchronized (that is, convergence is implemented), each RB uses the shortest path first (SPF) algorithm to calculate the unicast and multicast forwarding tables based on the LSDB. The calculation process is as follows:
  • Generating a unicast routing table: Each RB uses itself as the root to generate the shortest paths to other nodes. Based on neighbor information, the RB obtains the outbound interface and next hop to each neighboring node, and generates a nickname unicast forwarding table according to the nickname information advertised by the neighbors.

  • Generating a multicast routing table: To facilitate multicast traffic transmission, more than one multicast distribution tree is generated on a TRILL network. The generation process is as follows:
    1. Electing a root RB: Based on the root priorities of the nicknames advertised by all the devices on the entire network and the number of distribution trees supported, each device obtains the nickname with the highest root priority and the smallest number of distribution trees. The RB whose nickname has the highest root priority is elected as the root RB.
    2. Electing a distribution tree root: The root RB can specify roots of multicast distribution trees. If no root is specified, N RBs with the highest nickname root priorities are selected as the roots.
    3. Calculating a shortest path tree (SPT): N roots are used as source nodes to calculate the shortest path tree to all the other nodes on the entire network.
    4. Generating a reverse path forwarding (RPF) check table: The RPF check table is created based on the spanning tree information advertised by each ingress RB. The RPF check table is used to prevent loops.
    5. Pruning the SPT: The SPT is pruned based on information advertised by each ingress RB.

NOTE:

Other nodes must have reachable unicast routes to the node with the highest nickname root priority. Therefore, unicast route calculation must be completed before multicast route calculation.
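Step 2 of the multicast-tree generation above (selecting the N highest-root-priority nicknames as tree roots when no root is specified) can be sketched as a simple ranking. This is a simplified model: real implementations apply further tie-breakers that are not described here.

```python
# Hedged sketch of distribution-tree root selection: with N trees and no
# manually specified roots, the N nicknames with the highest root
# priorities become the tree roots. Field names are illustrative.

def select_tree_roots(nicknames, n_trees):
    """nicknames: list of dicts with 'nickname' and 'root_priority' keys."""
    ranked = sorted(nicknames, key=lambda nk: nk["root_priority"], reverse=True)
    return [nk["nickname"] for nk in ranked[:n_trees]]

roots = select_tree_roots(
    [{"nickname": 0x1111, "root_priority": 64},
     {"nickname": 0x2222, "root_priority": 255},
     {"nickname": 0x3333, "root_priority": 128}],
    n_trees=2)
# The two highest root priorities (255 and 128) win.
```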

TRILL Forwarding Process

On a TRILL network, RBs send Hello packets to each other to establish neighbor relationships and exchange LSPs to synchronize LSDBs. After that, all RBs on the network have the same LSDB. Based on the LSDB, each RB then uses the SPF algorithm to calculate the shortest paths, outbound interfaces, and next hops to all other RBs on the network. The RB uses the nickname information advertised in the LSDB to generate a nickname forwarding table.

When user packets reach a TRILL network, they are forwarded in different processes based on the destination MAC addresses:
Process of Forwarding Known Unicast Traffic
Figure 10-13 illustrates how the known unicast traffic sent from server A to server C is forwarded.
Figure 10-13 Process of forwarding known unicast traffic
  1. The ingress RB (RB1) receives a Layer 2 packet from server A, and searches the Layer 2 forwarding table for the egress RB nickname matching the destination MAC address of the packet. After finding the egress RB nickname, RB1 looks up the unicast forwarding table to find the outbound interface L5 and next hop RB5 to the destination RB. RB1 then encapsulates the Layer 2 packet into a TRILL data packet and forwards the packet to the next hop through the outbound interface.
  2. When transit RB (RB5) receives the TRILL data packet, it obtains the egress RB nickname from the TRILL header and searches the unicast forwarding table for the egress RB nickname. Finding that the destination RB is RB6, RB5 forwards the TRILL data packet to RB6 through outbound interface L6.
  3. The egress RB (RB6) receives the TRILL data packet, and finds that the egress RB nickname in the TRILL header is its own nickname. Then RB6 decapsulates the TRILL packet to obtain the original Layer 2 data packet, and forwards the Layer 2 data packet through the matching outbound interface according to the destination MAC address of the packet.
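Step 1 above boils down to two table lookups on the ingress RB: destination MAC to egress nickname, then nickname to outbound interface and next hop. The sketch below mirrors Figure 10-13; the table contents and helper name are illustrative.

```python
# Sketch of the ingress RB's two lookups for known unicast traffic,
# mirroring Figure 10-13 (RB/interface names are from the text).

mac_table = {"mac-C": "nick-RB6"}             # learned MAC -> egress RB nickname
nickname_table = {"nick-RB6": ("L5", "RB5")}  # nickname -> (out-if, next hop)

def ingress_forward(dst_mac):
    egress_nick = mac_table[dst_mac]          # Layer 2 forwarding table lookup
    out_if, next_hop = nickname_table[egress_nick]  # unicast forwarding table
    # The RB would now add a TRILL header (ingress/egress nicknames) and an
    # outer Ethernet header, then send the frame out of out_if toward next_hop.
    return egress_nick, out_if, next_hop
```

A transit RB repeats only the second lookup, keyed on the egress nickname in the TRILL header, until the packet reaches the egress RB.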
Process of Forwarding Multicast, Broadcast, and Unknown Unicast Traffic
On a TRILL network, an RB calculates a distribution tree for each VLAN based on the LSDB to guide the forwarding of multicast, broadcast, and unknown unicast packets. Multicast packets are used as an example to describe the forwarding process, as shown in Figure 10-14.
Figure 10-14 Process of forwarding multicast traffic
  1. The ingress RB (RB1) receives a Layer 2 packet from server A, and finds that the destination MAC address is a multicast MAC address. RB1 selects the multicast distribution tree for the VLAN to which the packet belongs and encapsulates the Layer 2 packet into a TRILL data packet. In the TRILL header, the M bit is set to 1, identifying a multicast packet. RB1 then looks up the multicast forwarding table according to the root RB nickname to find the outbound interface list, and forwards the TRILL packet through those interfaces.
  2. The transit RB (RB4) receives the TRILL data packet, and checks the TRILL header. As the M bit in the TRILL header is 1, RB4 looks up the multicast forwarding table according to the egress RB nickname in the TRILL header, and forwards the multicast packet to the outbound interfaces in the matching forwarding entry.
  3. The root RB (RB3) receives the TRILL data packet, and distributes the packet to all the outbound interfaces.
  4. The egress RB (RB6) decapsulates the TRILL data packet to obtain the original Layer 2 data packet and forwards the packet through the matching outbound interface.

TRILL Multi-Homing Active-Active Access

Background

On a Transparent Interconnection of Lots of Links (TRILL) network, access devices (such as switches and servers) are often dual-homed to TRILL devices to enhance reliability. When one TRILL device fails, services are not interrupted.

In this scenario, if you associate the VLAN appointed forwarder (AF) or MSTP with TRILL to eliminate loops, servers must connect to the TRILL network through Layer 2 access switches. This access mode also requires link redundancy backup, wasting bandwidth. You can use TRILL multi-homing active-active access to enable servers with dual NICs to be directly dual-homed to a TRILL network and forward traffic simultaneously. This access mode ensures reliability and fully utilizes network bandwidth.

Figure 10-15 TRILL multi-homing active-active access networking

As shown in Figure 10-15, CE2 is dual-homed to a TRILL network. On the access side, E-Trunk ensures device-level and link-level reliability. The two routing bridges (RBs) use the same pseudo nickname and work as one logical device to access the TRILL network. You can deploy dynamic fabric service (DFS) on the RBs to associate E-Trunk with TRILL and ensure correct service packet forwarding.

Concepts

The following describes concepts of TRILL multi-homing active-active access.

  • Pseudo nickname

    A pseudo nickname replaces an RB's actual nickname in encapsulated TRILL packets. A pseudo nickname can be generated in either of the following ways:
    • Manual configuration: You can configure the same or different pseudo nicknames for two access devices. If different pseudo nicknames are configured, the lower-priority pseudo nickname is suppressed and the higher-priority one takes effect.
    • Automatic generation: Two access devices can negotiate a pseudo nickname.
  • Peer-link

    There must be a direct link between two devices where E-Trunk is deployed and the link must be a peer-link. A peer-link is a protection link.

    After an interface is configured as a peer-link interface, no services can be configured on the interface.

  • DFS group

    DFS is a platform that associates E-Trunk and TRILL to ensure correct service packet forwarding.

  • Link Aggregation Control Protocol (LACP) E-Trunk system priority

    LACP E-Trunk system priority is the LACP system priority of an E-Trunk member (Eth-Trunk).

    This priority indicates the priority of a device on either end of an Eth-Trunk in static or dynamic LACP mode. The device with a higher priority functions as the LACP actor, and the other device selects active interfaces according to the interface priorities of the LACP actor. The two devices then select the same active interfaces.

    NOTE:
    • LACP E-Trunk system priority applies to an E-Trunk that consists of Eth-Trunks in static or dynamic LACP mode.
    • LACP system priority applies to an Eth-Trunk in static or dynamic LACP mode.
    • LACP E-Trunk system priority and LACP system priority are configurable. If both priorities are configured, only the LACP E-Trunk system priority takes effect after an Eth-Trunk in static or dynamic LACP mode is added to an E-Trunk.
  • LACP E-Trunk system ID

    LACP E-Trunk system ID is the LACP system ID of an E-Trunk member (an Eth-Trunk interface).

    This system ID determines which device has a higher priority when two devices have the same LACP E-Trunk system priority. The smaller the system ID value, the higher the device priority.

    NOTE:
    • LACP E-Trunk system ID applies to an E-Trunk that consists of Eth-Trunks in static or dynamic LACP mode.
    • LACP system ID applies to an Eth-Trunk in static or dynamic LACP mode.
    • LACP E-Trunk system ID is configurable, but LACP system ID is fixed (as the Ethernet interface MAC address of the switch module).

    To make an E-Trunk on two RBs take effect, the two RBs must have the same LACP E-Trunk system priority and system ID.

  • E-Trunk priority

    The E-Trunk priority determines whether a device is the master or backup device in an E-Trunk application. The smaller the priority value, the higher the device priority.
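The E-Trunk master/backup rule above (smaller priority value wins; on a tie, smaller system ID wins, per the master/backup description later in this section) can be sketched as a single comparison. Field names are illustrative.

```python
# Sketch of E-Trunk master selection: the device with the smaller priority
# value has the higher priority; on a tie, the smaller system ID wins.

def etrunk_master(dev_a, dev_b):
    return min(dev_a, dev_b, key=lambda d: (d["etrunk_priority"], d["system_id"]))

rb1 = {"name": "RB1", "etrunk_priority": 10, "system_id": 0x00E0FC000001}
rb2 = {"name": "RB2", "etrunk_priority": 20, "system_id": 0x00E0FC000002}
# RB1 becomes the master because its priority value (10) is smaller.
```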

Working Mechanism of Access-Side E-Trunk

In Figure 10-16, devices support the addition of Eth-Trunks in static LACP mode, dynamic LACP mode, or manual load balancing mode to an E-Trunk.

Figure 10-16 E-Trunk
  • Eth-Trunk and E-Trunk deployment

    • RB side

      Create E-Trunks with the same ID (E-Trunk 1) on RB1 and RB2 and add Eth-Trunks to them.

    • CE side

      If Eth-Trunks in LACP mode on RBs are added to the E-Trunks, configure an Eth-Trunk in static or dynamic LACP mode on the switch and add the physical interfaces connecting the switch to the RBs to the Eth-Trunk to realize link-level reliability.

      If Eth-Trunks in manual load balancing mode on RBs are added to the E-Trunks, configure an Eth-Trunk in manual load balancing mode on the switch, add the physical interfaces connecting the switch to the RBs to the Eth-Trunk, and configure Ethernet Operation, Administration, and Maintenance (OAM) on the switch and RBs to realize link-level reliability.

      The E-Trunks are invisible to the switch.

  • E-Trunk packet transmission

    The conditions for sending E-Trunk packets are as follows:
    • The configuration changes. For example, the E-Trunk priority changes, Eth-Trunks join or leave E-Trunks, or the E-Trunk local or remote nickname changes.

    • An Eth-Trunk fails or recovers from a fault.

  • Master/backup status of an E-Trunk

    The master/backup status of an E-Trunk is determined by the E-Trunk priority and system ID contained in a TRILL packet.

    An E-Trunk with a smaller priority value has a higher priority and is in master state. If two E-Trunks have the same priority, the one with a smaller system ID is in master state.

  • Master/backup status and link status of an Eth-Trunk

    Table 10-3 describes how the master/backup status and link status of an Eth-Trunk are determined when the network is running normally or network faults occur.

    Table 10-3 Master/backup status and link status of an Eth-Trunk

    Networking Diagram

    Master/backup status and link status of an Eth-Trunk

    Figure 10-17 Fault-free networking

    In a fault-free TRILL dual-homing, active-active access scenario, the Eth-Trunk link status is not affected by the E-Trunk master/backup status: both Eth-Trunks are in Up state regardless of their master/backup roles, and RB1 and RB2 work in load balancing mode.

    A peer-link can be considered a broadcast domain in which all traffic is broadcast. The peer-link is unidirectionally isolated from the E-Trunk member interfaces and the TRILL network side to ensure a loop-free network.

    Figure 10-18 Networking with a peer-link fault
    When the peer-link is faulty, the E-Trunk master/backup status determines its member Eth-Trunk master/backup status.
    • If the E-Trunk status is master, its member Eth-Trunk status is also master and link status is Up.

    • If the E-Trunk status is backup, its member Eth-Trunk status is also backup and link status is Down. The TRILL dual-homing access scenario becomes a TRILL single-homing access scenario.

    Figure 10-19 Networking with a faulty node of which the E-Trunk status is master
    When the device on which an E-Trunk is deployed becomes faulty:
    • If the device is the master, the backup device becomes the master, member Eth-Trunk status is master, and link status is Up.

    • If the device is the backup, the E-Trunk master/backup status remains unchanged, its member Eth-Trunk status is backup, and link status is Down.

    According to the E-Trunk mechanism, the backup device becomes the master and can forward traffic. A TRILL dual-homing access scenario becomes a single-homing access scenario.

    Figure 10-20 Networking with an Eth-Trunk fault
    When the Eth-Trunk connecting to the TRILL network becomes faulty:
    • The E-Trunk master/backup status remains unchanged.
    • The master/backup status and link status of the Eth-Trunk in the faulty E-Trunk become backup and Down, respectively.
    • Traffic is switched to another Eth-Trunk for forwarding.

    According to the E-Trunk mechanism, the faulty Eth-Trunk does not forward traffic. A TRILL dual-homing access scenario becomes a single-homing access scenario.
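The fault scenarios in Table 10-3 can be summarized as a lookup from (scenario, local E-Trunk role) to the resulting member Eth-Trunk status. The scenario keys and tuple layout below are an illustrative model of the table, not a real data structure from the product.

```python
# Member Eth-Trunk status per fault scenario, summarizing Table 10-3.
ETH_TRUNK_STATUS = {
    # (scenario, local E-Trunk role) -> (Eth-Trunk status, link status)
    ("fault-free",      "master"): ("master", "Up"),
    ("fault-free",      "backup"): ("backup", "Up"),    # both Up; load balancing
    ("peer-link-fault", "master"): ("master", "Up"),
    ("peer-link-fault", "backup"): ("backup", "Down"),  # single-homing fallback
    ("peer-node-fault", "backup"): ("master", "Up"),    # backup takes over
    ("peer-node-fault", "master"): ("master", "Up"),    # backup peer failed; unchanged
    ("local-eth-trunk-fault", "master"): ("backup", "Down"),
    ("local-eth-trunk-fault", "backup"): ("backup", "Down"),
}
```

In every faulty scenario exactly one Up Eth-Trunk remains, which is why each fault turns the dual-homing access into single-homing access.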

Working Mechanism of Network-Side TRILL

In a TRILL multi-homing active-active access solution, the two RBs obtain the same pseudo nickname through manual configuration or automatic negotiation. RB1 and RB2 encapsulate their actual nicknames into packets received on non-active-active interfaces and encapsulate the pseudo nickname into packets received on active-active interfaces.

In addition, RB1 and RB2 check whether their pseudo nicknames are the same. If not, the E-Trunk on the two RBs does not take effect. If they are the same, the two RBs exchange their actual nicknames and MAC addresses. When the peer-link or one RB becomes faulty, TRILL promptly notifies the other RB of the fault so that the active-active mechanism is taken out of service.
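The nickname behavior described above can be sketched as two small functions: one deciding which nickname an RB encapsulates, and one checking whether the E-Trunk takes effect. The dict layout and port-type strings are illustrative assumptions for this sketch.

```python
def encap_nickname(rb, ingress_port_type):
    """Return the nickname an RB encapsulates into outgoing TRILL packets.

    Actual nickname for traffic entering on non-active-active interfaces;
    the shared pseudo nickname for traffic entering on active-active
    (E-Trunk member) interfaces.
    """
    if ingress_port_type == "active-active":
        return rb["pseudo_nickname"]
    return rb["nickname"]

def e_trunk_effective(rb1, rb2):
    """The E-Trunk takes effect only if the two pseudo nicknames match."""
    return rb1["pseudo_nickname"] == rb2["pseudo_nickname"]
```

Because both RBs advertise the same pseudo nickname, the rest of the TRILL network sees the active-active pair as a single virtual RB, which is what enables load balancing toward the dual-homed CE.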

Figure 10-21 TRILL multi-homing active-active access networking

As shown in Figure 10-21, a peer-link is deployed between RB1 and RB2, and the two RBs have the same pseudo nickname. CE-2 together with RB1 and RB2 can implement a TRILL multi-homing active-active access solution. TRILL processes traffic of different types and from different directions differently. Table 10-4 describes how TRILL processes these traffic types.

Table 10-4 Traffic processing in a TRILL multi-homing active-active access scenario

Traffic Type

Traffic Model

Traffic Processing

Unicast traffic from a non-active-active interface, for example, CE-1

Figure 10-22 Unicast traffic from a non-active-active interface

RB1 forwards the traffic in a unicast manner.

Unicast traffic from an active-active interface

Figure 10-23 Unicast traffic from an active-active interface

RB1 and RB2 work in load balancing mode to forward the traffic together.

NOTE:

The arrows of different colors in the figure represent different data flows.

Multicast traffic from a non-active-active interface, for example, CE-1

Figure 10-24 Multicast traffic from a non-active-active interface

RB1 encapsulates the actual nickname into the received multicast traffic and then forwards the traffic to each next-hop device. When the traffic arrives at RB2, RB2 forwards the traffic only to CE-3 but not to CE-2 or the TRILL network side according to the unidirectional isolation mechanism.

Multicast traffic from an active-active interface

Figure 10-25 Multicast traffic from an active-active interface

Multicast traffic from CE-2 is load balanced between RB1 and RB2. The following uses the forwarding process on RB1 as an example.

RB1 encapsulates the pseudo nickname into the received multicast traffic and then forwards the traffic to each next-hop device. When the traffic arrives at RB2, RB2 forwards the traffic only to CE-3 but not to CE-2 or the TRILL network side according to the unidirectional isolation mechanism.

Unicast traffic from the TRILL network side

Figure 10-26 Unicast traffic from the TRILL network side

If the unicast traffic is sent to an active-active interface, traffic is load balanced between RB1 and RB2 and then forwarded to the device that is dual-homed to the two RBs because the traffic is encapsulated with a pseudo nickname.

The following uses the traffic sent to CE-1 as an example. Because the traffic is encapsulated with the actual nickname, the traffic is directly sent to RB1 without being load balanced and then from RB1 to CE-1.

Multicast traffic from the TRILL network side

Figure 10-27 Multicast traffic from the TRILL network side

According to the TRILL multicast forwarding principles, each RB joins a different multicast tree. Therefore, only one RB (RB1 or RB2) processes the multicast traffic.

The following uses RB1 as an example. RB1 decapsulates the traffic and then forwards the traffic to each user-side interface. Because the peer-link is isolated from the backup interface, traffic arriving at RB2 is not forwarded to CE-2, avoiding routing loops.
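The unidirectional isolation mechanism referenced throughout Table 10-4 can be sketched as an egress-port filter: frames received over the peer-link are replicated only to single-homed user-side ports, never back to active-active (E-Trunk member) ports or the TRILL network side. The port names and type strings below are an illustrative model of that rule.

```python
def egress_ports(ingress, ports):
    """Select egress ports for multicast traffic arriving at an RB.

    `ingress` is the port the frame arrived on; `ports` maps each port
    name to its type: 'peer-link', 'active-active' (E-Trunk member),
    'single-homed' (e.g. the link to CE-3), or 'trill' (network side).
    """
    if ports.get(ingress) == "peer-link":
        # Unidirectional isolation: only single-homed ports are allowed.
        allowed = {"single-homed"}
    else:
        allowed = {"peer-link", "active-active", "single-homed", "trill"}
    return sorted(p for p, t in ports.items()
                  if p != ingress and t in allowed)
```

This is why, in Figure 10-24 through Figure 10-27, traffic crossing the peer-link reaches CE-3 but is never forwarded to CE-2 or back into the TRILL network, avoiding loops and duplicate delivery.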

TRILL NSR

During network operation, if an active/standby switchover is triggered due to a system failure or performed manually by the network administrator for software update or system maintenance, routes are interrupted and therefore traffic is lost. TRILL uses non-stop routing (NSR) to solve this problem.

NSR applies to devices with a backup control plane. This technology ensures that the control plane on a neighbor does not detect faults on the system control plane of the local device, preventing service interruption caused by an active/standby switchover.

NSR Implementation

TRILL NSR implements real-time data synchronization between the active and standby control planes as follows:

  • TRILL backs up configuration data and dynamic data (information about interfaces, neighbors, and LSDBs).

  • TRILL uses the RawLink socket to send and receive packets and does not back up the socket status.

  • TRILL does not back up routes or shortest path trees (SPTs), because they can be recalculated from the backed-up source data after a switchover.

  • After an active/standby switchover occurs, the new master device restores the operational data and takes over services from the original master device. Neighbors are unaware of the switchover.
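The split described in the bullets above can be sketched as follows: configuration and dynamic data (interfaces, neighbors, LSDBs) are backed up in real time, while routes, SPTs, and socket status are rebuilt on the new master. The category names and dict model are illustrative assumptions, not the product's internal structures.

```python
# Data categories synchronized in real time to the standby control plane.
BACKED_UP = {"configuration", "interfaces", "neighbors", "LSDBs"}

def after_switchover(active_state):
    """Sketch of the state the new master holds after a switchover."""
    # Backed-up data is restored as-is on the new master.
    restored = {k: v for k, v in active_state.items() if k in BACKED_UP}
    # Routes and SPTs are not backed up; they are recalculated from the
    # restored LSDBs. Packet sockets are simply reopened, so their
    # status is dropped rather than restored.
    restored["routes"] = "recomputed from LSDBs"
    restored["SPTs"] = "recomputed from LSDBs"
    return restored
```

Because everything not backed up is derivable from the LSDBs, neighbors see no protocol disruption during the switchover.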

For detailed NSR description, see "NSR Configuration" in the CX11x&CX31x&CX91x Series switch modules Configuration Guide - Reliability.

Updated: 2019-08-09

Document ID: EDOC1000041694
