CX11x, CX31x, CX710 (Earlier Than V6.03), and CX91x Series Switch Modules V100R001C10 Configuration Guide 12

The documents describe the configuration of various services supported by the CX11x&CX31x&CX91x series switch modules. The description covers configuration examples and function configurations.
Introduction to TRILL

This section describes the definition and purpose of TRILL.


Transparent Interconnection of Lots of Links (TRILL) is a protocol that applies Layer 3 link-state routing technologies to Layer 2 networks. The TRILL protocol extends Intermediate System to Intermediate System (IS-IS) to Layer 2 to build large Layer 2 networks for data centers, providing solutions for data center services.


In the cloud computing era, a data center usually uses a distributed architecture for mass data storage, query, and search services. In this architecture, cluster computing between servers generates heavy east-west traffic. As virtualization technologies are widely used in cluster computing, each server processes far more data than before, so the throughput of a physical server increases several-fold. In addition, virtual machines (VMs) must be able to migrate dynamically within a data center to improve service reliability, reduce the costs of IT services and network operation and maintenance, and allow for more flexible service deployment.

Because of these characteristics of cloud-computing data centers, the traditional hierarchical network structure with Layer 2 access (xSTP) and Layer 3 aggregation/core (routing) layers can no longer satisfy data center requirements. Currently, a large Layer 2 fat-tree architecture is widely used in data centers. TRILL helps build a non-blocking large Layer 2 network that supports smooth VM migration and adapts to growing network scales. The following table describes the advantages of a TRILL network over a traditional network that uses xSTP and Layer 3 routing protocols.

Table 10-1 Comparison between TRILL and xSTP networks

Smooth VM migration
  • Requirement: As one of the core cloud computing technologies, server virtualization is widely used. To maximize service reliability, reduce the costs of IT services and network operation and maintenance, and allow flexible service deployment in a data center, VMs must be able to migrate dynamically anywhere within the data center, not just within a single aggregation or access switch.
  • TRILL network: Deployed on a large Layer 2 network, TRILL supports dynamic VM migration across the entire data center.
  • xSTP network: In a traditional network with Layer 2 xSTP access and Layer 3 IP routing, the IP address of a VM changes if the VM migrates to another network segment. Therefore, VMs can migrate only within the same network segment.

Non-blocking, low-delay data forwarding
  • Requirement: In a cloud-computing data center, most traffic is east-west, which differs from the traffic model on traditional carrier networks. Non-blocking, low-delay forwarding is required on data center networks to ensure normal service operation.
  • TRILL network: Each device uses the shortest path tree (SPT) algorithm to calculate the shortest paths from itself to all other nodes. If multiple equal-cost links are available, traffic is load-balanced across the corresponding unicast forwarding entries. Load balancing among equal-cost paths fully uses network bandwidth and enables line-speed forwarding on each node.
  • xSTP network: Redundant links are blocked and traffic is forwarded over a single path, which wastes bandwidth and hinders construction of a non-blocking network.

Large network scale
  • Requirement: In the cloud computing era, a large data center may need to support millions of servers. To implement non-blocking forwarding, hundreds or thousands of switches must be deployed on the data center network, so loop prevention protocols are required. When a network node or link fails, the network must converge quickly to restore services. In addition, network maintenance must be simple enough to facilitate service deployment.
  • TRILL network:
      - Network scale: theoretically supports about 1000 switches.
      - Loop prevention: uses loop-free IS-IS on the control plane, so no loops exist.
      - Convergence: uses the IS-IS routing protocol to generate forwarding entries. Moreover, the TRILL header contains a Hop Count field that confines temporary loops to a short period. These features enable subsecond convergence.
      - Network maintenance: requires only simple configuration. Many parameters, such as the nickname and system ID, can be generated automatically, and most protocol parameters can retain their default values. Only one protocol (TRILL) needs to be maintained, instead of separate unicast and multicast protocols.
  • xSTP network:
      - Network scale: supports only about 100 devices because the network diameter cannot exceed 7 hops.
      - Loop prevention: blocks redundant ports to eliminate loops.
      - Convergence: completes convergence in seconds, due to limitations of the convergence mechanism.
      - Network maintenance: requires a heavy workload because multiple routing protocols, such as an IGP and PIM, must be maintained on the network.

Multi-tenant isolation
  • Requirement: In the cloud computing era, a physical data center is shared by multiple tenants. Each tenant has a virtual data center instance, so that tenants exclusively use the server, storage, and network resources in their respective instances while the data traffic of different tenants is isolated.
  • TRILL network: Currently, TRILL uses VLAN IDs to identify tenants and isolates tenant traffic by VLAN. In the early stage of the cloud computing industry and large Layer 2 network operation, the limit of 4096 VLAN IDs is not a bottleneck. Later, TRILL will use the Fine Label field to identify tenants. The Fine Label field is 24 bits long and supports a maximum of 16M tenants, meeting future tenant growth requirements.
  • xSTP network: A maximum of 4096 tenants is supported, and the capacity cannot be expanded.

High scalability
  • Requirement: Data center networks must be highly scalable to adapt to the fast development of data centers.
  • TRILL network: A traditional xSTP-based Layer 2 network can be seamlessly connected to a TRILL-based large Layer 2 network. TRILL allows large network scales, fast convergence, and high scalability.
  • xSTP network: The network scale is small, convergence is slow, and scalability is low.
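The equal-cost load balancing described in the table can be illustrated with a short sketch: a Dijkstra shortest-path-tree computation that, instead of keeping a single predecessor per destination, records every first hop that lies on an equal-cost shortest path. The graph format, node names, and function name below are illustrative assumptions for a generic link-state topology, not a real TRILL link-state database:

```python
import heapq
from collections import defaultdict

def spt_with_ecmp(graph, source):
    """Dijkstra SPT that keeps every equal-cost first hop from the source.

    graph: {node: {neighbor: link_cost}} with positive costs (a simplified
    stand-in for the topology an RBridge learns via IS-IS).
    Returns (dist, next_hops): shortest cost per node, and the set of
    first-hop neighbors usable for ECMP load balancing per destination.
    """
    dist = {source: 0}
    next_hops = defaultdict(set)     # destination -> ECMP first hops
    pq = [(0, source, None)]         # (cost, node, first hop from source)
    seen = set()                     # (node, first_hop) pairs already handled
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")) or (node, hop) in seen:
            continue                 # stale or duplicate heap entry
        seen.add((node, hop))
        if hop is not None:
            next_hops[node].add(hop)  # this first hop is on a shortest path
        for nbr, w in graph[node].items():
            ncost = cost + w
            nhop = nbr if node == source else hop
            if ncost < dist.get(nbr, float("inf")):
                dist[nbr] = ncost
                heapq.heappush(pq, (ncost, nbr, nhop))
            elif ncost == dist[nbr]:
                # Equal-cost alternative: keep it instead of discarding it.
                heapq.heappush(pq, (ncost, nbr, nhop))
    return dist, next_hops

# A small diamond topology: two equal-cost paths from S to D.
graph = {
    "S": {"A": 1, "B": 1},
    "A": {"S": 1, "D": 1},
    "B": {"S": 1, "D": 1},
    "D": {"A": 1, "B": 1},
}
dist, hops = spt_with_ecmp(graph, "S")
print(dist["D"], sorted(hops["D"]))  # 2 ['A', 'B']
```

Because both A and B are retained as next hops toward D, traffic can be hashed across both links, which is the behavior the table contrasts with xSTP's single unblocked path.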


TRILL brings the following benefits:
  • TRILL builds large Layer 2 data center networks that support non-blocking forwarding and smooth VM migration, facilitating network management.
  • TRILL devices can be seamlessly connected to traditional xSTP networks, which reduces network upgrade costs.
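The convergence row of the table notes that the TRILL header carries a Hop Count field, which bounds how long a frame can circulate during a transient loop while IS-IS reconverges. A minimal sketch of that mechanism follows; the two-RBridge "bouncing frame" model is contrived for illustration, since a real RBridge simply decrements the field at each hop and discards the frame when it reaches zero:

```python
def forward_in_transient_loop(initial_hop_count):
    """Simulate a frame caught bouncing between two RBridges during a
    transient loop. Each forwarding step decrements the TRILL Hop Count;
    the frame is discarded once the count reaches zero.

    Returns the number of times the frame is forwarded before discard.
    """
    hop_count = initial_hop_count
    forwards = 0
    while hop_count > 0:
        hop_count -= 1   # each RBridge decrements Hop Count before forwarding
        forwards += 1
    return forwards      # loop self-terminates; no frame lives forever

# A frame injected with Hop Count 6 is forwarded at most 6 times, so a
# temporary loop lasts only until the hop budget is exhausted.
print(forward_in_transient_loop(6))  # 6
```

This bounded lifetime is why the table can tolerate "temporary loops in a short period" on a TRILL network, in contrast to xSTP, which must block ports outright to avoid loops.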
Updated: 2019-08-09

Document ID: EDOC1000041694
