Underlay Network Design
Network Management Zone Design
If an independent data center equipment room is present on a large or midsize campus network, software systems such as iMaster NCE-Campus can be installed directly on servers in the equipment room. During the installation, make sure that the server gateway can communicate with the campus intranet. This section describes the basic server networking design for communication between these software systems and the campus intranet, as shown in Figure 2-78.
- A stacked Layer 3 switch functions as the server gateway and is directly connected to software servers and the core switch cluster.
- iMaster NCE-Campus and iMaster NCE-CampusInsight use the minimum cluster size and two network planes.
- On the stacked Layer 3 switch, VLANs are created to isolate all network planes on the servers. The gateway interface of each network plane is the VLANIF interface of the given VLAN.
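As a minimal sketch of this design, the following commands create the VLAN for one network plane on the server gateway and configure its VLANIF interface as the gateway of that plane. VLAN 100 and the gateway address 192.168.100.1/24 are illustrative values, not taken from an actual plan.
<Switch> system-view
[Switch] vlan batch 100
[Switch] interface vlanif 100
[Switch-Vlanif100] ip address 192.168.100.1 255.255.255.0
[Switch-Vlanif100] quit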
Server and Gateway Interconnection Design
To ensure network reliability, multiple NICs on each server are bonded into a single logical interface, and the server connects to the server gateway through this bonded interface. NIC bonding modes include active-backup and load balancing. The configurations for connecting servers to their gateway differ slightly between the two modes.
Active-backup mode
In this mode, one NIC interface in the bonded interface is in the active state, and the other is in the backup state. All data is transmitted through the active NIC interface. If the link corresponding to the active NIC interface fails, data is transmitted through the backup NIC interface. In this case, the Layer 3 switch functioning as the server gateway connects to the two NIC interfaces on a server through two physical ports. The physical ports do not need to be aggregated; it is recommended that they be added to the VLAN of the corresponding network plane as access ports. As shown in Figure 2-79, add the physical ports (GE1/0/1 and GE2/0/1) on the switch to VLAN 100 using the following commands.
<Switch> system-view
[Switch] vlan batch 100
[Switch] interface gigabitethernet 1/0/1
[Switch-GigabitEthernet1/0/1] port link-type access
[Switch-GigabitEthernet1/0/1] port default vlan 100
[Switch-GigabitEthernet1/0/1] quit
[Switch] interface gigabitethernet 2/0/1
[Switch-GigabitEthernet2/0/1] port link-type access
[Switch-GigabitEthernet2/0/1] port default vlan 100
[Switch-GigabitEthernet2/0/1] quit
Load balancing mode
In this mode, multiple NICs of a server transmit data packets based on the specified hash policy. To enable server-switch interconnection, you need to configure an Eth-Trunk interface in manual mode on the Layer 3 switch functioning as the server gateway, connect the Eth-Trunk interface to the bonded interface on the server, and then add the Eth-Trunk interface to the VLAN of the corresponding network plane in access mode. As shown in Figure 2-80, add Eth-Trunk 1 on the switch to VLAN 100 using the following commands.
<Switch> system-view
[Switch] vlan batch 100
[Switch] interface eth-trunk 1
[Switch-Eth-Trunk1] trunkport gigabitethernet 1/0/1 2/0/1
[Switch-Eth-Trunk1] port link-type access
[Switch-Eth-Trunk1] port default vlan 100
[Switch-Eth-Trunk1] quit
Design for Communication Between the Network Management Zone and Campus Intranet
In the virtualization solution for large- and medium-sized campus networks, the network management zone needs to communicate with the device management subnet on the underlay network and with the user subnet on the overlay network. This ensures that each software system can manage devices on the campus intranet and can communicate with the user subnet to implement service functions. Table 2-46 lists the common software systems that need to communicate with the campus intranet in this solution.
| Communication Type | Software System | Description |
|---|---|---|
| Communication with the device management subnet on the underlay network | iMaster NCE-Campus | Manages devices on the campus intranet, and configures and provisions services. Devices need to interconnect with iMaster NCE-Campus. |
| Communication with the device management subnet on the underlay network | iMaster NCE-CampusInsight | Performs intelligent O&M on the campus intranet. Devices need to interconnect with iMaster NCE-CampusInsight and report performance data to it. |
| Communication with the user subnet on the overlay network | iMaster NCE-Campus | Functions as the NAC server for user access authentication. The user subnet must be able to communicate with iMaster NCE-Campus. |
| Communication with the user subnet on the overlay network | DHCP server | Dynamically assigns IP addresses to user terminals. The user subnet must be able to communicate with the DHCP server. |
When the network management zone adopts the basic networking design, the topology between the gateway in the network management zone and the core switch cluster is stable and only a few network segments are required for communication. In this case, you are advised to configure static routes between the gateway in the network management zone and the core switch cluster. As illustrated in Figure 2-81, the static routes are planned as follows (a configuration sketch follows the list):
- Two VLANIF interfaces are separately planned on the gateway in the network management zone as well as on the core switch. One (VLANIF 500 in the figure) is used for communication between the network management zone and the device management subnet on the underlay network, and the other (VLANIF 600 in the figure) for communication between the network management zone and the user subnet on the overlay network.
- For communication between the network management zone and the device management subnet on the underlay network:
- On the core switch: Configure a static route destined for the network management zone. The destination network segment is the network segment where the software systems (for example, iMaster NCE-Campus and iMaster NCE-CampusInsight in the figure) that need to communicate with the device management subnet resides. The next hop of the static route is the IP address of VLANIF 500 on the gateway in the network management zone.
- On the gateway in the network management zone: Configure a route destined for the device management subnet on the underlay network. The destination network segment is the device management network segment, and the next hop is the IP address of VLANIF 500 on the core switch.
- For communication between the network management zone and the user subnet on the overlay network:
- On the core switch: When creating network service resources for a fabric, configure the IP addresses of the connected network service resources as well as the VLANs and IP addresses for interconnecting with the gateway in the network management zone on the core switch that functions as the border node. After the configuration is complete, the core switch imports routes between the virtual routing and forwarding (VRF) instance that represents the network service resource and the VRF instance that represents a VN. In addition, the core switch creates a private static route destined for the network management zone in the VRF instance that represents the network service resource. The destination network segment of this static route is the network segment where the software system that needs to communicate with the user subnet resides, such as iMaster NCE-Campus or the DHCP server in the figure.
- On the gateway in the network management zone: Configure a static route destined for the user subnet on the overlay network. The destination network segment is the user network segment, and the next hop is the IP address of VLANIF 600 on the core switch.
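As a minimal sketch of these static routes, assume the software systems that communicate with the device management subnet reside on 192.168.10.0/24, the device management subnet is 192.168.20.0/24, the user subnet is 192.168.30.0/24, VLANIF 500 uses 10.1.1.1 on the core switch and 10.1.1.2 on the gateway in the network management zone, and VLANIF 600 uses 10.1.2.1 on the core switch (device names and all addresses are illustrative). The private static route that the core switch creates in the VRF instance of the network service resource is generated by iMaster NCE-Campus and is not shown.
# On the core switch: route to the network management zone for the device management subnet.
<CoreSwitch> system-view
[CoreSwitch] ip route-static 192.168.10.0 255.255.255.0 10.1.1.2
# On the gateway in the network management zone: routes to the device management subnet and the user subnet.
<MgmtGW> system-view
[MgmtGW] ip route-static 192.168.20.0 255.255.255.0 10.1.1.1
[MgmtGW] ip route-static 192.168.30.0 255.255.255.0 10.1.2.1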
Deployment Design
Deployment Configuration Mode Planning
Table 2-47 lists the network devices involved in the virtualization solution for a large or midsize campus network and the recommended deployment configuration modes.
| Location | Device | Recommended Deployment Configuration Mode | Description |
|---|---|---|---|
| Network management zone | Switch (gateway in the network management zone) | Local CLI or web system | Generally, you need to configure the switch before installing software systems in the network management zone. |
| Egress network | Firewall | Local CLI or web system | Generally, firewalls are deployed in a core equipment room and have complex service configurations. Therefore, local configuration is recommended. |
| Core layer | Core switch | Centralized configuration on iMaster NCE-Campus after going online on it through the CLI | Generally, core switches are deployed in a core equipment room. After a core switch goes online on iMaster NCE-Campus through the CLI, it can be used as the root device of management subnets below the core layer to implement plug-and-play of devices below the core layer. |
| Aggregation layer | Aggregation switch (native WAC) | Centralized configuration on iMaster NCE-Campus | A large number of aggregation switches are deployed at scattered locations. You are advised to onboard aggregation switches on iMaster NCE-Campus through DHCP to implement plug-and-play deployment. When aggregation switches function as native WACs, you can log in to the web system or CLI of the aggregation switch to configure wireless services. |
| Access layer | Access switch | Centralized configuration on iMaster NCE-Campus | A large number of access switches are deployed at scattered locations. You are advised to onboard access switches on iMaster NCE-Campus through DHCP to implement plug-and-play deployment. |
| Access layer | AP | Centralized configuration on the WAC | Generally, the "WAC + Fit AP" architecture is used for the WLAN of a large or midsize campus network and APs are managed by the WAC in a centralized manner. A large number of APs are deployed at scattered locations. You are advised to onboard APs on the WAC through DHCP to implement plug-and-play deployment. |
Management VLAN Communication Design for Devices Below the Core Layer
In the distributed gateway solution, if the fabric uses the recommended networking of VXLAN deployed across core and aggregation layers and edge nodes that provide the native WAC function are used, policy association is deployed between aggregation switches (edge nodes) and access switches. In this way, an AP connected to an access switch can establish a CAPWAP tunnel with an edge node through the management VLAN for policy association and go online on the edge node. No additional management VLAN is required. For details about the AP join process design, see "AP Join Process Design" in WLAN Design.
Design of management VLAN communication for initial onboarding in plug-and-play mode
On a large or midsize campus network, you are advised to deploy devices below the core layer in plug-and-play mode through DHCP to onboard aggregation and access switches on iMaster NCE-Campus and APs on the WAC. Management VLAN communication is critical for onboarding devices below the core layer. Two methods are available (a device-side configuration sketch of the delivered DHCP settings follows Figure 2-83):
- Use default VLAN 1 as the management VLAN, as shown in Figure 2-82.
- The core switch goes online on iMaster NCE-Campus through the CLI.
- On iMaster NCE-Campus, configure VLANIF 1 on the core switch as the gateway interface of the management subnet, configure a DHCP address pool, and configure DHCP Option 148 to carry the southbound IP address of iMaster NCE-Campus.
- By default, all interfaces on a device are added to VLAN 1 before delivery. Therefore, devices at the core, aggregation, and access layers can communicate with each other in VLAN 1.
- The aggregation and access switches obtain the southbound IP address of iMaster NCE-Campus through VLAN 1 and go online on iMaster NCE-Campus.
Figure 2-82 Using default VLAN 1 for plug-and-play deployment of devices below the core layer
- Use an auto-negotiated management VLAN, as shown in Figure 2-83.
If VLAN 1 is used as the management VLAN, broadcast storms may occur. To avoid this, you can enable management VLAN auto-negotiation to configure another VLAN as the management VLAN. In this example, VLAN 100 is the auto-negotiated management VLAN. The plug-and-play onboarding process of devices below the core layer is as follows:
- The core switch goes online on iMaster NCE-Campus through the CLI.
- On iMaster NCE-Campus, configure VLANIF 100 on the core switch as the gateway interface of the management subnet, configure a DHCP address pool, and configure DHCP Option 148 to carry the southbound IP address of iMaster NCE-Campus.
- Configure the core switch as the root device and use the management VLAN auto-negotiation function to enable management VLAN communication for devices below the core layer. The process is as follows:
- On iMaster NCE-Campus, enable the management VLAN auto-negotiation function on the core switch and configure VLAN 100 as the auto-negotiated management VLAN.
- After the core switch is configured, aggregation switches automatically add their interfaces to VLAN 100 through protocol packet auto-negotiation.
- After the management channels between the core and aggregation switches are established, access switches automatically add their interfaces to VLAN 100 through protocol packet auto-negotiation.
- The aggregation and access switches obtain the southbound IP address of iMaster NCE-Campus through VLAN 100 and go online on iMaster NCE-Campus.
Figure 2-83 Using an auto-negotiated management VLAN for plug-and-play deployment of devices below the core layer
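In either method, the management subnet gateway, DHCP address pool, and Option 148 settings configured on iMaster NCE-Campus are delivered to the core switch. The following sketch shows roughly equivalent device configuration for the auto-negotiated VLAN 100 case (use VLANIF 1 instead for the first method). The management subnet 192.168.200.0/24, the southbound IP address 192.168.10.1, and the exact Option 148 string are illustrative assumptions; the actual Option 148 format depends on the iMaster NCE-Campus version.
<CoreSwitch> system-view
[CoreSwitch] dhcp enable
[CoreSwitch] interface vlanif 100
[CoreSwitch-Vlanif100] ip address 192.168.200.1 255.255.255.0
[CoreSwitch-Vlanif100] dhcp select global
[CoreSwitch-Vlanif100] quit
[CoreSwitch] ip pool mgmt
[CoreSwitch-ip-pool-mgmt] network 192.168.200.0 mask 255.255.255.0
[CoreSwitch-ip-pool-mgmt] gateway-list 192.168.200.1
[CoreSwitch-ip-pool-mgmt] option 148 ascii agilemode=agile-cloud;agilemanage-mode=ip;agilemanage-domain=192.168.10.1;agilemanage-port=10020
[CoreSwitch-ip-pool-mgmt] quit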
Design of management VLAN switching after device onboarding
After a device goes online in plug-and-play mode for the first time, the interconnection interfaces on the device are added to a management VLAN. If there are a large number of network devices on the campus network, broadcast storms may occur even if an auto-negotiated management VLAN is used. In this case, you are advised to plan multiple management VLANs and switch management VLANs for devices after their initial onboarding in plug-and-play mode to separate broadcast domains of these devices.
Note: Before switching the management VLAN, add the interconnection interfaces on the core switch and devices below the core layer to the new management VLAN. In this way, devices below the core layer will not fail to go online due to communication failures with the core switch on the new management VLAN.
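For example, before switching the management VLAN of downstream devices from VLAN 100 to VLAN 200, make sure the interconnection interfaces already allow the new VLAN, as in the following sketch (interface numbers and VLAN IDs are illustrative; in practice this configuration is delivered by iMaster NCE-Campus):
<CoreSwitch> system-view
[CoreSwitch] vlan batch 200
[CoreSwitch] interface gigabitethernet 1/0/10
[CoreSwitch-GigabitEthernet1/0/10] port link-type trunk
[CoreSwitch-GigabitEthernet1/0/10] port trunk allow-pass vlan 100 200
[CoreSwitch-GigabitEthernet1/0/10] quit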
iMaster NCE-Campus-based Deployment Process Design
On a large or midsize campus network, there are a large number of switches and APs. Two iMaster NCE-Campus-based deployment roadmaps apply to this scenario, as shown in Figure 2-19. It is recommended that the administrator plan the network first, import planning information to iMaster NCE-Campus, and then configure device onboarding modes for deployment. This reduces the deployment workload. If the network cannot be planned in advance, the administrator can onboard devices and then determine the network topology.
Onboard devices and then determine the network topology
In this mode, configure device information such as ESNs on iMaster NCE-Campus first, bring devices online, and then configure physical links. If these devices need to be connected through aggregated links (Eth-Trunks), you can manually create such links. The process of deploying switches and APs in this mode is as follows:
- Create a site that represents the campus network.
- Enter device ESNs to add devices to this site. Enter AP ESNs and associate APs with the WAC on iMaster NCE-Campus.
- Configure device management to bring devices online.
- Connect the core switch to iMaster NCE-Campus through the CLI.
- Deploy aggregation and access switches in plug-and-play mode through DHCP and bring them online on iMaster NCE-Campus.
- In the distributed gateway solution, APs can go online on edge nodes (WACs) through a management VLAN for policy association.
- Set up required switch stacks in advance, add the stacks to this site, and synchronize information such as stack IDs and priorities. Stacks can be configured manually before device management, or added to the site through active detection by iMaster NCE-Campus after device management.
- After devices go online, you can manually create Eth-Trunk interfaces on the devices as required.
Import the network plan and then onboard devices
In this mode, you can plan the network first and then import the planned basic network information, including device ESNs, stack information, and Eth-Trunk information, to iMaster NCE-Campus in a batch using a network plan import template. In this way, the network topology can be pre-configured, reducing the deployment workload. The process of deploying switches and APs in this mode is as follows:
- Create a site that represents the campus network.
- Fill in the network plan import template to import the following device, stack, and link information to this site in a batch.
- Switch and AP information, including device ESNs, models, and roles
- Switch stack information, including stack system names, stack IDs, and priorities
- Switch Eth-Trunk information, including the upstream and downstream switch names, upstream and downstream physical member port numbers, and upstream and downstream Eth-Trunk interface names
- Configure device management to bring devices online.
- Connect the core switch to iMaster NCE-Campus through the CLI.
- Deploy aggregation and access switches in plug-and-play mode through DHCP and bring them online on iMaster NCE-Campus.
- In the distributed gateway solution, APs can go online on edge nodes (WACs) through a management VLAN for policy association.
- After switches go online, iMaster NCE-Campus automatically delivers the imported Eth-Trunk configurations to the switches.
Automatic Intranet Route Orchestration Design
The virtualization solution for large- and medium-sized campus networks supports automatic underlay route orchestration. With this function, iMaster NCE-Campus can automatically configure OSPF routes, divide OSPF areas, and deliver interface configurations to switches from the access layer to the core layer based on the physical network topology. The physical network topology is either imported to iMaster NCE-Campus from the network plan or learned automatically by iMaster NCE-Campus.
Automatic underlay route orchestration falls into single-area orchestration and multi-area orchestration, as shown in Figure 2-86.
When there are fewer than 100 switches in a network area where routes need to be deployed on the underlay network, single-area orchestration is recommended (a configuration sketch follows the list below).
- All switches between the border and edge nodes on the fabric support automatic orchestration of OSPF routes. These devices refer to all aggregation and core switches if VXLAN is deployed across the core and aggregation layers, and refer to all core, aggregation, and access switches if VXLAN is deployed across the core and access layers.
- All switches between the border and edge nodes on the fabric are planned in area 0.
- Different VLANIF interfaces are planned on all switches for interconnection through OSPF. The interconnected Layer 2 interfaces allow packets from the corresponding VLANs to pass through.
- When configuring a fabric, you need to create loopback interfaces on the switches that function as border and edge nodes for establishing BGP EVPN peer relationships. Routes on the network segments where the loopback interface IP addresses reside are also advertised to area 0.
- If a Layer 2 switch is required for interconnection between the border and edge nodes and performs transparent transmission between them, this Layer 2 switch cannot be the core or aggregation switch. (When adding a switch to a site on iMaster NCE-Campus, you can set the switch role.) After the automatic OSPF route orchestration function is enabled, interfaces connecting this Layer 2 switch to the border and edge nodes allow packets from the corresponding VLAN to pass through.
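The orchestrated result on an edge node is roughly equivalent to the following sketch, assuming VLANIF 300 (10.2.1.2/30) is the interconnection interface toward the upstream node and LoopBack0 (10.255.1.1/32) is the interface used for BGP EVPN peering (all interface numbers and addresses are illustrative):
[EdgeNode] interface loopback 0
[EdgeNode-LoopBack0] ip address 10.255.1.1 255.255.255.255
[EdgeNode-LoopBack0] quit
[EdgeNode] interface vlanif 300
[EdgeNode-Vlanif300] ip address 10.2.1.2 255.255.255.252
[EdgeNode-Vlanif300] quit
[EdgeNode] ospf 1
[EdgeNode-ospf-1] area 0
[EdgeNode-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.3
[EdgeNode-ospf-1-area-0.0.0.0] network 10.255.1.1 0.0.0.0
[EdgeNode-ospf-1-area-0.0.0.0] quit
[EdgeNode-ospf-1] quit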
When there are more than 100 switches in a network area where routes need to be deployed on the underlay network, multi-area orchestration is recommended (a configuration sketch follows the list below).
- All switches between the border and edge nodes on the fabric support automatic orchestration of OSPF routes. These devices refer to all aggregation and core switches if VXLAN is deployed across the core and aggregation layers, and refer to all core, aggregation, and access switches if VXLAN is deployed across the core and access layers.
- The core switch is planned in area 0. Each downlink VLANIF interface on the core switch and the aggregation and access switches connected through that VLANIF interface are planned together in one area.
- Different VLANIF interfaces are planned on all switches for interconnection through OSPF. The interconnected Layer 2 interfaces are added to the corresponding VLANs in trunk mode.
- On the core switch that functions as a border node, routes on the network segment where its loopback interface IP address resides are advertised to area 0. On an edge node, routes on the network segment where its loopback interface IP address resides are advertised to the area to which the edge node belongs.
- If a Layer 2 switch is required for interconnection between the border and edge nodes and performs transparent transmission between them, this Layer 2 switch cannot be the core or aggregation switch. (When adding a switch to a site on iMaster NCE-Campus, you can set the switch role.) After the automatic OSPF route orchestration function is enabled, interfaces connecting this Layer 2 switch to the border and edge nodes allow packets from the corresponding VLAN to pass through.
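On the core switch (border node), multi-area orchestration roughly corresponds to the following sketch, assuming LoopBack0 (10.255.0.1/32) is advertised in area 0 and the downlink VLANIF 301 (10.3.1.1/30) toward one aggregation branch is placed in area 1 (all values are illustrative):
[CoreSwitch] ospf 1
[CoreSwitch-ospf-1] area 0
[CoreSwitch-ospf-1-area-0.0.0.0] network 10.255.0.1 0.0.0.0
[CoreSwitch-ospf-1-area-0.0.0.0] quit
[CoreSwitch-ospf-1] area 1
[CoreSwitch-ospf-1-area-0.0.0.1] network 10.3.1.0 0.0.0.3
[CoreSwitch-ospf-1-area-0.0.0.1] quit
[CoreSwitch-ospf-1] quit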