CloudCampus Solution V100R019C10 Deployment Guide for Large- and Medium-Sized Campus Networks (Virtualization Scenario)

Overall Design


General Guidelines

Basic Design Guidelines

As network infrastructure, a campus network provides users with communication services and access to resources. Complex access relationships and diverse service types call for clear design principles. The following guidelines apply to campus network design:

  • Reliable: Campus networks must run stably and reliably without service interruption to ensure service experience. This requires a redundant or backup architecture for key components so that the network can quickly recover from faults.

  • Trustworthy: Campus networks must be secure and trustworthy to guarantee network and service security. This requires comprehensive security protection measures that prevent malicious damage and protect data and network security.

  • Scalable: Campus networks must support smooth upgrade and expansion to meet service requirements for the next 3 to 5 years, maximizing network value while reducing investment and avoiding resource waste. This requires on-demand deployment of new services and smooth network expansion.

  • Manageable: Campus networks must be easy to manage and maintain and must support network diagnosis and fault locating, reducing O&M difficulty and improving customer experience. This requires intelligent, proactive, and integrated management of multiple services across the entire network, real-time network health analysis, proactive prevention, and fast fault locating to reduce losses.

  • Operational: Campus networks must support flexible deployment of new services, such as Voice over Internet Protocol (VoIP), Unified Communications (UC), Telepresence, and desktop cloud.

  • Economic: The return on investment (ROI) should be maximized and the investment costs should be reduced.

Device Naming Conventions

It is recommended that a device name consist of multiple fields to accurately describe the physical location and role of the device on the campus network, as shown in Figure 2-2. To keep device names short, an abbreviation of each field is used, for example, A_B-4F_CSW_HW_S12700E-12_a.

Figure 2-2 Example of a device name
Table 2-3 Device naming conventions

Identifier

Description

A

Name of a campus site.

B

Physical location of a device on a campus network. Generally, the location is in the format of equipment room name + floor.

C

Role of a device on a campus network.

D

Device brand. For example, HW in the example indicates Huawei.

E

Device model.

F

Device number, which can be assigned in alphabetical or ascending numeric order.
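As a sketch, the convention can be expressed as a small helper that joins the six fields from Table 2-3 with underscores. The field values below are taken from the example name above:

```python
def device_name(site, location, role, brand, model, number):
    """Join the Table 2-3 fields with underscores, as in the example
    A_B-4F_CSW_HW_S12700E-12_a."""
    return "_".join([site, location, role, brand, model, number])

# Campus site A, equipment room B on floor 4, core switch, Huawei S12700E-12
print(device_name("A", "B-4F", "CSW", "HW", "S12700E-12", "a"))
# A_B-4F_CSW_HW_S12700E-12_a
```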

Interface Description Conventions

It is recommended that the description command be configured for physical interfaces to help you learn about interface connections. It is recommended that the description contain three parts, as shown in Figure 2-3.

Figure 2-3 Example of an interface description
Table 2-4 Interface description conventions

Identifier

Description

A

Direction of a local connection, for example, an interface connected to a downstream device.

B

Name of a peer device.

C

Interface of the peer device.
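As a sketch, the three parts from Table 2-4 can be concatenated programmatically. The ":" separator and the "dT" (downstream) direction abbreviation are assumptions for illustration; follow the exact format shown in Figure 2-3:

```python
def if_description(direction, peer_device, peer_interface):
    """Concatenate the three Table 2-4 parts: local connection direction,
    peer device name, and peer interface. Separator is an assumption."""
    return ":".join([direction, peer_device, peer_interface])

# Local interface facing a downstream access switch (illustrative names)
print(if_description("dT", "A_B-4F_ASW_HW_S5735_a", "GE0/0/1"))
# dT:A_B-4F_ASW_HW_S5735_a:GE0/0/1
```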

Network Architecture Design

Virtual Campus Network Architecture Overview

On a large or midsize campus network, the virtualization solution can be used to decouple services from the network, construct a multi-purpose network, and achieve flexible, fast service deployment without changing the basic network infrastructure. In this solution, the virtual campus network architecture poses requirements different from those of a traditional network architecture. Figure 2-4 illustrates the virtual campus network architecture. The underlay is the physical network layer, and the overlay is the virtual network layer constructed on top of the underlay using the Virtual Extensible LAN (VXLAN) technology.

Figure 2-4 Virtual campus network architecture

The overlay consists of the fabric and VN.

  • Fabric: a network with pooled resources abstracted from the underlay network. When creating an instantiated virtual network (VN), you can select the pooled network resources on the fabric.
    On a fabric network, VXLAN tunnel endpoints (VTEPs) are further divided into the following roles:
    • Border: border node of the fabric network. It corresponds to a physical network device and provides data forwarding between the fabric and external networks. Generally, VXLAN-capable core switches function as border nodes.
    • Edge: edge node of the fabric network, which corresponds to a physical network device. User traffic enters the fabric network from the edge node. Generally, VXLAN-capable access or aggregation switches are used as edge nodes.
  • VN: logically isolated virtual network instances (VN 1 and VN 2 in the figure) that are constructed by instantiating a fabric. Each VN corresponds to one isolated service network, for example, an R&D network.

Table 2-5 lists the resource pools on a fabric and how to invoke these resources during VN creation.

Table 2-5 Resource pools on a fabric and resource invoking methods during VN creation

Resource Pool on a Fabric

How to Invoke Resources in a Resource Pool During VN Creation

VN resource pool, which contains the number of VNs that can be created on an overlay.

Each time a VN is created, a VN resource is used.

VLAN resource pool, which is used in scenarios where terminals are connected to VNs and VNs communicate with external network resources. The VLAN resource pool is planned when configuring the fabric global resource pool.

When creating a user gateway in a VN, you can select a resource from the fabric global resource pool to configure a user VLAN.

BD/VNI resource pool, which is used when dividing Layer 2 broadcast domains in a VN and configuring corresponding VBDIF interfaces that function as the gateway interfaces of user subnets. The BD/VNI resource pool is planned when configuring the fabric global resource pool.

When a user gateway is created in a VN, resources in the BD/VNI resource pool are automatically invoked to create a BD and the corresponding VBDIF interface.

User access point resource pool, which is planned during access management configuration for a fabric. This resource pool includes the authentication modes that can be bound to access points.

When configuring user access in a VN, you can select planned access point resources.

Egress pool, which contains the external resources that can be used by VNs. Two types of external resources are created during fabric configuration:

  • External networks: used for VNs to communicate externally
  • Network service resources: used for VNs to communicate with the authentication server and DHCP server

When creating a VN, you can select external networks and network service resources.
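To make the bookkeeping in Table 2-5 concrete, the following Python sketch models a fabric whose VN and BD/VNI pools are consumed as VNs and user gateways are created. The pool sizes, names, and the class itself are illustrative assumptions; on a live network, iMaster NCE-Campus performs this resource allocation:

```python
class Fabric:
    """Minimal model of the fabric resource pools in Table 2-5."""

    def __init__(self, max_vns, bd_vni_pool):
        self.vn_quota = max_vns               # VN resource pool
        self.bd_vni_pool = list(bd_vni_pool)  # BD/VNI resource pool
        self.vns = {}

    def create_vn(self, name):
        # Each VN created uses one VN resource.
        if self.vn_quota == 0:
            raise RuntimeError("VN resource pool exhausted")
        self.vn_quota -= 1
        self.vns[name] = []                   # user gateways in this VN
        return name

    def create_user_gateway(self, vn, user_vlan):
        # Creating a user gateway automatically invokes the next
        # BD/VNI resource in sequence.
        bd = self.bd_vni_pool.pop(0)
        self.vns[vn].append({"vlan": user_vlan, "bd": bd})
        return bd

fabric = Fabric(max_vns=16, bd_vni_pool=range(100, 200))
vn = fabric.create_vn("R&D")
print(fabric.create_user_gateway(vn, user_vlan=2001))  # 100
```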

Underlay Network Architecture Design

In the virtualization solution for large- and medium-sized campus networks, the physical network uses the network planning for traditional large- and medium-sized campus networks, and typically adopts a tree-type network architecture with the core layer as the root. This type of architecture features stable topology and is easy to expand and maintain. As illustrated in Figure 2-5, the campus network is composed of the access layer, aggregation layer, and core layer, as well as some functional zones. Modules in each functional zone are clearly defined, and the internal adjustment of each module is limited to a small scope, facilitating fault location.

Figure 2-5 Physical network in the virtualization solution for large- and medium-sized campus networks

Table 2-6 Physical network layers and functional zones

Name

Description

Terminal layer

The terminal layer involves various terminals that access the campus network, such as PCs, printers, IP phones, mobile phones, and cameras.

Access layer

The access layer provides various access modes for users and is the first network layer to which terminals connect. At the access layer, there are a large number of access switches that are sparsely distributed in different places. If wireless terminals are present at the terminal layer, wireless access points (APs) need to be deployed at the access layer and access the network through access switches.

Aggregation layer

The aggregation layer sits between the core and access layers. It forwards horizontal traffic (east-west traffic) between users and forwards vertical traffic (north-south traffic) to the core layer. The aggregation layer can also function as the switching core for a department or zone and connect the department or zone to a dedicated server zone. In addition, the aggregation layer can further extend the quantity of access terminals.

Core layer

The core layer is the core of data exchange on a campus network. It connects to various components of the campus network, such as the DC/network management zone, aggregation layer, and campus egress. The core layer is responsible for high-speed interconnection of the entire campus network. High-performance core switches need to be deployed to meet network requirements for high bandwidth and fast convergence upon network faults. It is recommended that the core layer be deployed for any campus with more than three departments.

Egress network

The campus egress is the boundary that connects a campus network to an external network. Internal users of the campus network can access the external network through the campus egress zone, and external users can access the internal network through the campus egress zone. Firewalls need to be deployed in the campus egress zone to provide perimeter security protection.

Network management zone

The network management zone is the server zone where the O&M and management systems are deployed. In the virtualization solution for large- and medium-sized campus networks, the following systems are deployed:

  • iMaster NCE-Campus: campus network automation engine. It is used to provision service configurations for network devices; provides open APIs for integration with third-party platforms; and can function as an authentication policy server to deliver authentication, authorization, accounting (AAA) and free mobility services.
  • iMaster NCE-CampusInsight: intelligent campus network analytics engine, which provides intelligent O&M services by utilizing Telemetry, big data, and intelligent algorithms.
  • DHCP server: dynamically assigns IP addresses to user clients.

Hierarchical Physical Network Architecture Planning

In practice, you can flexibly select the two-layer or three-layer architecture for the physical network based on the network scale or service requirements, as shown in Figure 2-6.

The campus network involving one building usually uses the two-layer architecture, that is, only the access layer and aggregation layer are required. A large-scale campus network (such as a university campus network) that involves multiple buildings usually uses the three-layer architecture that consists of the access, aggregation, and core layers.

Figure 2-6 Hierarchical physical network architecture

During network design, you can use the bottom-up method to determine the layered architecture depending on the network scale, as illustrated in Figure 2-7.

Figure 2-7 Layered network architecture design

In the preceding calculations, the calculation results need to be rounded up.
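The bottom-up sizing can be sketched as a rounding-up calculation at each layer. The terminal count and per-device capacities below are illustrative assumptions, not values from this guide:

```python
import math

# Bottom-up layered sizing: round up at each step, per the note above.
terminals = 5000                  # access terminals on the campus (assumed)
ports_per_access_switch = 48      # downlink ports per access switch (assumed)
access_per_aggregation = 20       # access switches per aggregation pair (assumed)

access_switches = math.ceil(terminals / ports_per_access_switch)
aggregation_switches = math.ceil(access_switches / access_per_aggregation)
print(access_switches, aggregation_switches)  # 105 6
```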

Overlay Network Architecture Design

The overlay network architecture design is to design the fabric network. As demonstrated in Figure 2-8, a fabric network can adopt a two-layer or three-layer network architecture based on physical network layers. The three-layer network architecture supports two VXLAN networking types: VXLAN deployed across core and aggregation layers and VXLAN deployed across core and access layers.

Figure 2-8 Fabric networking

In the centralized gateway solution, different fabric networking types can be selected based on the following campus network scenarios:

  • Campus network reconstruction: It is recommended that VXLAN be deployed across the core and aggregation layers. In this scenario, existing low-end access switches that do not support VXLAN can still be used.
  • New campus network deployment: It is recommended that VXLAN be deployed across the core and access layers. In this scenario, virtualization can be deployed on the entire network to implement automatic overlay deployment.

Network Resource Planning

VLAN/BD Planning

BD Resource Planning

In a VN, a Layer 2 broadcast domain is constructed based on bridge domains (BDs). In a BD, user terminals in different geographical locations can communicate with each other. In the virtualization solution for large- and medium-sized campus networks, BD resource planning guidelines are as follows:

  • 1:1 mapping between BDs and user service VLANs is recommended, as shown in Figure 2-9.
  • In a VN, each time a VXLAN user gateway is created, a BD is automatically invoked from the global BD resource pool of the fabric in sequence. You do not need to consider how to divide a BD. Instead, you only need to consider how to assign user service VLANs.
  • BD resources in the BD resource pool must be sufficient to support user service VLAN assignment.
Figure 2-9 Association between a BD and a service VLAN
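The recommended 1:1 mapping can be sketched as follows: each user service VLAN is bound to the next BD drawn in sequence from the global BD pool. The VLAN IDs and BD range are illustrative assumptions:

```python
# 1:1 binding of user service VLANs to BDs, drawn sequentially from the
# fabric's global BD resource pool (range is illustrative).
service_vlans = [2001, 2002, 2003]
bd_pool = iter(range(100, 4000))
vlan_to_bd = {vlan: next(bd_pool) for vlan in service_vlans}
print(vlan_to_bd)  # {2001: 100, 2002: 101, 2003: 102}
```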

VLAN Resource Planning

In the virtualization solution for large- and medium-sized campus networks, a large Layer 2 broadcast domain can be constructed based on BDs. However, user terminals still connect to campus networks through VLANs, and the VLANs are bound to BDs. In addition, campus networks need to be interconnected through VLANs. The virtualization solution for large- and medium-sized campus networks complies with the VLAN planning guidelines of traditional campus networks:

  • Assign VLANs based on service zones.
  • Assign a VLAN to each service type.
  • Allocate consecutive VLAN IDs to ensure proper use of VLAN resources.
  • Reserve a specific number of VLANs for future use.

VLANs are classified into service, management, and interconnection VLANs. For details about the VLAN planning recommendations, see Table 2-7.

Table 2-7 VLAN resource planning recommendations

Category

Recommendations

Service VLAN

Assign VLANs based on logical areas, organizational structures, and service types.

  • Assign VLANs based on logical areas. For example, VLANs 200 to 999 are used in the server zone, and VLANs 2000 to 3499 are used on the access network.
  • Assign VLANs based on organizational structures. For example, department A uses VLANs 2000 to 2499, and department B uses VLANs 2500 to 2999.
  • Assign VLANs based on service types. For example, employees in department A use VLANs 2000 to 2099, and dumb terminals in department A use VLANs 2100 to 2199.

Management VLAN

A management VLAN is used to manage network devices on the campus network. The following describes the management VLAN planning for network devices at different layers and in different functional zones:

  • Servers in the network management zone: If the network is not a data center network and has a small number of servers, it is recommended that all servers be added to the same management VLAN.
  • Switches in the network management zone: If the network is not a data center network and has a small number of switches, you are advised to use physical management interfaces to manage these switches, removing the need to plan a management VLAN.
  • Egress network devices: You are advised to use Layer 3 service interfaces as management interfaces, removing the need to plan a management VLAN.
  • Core switches: You are advised to plan a separate management VLAN and use the VLANIF interface of the management VLAN as the management interface. iMaster NCE-Campus manages core switches through the management interface.
  • Devices below the core layer: You are advised to plan one or more management VLANs based on the device scale and use the VLANIF interface of the management VLAN as the management interface. iMaster NCE-Campus manages devices through the management interface.
    • If a small number of devices are deployed, it is recommended that all aggregation switches, access switches, and APs use the same management VLAN.
    • If a large number of devices are deployed, it is recommended that all aggregation switches and access switches use the same management VLAN and all APs use the same management VLAN.
    • If a large number of devices are deployed, you are advised to plan device groups based on network layers. Each device group uses the same management VLAN. For example, each aggregation switch and its connected downstream devices are grouped into a device group and use the same management VLAN.

Interconnection VLAN

In the virtualization solution for large- and medium-sized campus networks, VLANs are required for interconnection on both the underlay and overlay networks.

  • Underlay network: involves the VLAN for interconnection between core switches and the network management zone (in most cases, the interconnection VLAN is the management VLAN of core switches), VLAN for interconnection between egress devices and other devices, and VLAN for interconnection between devices at the core layer and lower layers (used for automatic OSPF route orchestration).
  • Overlay network: involves the VLAN for interconnection between the core switches functioning as border nodes and external networks and the VLAN for interconnection between border nodes and network service resources.

In Access Control Design, if policy association is required between the authentication control point and authentication enforcement point, you need to plan a management VLAN for policy association to establish a Control and Provisioning of Wireless Access Points (CAPWAP) tunnel between the authentication control point and authentication enforcement point.

IP Address Planning

IP address planning should comply with the following guidelines:

  • Uniqueness: Each host on an IP network must have a unique IP address. Even if Multiprotocol Label Switching (MPLS) or virtual private networks (VPNs) are used, it is recommended that different virtual routing and forwarding (VRF) instances use different IP addresses.
  • Contiguousness: Node addresses of the same service must be contiguous to facilitate route planning and summarization. Contiguous addresses facilitate route summarization, reducing the size of the routing table and speeding up route calculation and convergence. An aggregation switch may connect to multiple network segments. When allocating IP addresses, ensure that routes of these network segments can be summarized to reduce the number of routes on core devices.
  • Scalability: IP addresses need to be reserved for devices at each layer. When the network is expanded, no address segments or routes need to be added.
  • Easy maintenance: Device and service address segments need to be clearly distinguished from each other to facilitate subsequent statistics collection, monitoring, and security protection based on address segments. If IP addresses are planned properly, you can determine the device to which an IP address belongs. IP address planning can also be associated with VLAN planning. For example, the third octet of an IP address can match the last three digits of a VLAN ID, which is easy to remember and facilitates management.
  • It is recommended that internal hosts on a campus network use private IP addresses, and NAT devices be deployed at the campus egress to translate private IP addresses into public IP addresses so that internal hosts can access public networks. A few devices in the DMZ and the Internet zone use public IP addresses.
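The association between IP and VLAN planning mentioned above can be sketched with the standard ipaddress module. The 10.10.0.0/16 supernet is an illustrative assumption, and the scheme only works when the last three digits of the VLAN ID fit in one octet (0 to 255):

```python
import ipaddress

def subnet_for_vlan(vlan_id, supernet="10.10.0.0/16"):
    """Derive a /24 user subnet whose third octet equals the last three
    digits of the VLAN ID (valid only for suffixes 0-255)."""
    base = ipaddress.ip_network(supernet)
    third_octet = vlan_id % 1000  # last three digits of the VLAN ID
    return ipaddress.ip_network(f"{base.network_address + third_octet * 256}/24")

print(subnet_for_vlan(2001))  # 10.10.1.0/24
print(subnet_for_vlan(2099))  # 10.10.99.0/24
```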

IP addresses on a campus network include management IP addresses, interconnection IP addresses, service IP addresses, and loopback interface addresses, as shown in Table 2-8.

Table 2-8 IP address planning recommendations

Category

Recommendation

Management IP address

Used for communication with iMaster NCE-Campus or for a local login. It is recommended that devices in the same management VLAN use IP addresses on the same IP address segment. The management IP addresses of network devices at different layers and in different functional zones are planned as follows:

  • Management IP addresses of servers in the network management zone, which are configured on the iBMC management interface
  • Management IP addresses of switches in the network management zone
  • Management IP addresses of egress devices. It is recommended that a service interface be used as the management interface, removing the need to plan the management interface separately.
  • Management IP addresses of core switches
  • Management IP addresses of devices below the aggregation layer

Interconnection IP address

An interconnection address is the IP address of an interface that connects to another device's interface. It is recommended that a 30-bit mask be used for interconnection addresses, with the device closer to the core using the smaller of the two addresses on each link. Interconnection addresses are usually summarized before being advertised, so plan contiguous IP addresses that can be summarized. In the virtualization solution for large- and medium-sized campus networks, interconnection IP addresses are required on both the underlay and overlay networks.

  • Underlay network: includes the IP addresses for interconnection between core switches and the network management zone, IP addresses for interconnection between egress devices and other devices, and IP addresses for interconnection between devices at the core layer and devices at lower layers (for automatic OSPF route orchestration). Generally, an interconnection VLANIF interface is the management VLANIF interface of a core switch.
  • Overlay network: includes the IP addresses for interconnection between the core switches functioning as border nodes and external networks and IP addresses for interconnection between the core switches functioning as border nodes and network service resources.

Service IP address

A service address is the IP address of a server, service terminal, or gateway. It is recommended that gateway addresses use a consistent suffix, for example, .254. The IP address range of each service must be clearly distinguished, and the IP addresses of each type of service terminal must be contiguous and summarizable. Considering broadcast domain size and ease of planning, it is recommended that an address segment with a 24-bit mask be reserved for each service. If the number of service terminals exceeds 200, assign an extra address segment with a 24-bit mask.

Loopback interface address

A loopback interface address is often specified as the source address of packets to improve network reliability. The virtualization solution for large- and medium-sized campus networks uses the VXLAN technology. The control plane of VXLAN uses BGP EVPN for interaction, requiring loopback interfaces to be used to establish BGP peer relationships between VTEPs (border or edge nodes).
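The /30 interconnection recommendation in Table 2-8 can be sketched as follows: carve point-to-point links from a dedicated block and give the core-side device the smaller address on each link. The 10.255.0.0/24 block is an illustrative assumption:

```python
import ipaddress

def p2p_links(block="10.255.0.0/24"):
    """Yield (/30 link, core-side address, downstream address) tuples;
    the core-side device takes the smaller address on each link."""
    for link in ipaddress.ip_network(block).subnets(new_prefix=30):
        core_ip, downstream_ip = list(link.hosts())
        yield link, core_ip, downstream_ip

link, core_ip, downstream_ip = next(p2p_links())
print(link, core_ip, downstream_ip)  # 10.255.0.0/30 10.255.0.1 10.255.0.2
```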


DHCP Planning

In the virtualization solution for large- and medium-sized campus networks, the DHCP function is required for management subnet design during device deployment and for user subnet design in a VN, as shown in Figure 2-10.

Figure 2-10 Application scenario for DHCP

DHCP Planning for the Management Subnet

A large or midsize campus network often has a large number of devices below the core layer. During device deployment, you are advised to plan a DHCP server dedicated to management address allocation of devices. DHCP planning for the management subnet is as follows:

  • It is recommended that a core switch be used as the DHCP server of the management subnet and an IP address pool be configured on the gateway interface of the management subnet.
  • Configure DHCP Option 148 to contain iMaster NCE-Campus address information.
  • If the gateway interface of the management subnet also functions as the gateway interface of the AP management subnet, you are advised to configure DHCP Option 43 to contain WAC address information.

DHCP Planning for the User Subnet

It is recommended that a separate DHCP server be planned for large- and medium-sized campus networks to allocate IP addresses to user terminals. DHCP planning for the user subnet is as follows:

  • It is recommended that a DHCP server be planned for the entire campus network to simplify O&M.
  • In most cases, the DHCP server and hosts on a large or midsize campus network are on different network segments. You are advised to enable the DHCP relay function on user gateways.
  • You are advised to configure DHCP snooping in the BD where a user gateway belongs to ensure that user terminals obtain IP addresses from a valid DHCP server and prevent attacks. In addition, if DHCP options are used for terminal identification, DHCP snooping also needs to be configured.
  • With dynamic IP address allocation through DHCP, the lease period of IP addresses needs to be planned based on how long user terminals stay online. On large- and medium-sized campus networks, user terminals in office areas stay online for a long time, so a long lease period needs to be planned for their IP addresses.

    If a fixed IP address needs to be allocated to a specific user terminal, exclude this IP address from the DHCP address pool during address pool planning to prevent it from being dynamically allocated.
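The exclusion note above can be sketched in Python: the gateway address and addresses reserved for fixed allocation are removed before the dynamic range is built. The subnet, gateway suffix, and reserved addresses are illustrative assumptions:

```python
import ipaddress

# Build the dynamic DHCP range for a user subnet, excluding the gateway
# and addresses reserved for fixed allocation (all values illustrative).
subnet = ipaddress.ip_network("10.10.1.0/24")
gateway = subnet.network_address + 254                         # gateway .254
reserved = {subnet.network_address + n for n in (10, 11, 12)}  # fixed IPs
pool = [h for h in subnet.hosts() if h != gateway and h not in reserved]
print(pool[0], pool[-1], len(pool))  # 10.10.1.1 10.10.1.253 250
```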

Routing Protocol Planning

In the virtualization solution for large- and medium-sized campus networks, routing protocols need to be deployed on both the underlay and overlay networks to implement different Layer 3 interconnection requirements, as shown in Figure 2-11. Table 2-9 describes the scenarios where routing protocols need to be deployed on the underlay and overlay networks and routing protocol planning.

Figure 2-11 Routing protocol planning in the virtualization solution for large- and medium-sized campus networks
Table 2-9 Routing protocol planning in different scenarios of the virtualization solution for large- and medium-sized campus networks

Routing Protocol Deployment Scenario

Route Planning in Different Scenarios

Description

1. Communication between the network management zone and device management subnet

  • Routing protocols are used for communication between iMaster NCE-Campus and network devices to implement configuration, management, and intelligent O&M on these devices.
  • If the network management zone is a simple network where software systems such as iMaster NCE-Campus are installed, you are advised to configure static routes on the gateway in the network management zone and the core switch.
  • If the network management zone is a data center, you need to flexibly configure routes for the gateway in the network management zone and the core switch based on the routing protocol used by the data center network.

2. Communication between core, aggregation, and access switches

  • Routing protocols are mainly used for Layer 3 communication on the underlay network as the basis for overlay network deployment.
  • Routing tables can be dynamically updated based on the network topology. Therefore, it is recommended that an IGP, such as OSPF, be planned. It is recommended that OSPF routes be automatically orchestrated through iMaster NCE-Campus, removing the need to manually configure routes.
  • Automatic OSPF route orchestration enables devices to import the network segments of BGP source interfaces (such as loopback interfaces) into the OSPF routing domain so that the BGP source interfaces can communicate with each other.

3. Communication on the VXLAN control plane

  • In most cases, BGP is deployed between border and edge nodes and between edge nodes. BGP EVPN is used to implement functions on the VXLAN control plane, including dynamic VXLAN tunnel establishment, ARP entry transmission, and routing information transmission.
  • It is recommended that a border node be configured as a route reflector (RR) to simplify BGP peer configuration.

4. Inter-VN communication

  • Routing protocols are used on a border node for communication between different user subnets of VNs.
  • Different user subnets of VNs can also communicate with each other through firewalls.
  • To implement such communication on a border node, configure the border node through iMaster NCE-Campus. After the configuration is complete, the border node uses BGP to import routes to user subnets between VRF instances of different VNs.

5. Communication between a VN and an external network

  • Routing protocols are used by devices on user subnets of a VN to access the Internet and WANs.
  • If two firewalls implement hot standby (HSB) using VRRP, with one as the VRRP master and the other as the VRRP backup, static routes are recommended between the firewalls and the border node. If VRRP is not deployed on the firewalls, dynamic routes need to be used between the firewalls and the border node so that service traffic automatically switches paths during an active/standby firewall switchover.
  • On the border node, you can create external network resources for the fabric through iMaster NCE-Campus and configure an interface and routing protocol for interconnection with the firewall.

    After the VN selects external network resources, the border node imports routes between the VRF instance of the external network resources and the VRF instance of the VN.

  • You can log in to the web UI or CLI of the firewall to configure the firewall.

6. Communication between a VN and the network management zone

  • Routing protocols are used for communication between user subnets in a VN and network service resources such as the DHCP server and network access control (NAC) server.
  • Static routes are recommended.
  • Network service resources of the fabric are created for the border node through iMaster NCE-Campus. During the creation, the interface for connecting the border node to the network management zone is configured, and a static route to the network service resources is automatically created.

    After a VN selects a network service resource, the border node imports routes between the VRF instance of the network service resource and the VRF instance of the VN.

  • You can log in to the web UI or CLI of a gateway in the network management zone to configure the gateway.