A firewall is a collection of software and hardware deployed between different networks or network security zones. Firewalls are used to protect a network area against attacks and intrusions from other network areas.
Cluster Switch System (CSS) and Intelligent Stack (iStack) are stacking technologies. CSS is for modular switches and iStack is for fixed switches.
Stacking is a horizontal virtualization technology that virtualizes multiple switches at one layer into one logical switch. In Huawei CloudFabric DCN Solution, CE series switches function as gateways. CSS or iStack can be used to virtualize two gateways into one device.
- High reliability
You can implement redundancy backup among multiple member switches in a stack.
- Powerful network expansibility
You can use stacking to enhance port quantity, bandwidth, and processing capability without changing the network topology.
- Simplified configuration and management
You can log in to a stack from any member switch to manage and configure all the member switches in the stack. In addition, complicated Layer 2 ring protection protocols (such as MSTP) or Layer 3 protection switching protocols (such as VRRP) are not required after switches set up a stack; therefore, the network configuration is much simpler.
- High reliability
When one device or link is faulty, traffic is automatically switched to other devices or links.
- Simplified network and configuration
M-LAG is a horizontal virtualization technology that virtualizes two dual-homed devices into one device. M-LAG prevents loops on a Layer 2 network and implements redundancy without configuring the spanning tree protocol, simplifying the networking and configuration.
- Independent upgrade
Two devices can be upgraded independently, preventing service interruption when one device is upgrading.
Super Virtual Fabric (SVF) is a vertical virtualization technology that virtualizes switches at different layers into one logical switch. SVF satisfies high-density access requirements of DCs and simplifies network topologies and management.
- Parent switch acts as an MPU and is the core of the SVF system. It controls and manages the entire system.
- Leaf switch is an extended device that acts as a remote LPU of the parent switch. Leaf switches are centrally managed by the parent switch.
- SVF system consisting of fixed switches: The parent switch and leaf switches are all fixed switches.
- SVF system consisting of modular and fixed switches: A modular switch is deployed as the parent switch and fixed switches are deployed as leaf switches.
- Lower network construction costs
Cost-effective switches are used as access switches, so network construction costs are reduced.
- Simplified configuration and management
SVF virtualizes multiple devices into one, reducing the number of nodes to be managed. Complicated ring protection protocols are not required; therefore, the network configuration and management are much simpler.
- Higher scalability and more flexible deployment
When more access ports are required on the network, you only need to add cost-effective fixed switches to the network. Moreover, these cost-effective switches are deployed near servers, making network deployment more flexible.
- Source Network Address Translation (SNAT): In Huawei CloudFabric DCN Solution, firewalls implement SNAT so that tenant network users can access the Internet. Tenant network users use private IP addresses and cannot access the Internet directly. SNAT translates their private source IP addresses into specific public IP addresses, allowing them to access the Internet. In addition, SNAT translates IP addresses and port numbers simultaneously, so multiple private IP addresses can be mapped to one public IP address, which is then shared by multiple users.
- Destination Network Address Translation (DNAT): In Huawei CloudFabric DCN Solution, firewalls implement DNAT so that Internet users can access tenant networks. DNAT maps the public destination IP address used by an Internet user to the private IP address of the destination tenant network user.
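As a rough illustration of the SNAT behavior described above, the following minimal Python sketch models port-based address translation, where many private source addresses share one public IP address and are distinguished by translated port numbers. All addresses, names, and the port-allocation scheme are illustrative assumptions, not details of the firewall implementation.

```python
class Snat:
    """Toy SNAT (NAPT) table: private (IP, port) pairs share one public IP."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 10000
        self.table = {}                 # (private_ip, private_port) -> public port

    def translate(self, src_ip, src_port):
        key = (src_ip, src_port)
        if key not in self.table:       # allocate a new public port per flow
            self.table[key] = self.next_port
            self.next_port += 1
        return self.public_ip, self.table[key]

snat = Snat("203.0.113.10")
print(snat.translate("10.0.0.5", 40000))   # ('203.0.113.10', 10000)
print(snat.translate("10.0.0.6", 40000))   # ('203.0.113.10', 10001)

# DNAT works in the opposite direction: map a public (EIP) destination
# address back to the tenant's private address.
dnat = {"203.0.113.20": "10.0.0.7"}
print(dnat["203.0.113.20"])                # 10.0.0.7
```

Because both the IP address and the port are translated, the two private hosts above can share one public address without their flows colliding.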
- Network virtualization edge (NVE)
NVEs are network entities that implement network virtualization. In Huawei CloudFabric DCN Solution, CE series switches and vSwitches can act as NVEs.
- VXLAN tunnel end points (VTEPs)
A VTEP is an end point of a VXLAN tunnel on an NVE device and is used to encapsulate and decapsulate VXLAN packets.
- VXLAN network identifier (VNI)
VNIs are similar to VLAN IDs and are used to identify VXLAN segments. A VNI represents a tenant, and VMs with different VNIs cannot communicate with each other at Layer 2. The 24-bit VNI field in a VXLAN packet header supports enough segments for a massive number of tenants.
- VXLAN tunnel
"Tunnel" is a logical concept. A VXLAN tunnel is set up between two VTEPs and is a virtual tunnel that transmits VXLAN packets.
- Supporting a large number of tenants
With 24-bit VNIs, VXLAN supports a maximum of about 16 million (2^24) VXLAN segments, so a data center can accommodate numerous tenants.
- Improving device performance
VXLAN reduces the number of MAC addresses that network devices need to learn and enhances network performance because only devices at the edge of the VXLAN network need to identify VM MAC addresses.
- Reducing network management difficulties
VXLAN extends Layer 2 networks using MAC-in-UDP encapsulation and decouples physical and virtual networks. Tenants are able to plan their own virtual networks, without being limited by the physical network IP addresses or broadcast domains. This greatly simplifies network management.
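The MAC-in-UDP encapsulation mentioned above carries an 8-byte VXLAN header (RFC 7348) inside the UDP payload: a flags byte with the I bit set, a 24-bit VNI, and reserved fields. The following sketch packs that header and shows the 2^24 segment count; it is a format illustration only, not switch code.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte 0x08 (I bit), 24-bit VNI."""
    assert 0 <= vni < 2**24            # VNI must fit in 24 bits
    # Word 1: flags in the top byte; word 2: VNI in the top 24 bits.
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(30)
print(hdr.hex())     # 0800000000001e00
print(2**24)         # 16777216 possible VXLAN segments
```

The encapsulating device prepends this header plus outer UDP/IP/Ethernet headers, which is why the underlay only ever sees the VTEPs' addresses, not the VMs' MAC addresses.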
- L2 VXLAN gateway
An L2 VXLAN gateway, also called a VXLAN bridge, forwards non-VXLAN traffic into the VXLAN network and implements L2 communication on the VXLAN network.
- L3 VXLAN gateway
An L3 VXLAN gateway, also called a VXLAN router or VXLAN IP gateway, implements communication between subnets on the VXLAN network.
Link Layer Discovery Protocol (LLDP) is a Layer 2 discovery protocol defined in the IEEE 802.1ab standard. LLDP collects local device information including the management IP address, device ID, and port ID and advertises the information to neighbors. Neighbors save the received information in their management information bases (MIBs). The network management system (NMS) can use data in MIBs to obtain Layer 2 network topology information quickly. In Huawei CloudFabric DCN Solution, you can enable LLDP on devices to allow the AC-DCN to obtain Layer 2 link information about the devices and their neighbors.
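LLDP advertisements are TLV (type-length-value) lists: each TLV begins with a 16-bit header holding a 7-bit type and a 9-bit length (IEEE 802.1AB). The following minimal parser sketch walks such a list; the sample frame is hypothetical and carries only a locally assigned Chassis ID TLV (type 1) followed by the End TLV.

```python
import struct

def parse_lldp_tlvs(payload: bytes):
    """Walk LLDP TLVs: 7-bit type, 9-bit length, then the value bytes."""
    tlvs, off = [], 0
    while off + 2 <= len(payload):
        hdr, = struct.unpack_from("!H", payload, off)
        tlv_type, tlv_len = hdr >> 9, hdr & 0x1FF
        off += 2
        if tlv_type == 0:                      # End of LLDPDU
            break
        tlvs.append((tlv_type, payload[off:off + tlv_len]))
        off += tlv_len
    return tlvs

# Hypothetical LLDPDU: Chassis ID TLV (type 1, subtype 7 = locally
# assigned, value "ab") followed by the mandatory End TLV.
frame = struct.pack("!H", (1 << 9) | 3) + b"\x07ab" + b"\x00\x00"
print(parse_lldp_tlvs(frame))   # [(1, b'\x07ab')]
```

A neighbor stores fields recovered this way (chassis ID, port ID, management address) in its MIB, which is what the NMS queries to reconstruct the Layer 2 topology.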
Link Layer Topology Discovery (LLTD) is a topology discovery protocol at the link layer. It enables automatic discovery of devices that are compatible with LLTD.
- If the AC-DCN needs to obtain the positions of Microsoft servers in a network topology, the AC-DCN delivers a Packet-out carrying LLTD information to the network device.
- The network device extracts the LLTD packet from the Packet-out, and broadcasts the LLTD packet on the network.
- When a Microsoft server receives the LLTD packet, it replies with another LLTD packet carrying its own host name, IP address, and MAC address to the network device.
- The network device encapsulates the LLTD packet into a Packet-in, and sends the Packet-in to the AC-DCN.
Cisco Discovery Protocol (CDP) is a proprietary Layer 2 network protocol developed by Cisco. This protocol is supported by most Cisco devices. By running CDP, Cisco devices can share information about OS versions, IP addresses, and hardware platforms with directly connected devices.
- North-south firewalls control mutual access between tenant network users and external networks.
- East-west firewalls control mutual access between different VPCs and different subnets in a VPC.
- Public virtual router (public vRouter):
- - The public vRouter is a dedicated VPN Routing and Forwarding (VRF) instance created on a gateway node to connect the gateway to public vFWs.
- - The public vRouter dynamically advertises EIP addresses allocated to VPCs to PEs, adds static routes for the EIP addresses, and sets the next hops of the static routes to the IP addresses of the public vFWs to be connected.
- Tenant virtual router (tenant vRouter) is a dedicated VRF on a gateway node created for a tenant. A tenant vRouter connects tenant vFWs and PEs and applies to the scenario where firewalls are deployed in bypass mode. The default route of a tenant vRouter is destined for the VRF connected to PEs.
- Public virtual firewall (public vFW): a special virtual system that exists on firewalls by default. After the virtualization function is enabled, the public vFW inherits the previous configurations on the firewalls. In Huawei CloudFabric DCN Solution, the public vFW serves as the summary routing domain of the vFWs of all VPCs. The default next hop of the vFWs of all VPCs is the public vFW, as shown in Figure 5-8.
- - The default next hop of the public vFW is the public vRouter.
- - When an EIP address has been allocated to a VPC, the public vFW sets up a static route to the vFW of the VPC. The destination IP address of the static route is the EIP address of the VPC, and the next hop of the static route is the IP address of a virtual interface on the vFW.
- Virtual system (VSYS): a logical device divided on a firewall. Each VSYS works independently. In Huawei CloudFabric DCN Solution, the tenant VSYS serves as the summary routing domain of the VSYSs of all VPCs of a tenant for accessing remote intranets outside the data center. The next hop of the VSYSs of all VPCs of a tenant is the tenant VSYS, as shown in Figure 5-8.
- - All VPCs of a tenant share one tenant VSYS.
- - The default next hop of the tenant VSYS is the tenant vRouter.
EVPN can function as the VXLAN control plane by using inclusive multicast routes (IMRs) carried in the EVPN NLRI. VTEP IP addresses are stored in the Originating Router's IP Address field of an IMR.
BGP-EVPN applies to VXLAN networks. In Huawei CloudFabric DCN Solution, BGP-EVPN is widely used in centralized and distributed VXLAN scenarios.
- Automatically establishes VXLAN tunnels
VXLAN standards do not define protocols for setting up tunnels. Manual configuration is inefficient and prone to errors. BGP-EVPN enables automatic information exchange for establishing VXLAN tunnels between devices.
- ARP broadcast suppression
You can use BGP-EVPN to advertise ARP routes to NVEs. After the configuration, when the gateway receives an ARP request, it first checks whether host information exists for the destination IP address. If so, the gateway replaces the broadcast MAC address in the ARP request with the host's unicast MAC address, converting the broadcast ARP packet into a unicast packet.
- The spine gateways dynamically learn tenants' ARP entries and generate host information (including IP addresses, MAC addresses, VTEP addresses, and VNIs) based on the entries.
- The leaf switches synchronize host information generated by the spine gateways through BGP-EVPN.
- When the leaf switches receive local ARP requests, they convert broadcast packets into unicast packets based on host information before forwarding packets.
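The broadcast-to-unicast conversion in the steps above can be sketched as follows. The host table and packet fields are illustrative simplifications, assuming the leaf already holds IP-to-MAC entries synchronized via BGP-EVPN.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

# Host entries assumed to have been synchronized via BGP-EVPN routes.
host_table = {"10.1.1.20": "00:11:22:33:44:55"}

def suppress(arp_request):
    """Convert a broadcast ARP request to unicast when the target is known."""
    target_mac = host_table.get(arp_request["target_ip"])
    if target_mac and arp_request["dst_mac"] == BROADCAST:
        # Known host: rewrite the destination MAC and forward as unicast.
        return {**arp_request, "dst_mac": target_mac}
    return arp_request        # unknown host: flood as before

req = {"target_ip": "10.1.1.20", "dst_mac": BROADCAST}
print(suppress(req)["dst_mac"])   # 00:11:22:33:44:55
```

Requests for hosts absent from the table are still flooded, so suppression never blocks legitimate address resolution; it only removes broadcasts that the control plane can already answer.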
- Collects statistics about network parameters, such as the number of SYN events, traffic volume, average delay, and maximum delay, in real time.
- Clearly displays the status of network devices and links.
- Collects statistics about, analyzes, and displays TCP exception events (such as TCP retransmission, TCP RST, abnormal TTL, abnormal TCP flag, and abnormal TCP window size) on the network.
- Uses the application map to clearly display the interaction relationship and quality of applications, and collects statistics on key parameters, such as the number of applications, number of hosts, average inter-application delay, and average intra-application delay.
- Analyzes interaction relationships, events, statistics, and network delay trends between hosts in an application, and displays interaction exceptions.
- Enables users to view flow event details, including the quintuple information, status (normal or abnormal), delay, traffic volume, all flow events (such as link establishment and disconnection events) of the current flow in a specified period, and topology paths used in specific events.
- Uses the big data analysis method to detect threats, and accurately identifies and defends against APT attacks, preventing APT attacks from causing loss of core information assets.
- Visualizes attack paths to show the attack process.
- Provides intelligent retrieval to implement quick and accurate backtracking survey, facilitating users' investigation and evidence collection.
- Supports security posture awareness to predict attack trends.
- Provides network-wide intelligent collaboration, works with the SecoManager to implement on-demand traffic diversion, and automatically delivers interception configurations based on analysis results.
On a VXLAN, microsegmentation provides grouping rules (for example, IP address or IP address segment) with finer granularity than subnets, and features simple deployment. Service isolation between servers can be implemented by grouping servers on the VXLAN into EPGs and deploying traffic control policies based on the EPGs.
Microsegmentation implements traffic control between servers by allocating the servers to EPGs and defining Group Based Policies (GBPs) between the EPGs.
As shown in the following figure, four servers are deployed on the same subnet of a VXLAN. The user requirements are as follows: Servers 1 and 3 can communicate with each other, servers 2 and 4 can communicate with each other, and communication between server 1 or 3 and server 2 or 4 is prohibited.
Microsegmentation can meet these requirements: allocate servers 1 and 3 to EPG 1 and servers 2 and 4 to EPG 2, and configure intra-EPG access and inter-EPG isolation.
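The grouping and policy logic of this example can be sketched in a few lines. The server names and group numbers follow the figure's example; the policy function is an illustrative simplification of a Group Based Policy check, not the switch's actual implementation.

```python
# EPG membership for the four servers in the example.
epg = {"server1": 1, "server3": 1, "server2": 2, "server4": 2}

def permitted(src, dst):
    """GBP with intra-EPG access and inter-EPG isolation configured:
    traffic is allowed only when both endpoints are in the same EPG."""
    return epg[src] == epg[dst]

print(permitted("server1", "server3"))   # True  (both in EPG 1)
print(permitted("server2", "server4"))   # True  (both in EPG 2)
print(permitted("server1", "server2"))   # False (inter-EPG isolation)
```

Because EPG membership can be keyed on IP addresses or address segments rather than subnets, the same four servers can stay in one subnet while still being isolated from each other.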
Generally, most data is transmitted in plaintext on LAN links, which brings security risks. For example, bank account information may be stolen or tampered with. MACsec can be deployed to protect transmitted Ethernet data frames and reduce the risks of information leakage and malicious attacks.
CE switches provide NQA, so dedicated probes do not need to be deployed, effectively reducing costs.
NQA association means that NQA notifies other modules (such as VRRP, static routing, and policy-based routing modules) of detection results and other modules process services based on the detection results.
As shown in the following figure, distributed VXLAN tunnels are created in DCs A and B through BGP-EVPN to enable VMs in the DCs to communicate with each other. Leaves 2 and 3 are the edge devices connected to the backbone network. VXLAN tunnels are configured on leaves 2 and 3 through BGP-EVPN so that the VXLAN packets received from one DC can be decapsulated, re-encapsulated, and sent to the other DC. In this manner, cross-DC VXLAN packet forwarding is available, and VMs in different DCs can communicate with each other.
When DCI is implemented using the three-segment VXLAN function:
- DCs can run different protocols.
- VXLAN packet encapsulation can be different among DCs. The three-segment VXLAN function is architecture-agnostic and allows interconnection between heterogeneous DCs.
- DCs do not need to orchestrate information with each other.
- Layer 3 interconnection is used, which reduces Layer 2 flooding and prevents broadcast storms in one DC from affecting other DCs.
- Performance requirements on DCI devices are high. DCI devices are required to maintain tenant MAC and IP address information.
- Multiple VXLAN tunnel segments need to be maintained, increasing O&M complexity.
VXLAN mapping can be implemented in local VNI mode or VNI mapping mode; the CloudFabric solution uses the VNI mapping mode.
As shown in the following figure, the VNI used by the VXLAN tunnel in DC A is 10, and the VNI used by the VXLAN tunnel in DC B is 20. The mapping between VNI 10 and VNI 30 needs to be created on leaf 2, and the mapping between VNI 20 and VNI 30 needs to be created on leaf 3. Then Layer 2 packet forwarding is available.
Use packet transmission from DC A to DC B as an example. After receiving a VXLAN packet from a device in DC A, leaf 2 decapsulates the packet, searches the VNI mapping table for VNI 30, uses VNI 30 to encapsulate the packet, and sends the packet to leaf 3. After receiving the packet, leaf 3 decapsulates it, searches the VNI mapping table for VNI 20, uses VNI 20 to encapsulate the packet, and forwards the packet within DC B.
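The decapsulate/remap/re-encapsulate steps above can be sketched as follows. The VNI values match the figure's example; the packet is reduced to a (VNI, payload) pair for illustration.

```python
# VNI mapping tables configured on the DCI leaves, per the example.
leaf2_map = {10: 30}    # DC A internal VNI -> inter-DC VNI
leaf3_map = {30: 20}    # inter-DC VNI -> DC B internal VNI

def remap(pkt, vni_map):
    """Decapsulate, look up the mapped VNI, and re-encapsulate."""
    vni, payload = pkt
    return vni_map[vni], payload

pkt = (10, b"frame")            # VXLAN packet inside DC A (VNI 10)
pkt = remap(pkt, leaf2_map)     # leaf 2 rewrites VNI 10 -> 30
pkt = remap(pkt, leaf3_map)     # leaf 3 rewrites VNI 30 -> 20
print(pkt[0])                   # 20: forwarded within DC B
```

Because each leaf only needs its own mapping table, the two DCs can keep allocating VNIs independently; only the shared inter-DC VNI (30 here) must be agreed on.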
Optimizing the queue buffer threshold to ensure, as far as possible, lossless transmission of packets in lossless queues
Assign an appropriate buffer threshold according to the chip's forwarding capability, ensuring lossless forwarding of packets in lossless queues before PFC is triggered.
Dynamically adjusting the Explicit Congestion Notification (ECN) threshold to balance the tradeoff between low latency and high throughput of lossless queues
Dynamically adjust the ECN threshold of lossless queues based on the incast concurrency and the proportions of elephant and mice flows, balancing the tradeoff between low latency and high throughput. Combined with the queue buffer optimization technology, this ensures that ECN marking is triggered to alleviate congestion before PFC is triggered.
Using fast ECN to shorten the congestion notification time
When a queue is congested, the ECN flag is added to packets as they leave the queue, rather than as they enter it. This shortens the congestion notification time.
Using fast Congestion Notification Packet (CNP) to immediately adjust the packet sending rate of the source end
When experiencing congestion, the forwarding device replaces the destination server to send a CNP to the source server. The source server then adjusts the packet sending rate, relieving congestion of the queue buffer on the forwarding device.
Identifying elephant and mice flows to ensure the low forwarding latency of mice flows in lossless queues
Identify elephant and mice flows in lossless queues and preferentially schedule packets of mice flows to reduce the forwarding latency of mice flows.
Using dynamic load balancing and selecting a least congested link to forward packets
In the multi-path scenario, measure the congestion status of each link and select a least congested link to forward packets.
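The interplay between the dynamic ECN threshold and fast ECN described above can be sketched as follows. The threshold formula is a toy integer heuristic invented for illustration (real switches tune per-chip values based on incast concurrency and flow mix); only the structure, lowering the marking threshold as elephant flows dominate and marking at dequeue, reflects the text.

```python
def ecn_threshold(buffer_cells, elephant_pct):
    """Toy dynamic threshold: mark earlier as elephant flows (in percent
    of queue traffic) dominate, leaving headroom before PFC triggers."""
    return buffer_cells * (80 - elephant_pct // 2) // 100

def mark_ecn(queue_depth, threshold):
    """Fast ECN: the mark decision is applied as packets leave the queue,
    so the congestion signal reflects the current depth, not a stale one."""
    return queue_depth > threshold

t = ecn_threshold(1000, 60)   # mostly elephant flows -> lower threshold
print(t)                      # 500
print(mark_ecn(620, t))       # True: mark before the buffer fills
```

Marking at this threshold slows the senders down while the queue still has headroom, so PFC, which pauses the whole priority, remains a last resort rather than the primary congestion control.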
During vSwitch installation, you can configure the vSwitch as a DHCP server to allocate IP addresses to VMs. A DHCP server can be deployed in distributed or integrated mode.
After a VM goes online, the OpenStack control node pre-allocates an IP address to the VM based on the network segment where the VM is located. The Agile Controller can obtain the mappings between the IP and MAC addresses of VMs from the OpenStack control node. After the DHCP server function is enabled on the vSwitch, the Agile Controller sends the mappings to the vSwitch. When the vSwitch receives a DHCP Discover packet from a VM, it searches the IP-MAC mappings delivered by the Agile Controller. If the vSwitch finds an IP address mapped to the MAC address of the VM, it sends a DHCP Offer packet containing the IP address to the VM. After the VM requests the offered IP address and the DHCP server acknowledges the request, the IP address is successfully allocated to the VM.
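The lookup at the heart of this exchange can be sketched as follows: the controller pre-delivers IP/MAC mappings, and the vSwitch answers each client from that table instead of managing a dynamic address pool. The MAC and IP values are illustrative.

```python
# IP/MAC mappings assumed to have been delivered by the controller
# (originating from the OpenStack control node's pre-allocation).
mappings = {"fa:16:3e:00:00:01": "192.168.1.10"}

def offer_for(client_mac):
    """Return the IP to put in the DHCP Offer, or None if the controller
    delivered no mapping for this client (no offer is sent)."""
    return mappings.get(client_mac)

print(offer_for("fa:16:3e:00:00:01"))   # 192.168.1.10
print(offer_for("fa:16:3e:00:00:99"))   # None
```

This design keeps address assignment authoritative at the control node: the vSwitch never invents an address, it only serves the binding it was given.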
The following figures show the schematic diagrams of the CE1800Vs used as the distributed DHCP servers.
The following figures show the schematic diagrams of the CE1800Vs used as the integrated DHCP servers.
The only difference between the integrated and distributed modes is the scope of address allocation: in integrated mode, the vSwitch allocates IP addresses to all VMs on the network, whereas in distributed mode, it allocates IP addresses only to VMs on the physical server where the vSwitch is located. When the DHCP server is deployed in distributed mode, the vSwitch notifies the Agile Controller of the deployment mode, and the Agile Controller sends the vSwitch only the mappings between IP and MAC addresses of VMs on that physical server. When the DHCP server is deployed in integrated mode, the Agile Controller sends the vSwitch the mappings between IP and MAC addresses of all VMs on the network.
To deploy the DHCP server in distributed mode, you need to install the vSwitch on a FusionSphere compute node. To deploy the DHCP server in integrated mode, you need to install the vSwitch on an independent server and then enable the DHCP server function. In an NFVI scenario, if you deploy a data center network in hybrid overlay mode, you are advised to use both distributed and integrated modes. On FusionSphere compute nodes, deploy CE1800V switches as distributed DHCP servers to provide services for VMs. Deploy two independent servers on the network, and install CE1800V switches to function as integrated DHCP servers to provide services for bare metal servers or other VMs.