The Agile Controller-DCN is a next-generation SDN controller for enterprise and carrier DC markets and is a core component of the CloudFabric solution. For the position of the Agile Controller-DCN in the CloudFabric solution, see the logical architecture diagram of the solution.
The Agile Controller-DCN uses standard network protocols to manage network resources and interconnects with computing resources through standardized southbound interfaces to implement collaboration between computing and network resources. It can independently carry out service presentation and orchestration.
The Agile Controller-DCN also supports seamless interconnection with mainstream cloud platforms in the industry through open, standardized northbound interfaces. Cloud platforms provision services, and the Agile Controller-DCN maps the services to logical models through interconnection interfaces and delivers the logical models to network devices.
Data is exchanged between Agile Controller-DCN and SecoManager to synchronize network service configurations and status between two control units.
The Agile Controller-DCN can centrally manage cloud data center networks and provide automatic mapping from applications to physical networks, resource pooling, and visualized O&M, helping customers build service-centric dynamic network service scheduling capabilities.
A tenant applies for DCN, storage, and computing resources. Any user, enterprise, or unit can be a tenant when applying for resources from the AC-DCN. A DCN administrator allocates different resources to different tenants and provides differentiated services to them, ensuring on-demand resource allocation.
A tenant can create different VPCs based on service types. In the network virtualization scenario of the Huawei CloudFabric DCN Solution, when a computing administrator selects a network for a VM and connects the VM to it, the AC-DCN detects that the VM goes online. The AC-DCN then delivers network configurations to the physical devices on which logic routers, logic switches, and logic ports are deployed to ensure that the VM can communicate with other VMs and external networks. This allows tenants to process and transmit specific service data using these resources.
In a cluster, only network control nodes can run BGP-EVPN.
Network control nodes are deployed in active/standby mode.
A network control node cannot be deployed on the same node as other modules.
Network control nodes do not support capacity expansion.
- The CE1800V proactively connects to the southbound floating IP address of the AC-DCN cluster.
- The AC-DCN receives the connection request from the CE1800V and records information about the CE1800V.
- The AC-DCN cluster internally confirms the active and standby nodes to connect to the CE1800V.
- The CE1800V successfully connects to the AC-DCN.
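The four-step connection flow above can be sketched as follows. This is an illustrative model only; the class and method names are hypothetical, not actual AC-DCN APIs.

```python
# Hypothetical sketch of the CE1800V connection flow described above.

class ACDCNCluster:
    """Minimal model of the AC-DCN cluster's southbound connection handling."""

    def __init__(self, floating_ip, nodes):
        self.floating_ip = floating_ip   # southbound floating IP of the cluster
        self.nodes = nodes               # cluster node names
        self.devices = {}                # device info recorded on connection

    def handle_connect(self, device_id, device_info):
        # Step 2: record information about the connecting CE1800V.
        self.devices[device_id] = device_info
        # Step 3: the cluster internally confirms the active and standby
        # nodes that will serve this device.
        active, standby = self.nodes[0], self.nodes[1]
        # Step 4: the connection succeeds.
        return {"status": "connected", "active": active, "standby": standby}

cluster = ACDCNCluster("10.0.0.100", ["node-1", "node-2"])
# Step 1: the CE1800V proactively connects to the cluster's floating IP.
result = cluster.handle_connect("ce1800v-01", {"mgmt_ip": "10.0.1.5"})
```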
- Automatic discovery: Huawei's hardware switches and firewalls support the Simple Network Management Protocol (SNMP), and their IP addresses are consecutive addresses in a specific range. You are advised to use this method to automatically add devices to the AC-DCN.
- Batch import: If Huawei's hardware switches and firewalls support SNMP but with discrete IP addresses, you are advised to use templates to add devices to the AC-DCN in a batch.
- Device registration: If the AC-DCN needs to manage CE1800V switches, register device information on the AC-DCN GUI, and start the CE1800V to send a connection request. Registered CE1800Vs will be discovered by the AC-DCN.
- Third-party device import: The AC-DCN does not manage third-party devices. However, the topology displayed on the AC-DCN includes third-party devices. You can use this function to import third-party devices. Currently, information about software switches, firewalls, and load balancers from a third party can be imported.
| Device Type | Device Model | Discovery Mode |
| --- | --- | --- |
| Load balancer | F5 load balancer | Third-party device import |
| Firewall | Checkpoint FW/Fortinet FW | Third-party device import |
| Software switch (vSwitch) | CE1800V | Device registration |
| Software switch (vSwitch) | EVS/OVS | Third-party device import |
| Server | ESXi host | Automatic discovery |
| Server | Bare metal server | Active reporting |
| Server | Physical machine | Manual creation |
- Centralized network overlay: Devices at both ends of VXLAN tunnels are hardware (the border leaf node is hardware as well). All VXLAN gateways are deployed on devices in one group. A VXLAN gateway acts as both the internal gateway and external gateway. (Here, external gateways are used to connect to external edge devices.)
- Distributed network overlay: Devices at both ends of VXLAN tunnels are hardware (the border leaf node is hardware as well). VXLAN gateways are deployed on devices in different groups. The gateways connected to servers and VMs are distributed L3 gateways, and those connected to external edge devices are egress L3 gateways.
- Distributed hybrid overlay: Devices at both ends of VXLAN tunnels can be hardware or software (the border leaf node is hardware). VXLAN gateways are deployed on devices in different groups. The gateways connected to servers and VMs are distributed L3 gateways, and those connected to external edge devices are egress L3 gateways.
- FWs/LBs connected to service leaf nodes in bypass mode: FWs and LBs connect to service leaf nodes in bypass mode.
- FWs/LBs connected to border leaf nodes in bypass mode: FWs and LBs connect to border leaf nodes in bypass mode. In this scenario, the border leaf and service leaf are deployed together.
- FWs directly connected to the external network: FWs directly connect the border leaf nodes to the external network.
The Agile Controller-DCN or SecoManager does not deliver configurations to third-party FWs. The configurations need to be delivered by a third-party system or delivered manually.
Logic switches connect different VMs and enable them to communicate with each other at Layer 2.
One network device can be virtualized into multiple logic switches for different tenants to use. That is, multiple tenants can share one network device. For each tenant, a logic switch functions as an independent and real switch, and has independent software and hardware resources and running space. Services on different logic switches do not affect each other. Logic switches provide L2 switching services among logic ports.
Logic routers are virtualized from network devices running virtualization software (for example, the Virtual System on CE series switches). Logic routers connect VMs on different network segments and enable them to communicate with each other.
One network device can be virtualized into multiple logic routers for different tenants to use. That is, multiple tenants can share one network device. For each tenant, a logic router functions as an independent and real router, and has independent software and hardware resources and running space. Services on different logic routers do not affect each other.
An EndPort corresponds to a server-side VM that accesses the VPC or a device (such as a third-party load balancer) connected to the Agile Controller-DCN in a VM-like manner.
One physical port on a network device can be virtualized into multiple logical ports for different tenants to use. In this manner, multiple tenants can access the network through one physical port. For each tenant, a logical port functions as an independent and real port.
Logical ports enable access to VMs, BMs, and L4-L7 devices.
An external network refers to the network outside a DC, for example, the Internet or the existing private network of an enterprise. A data center network must be able to communicate with external networks.
An external gateway can be considered as a gateway for services in a DC to interwork with external networks. In the Agile Controller-DCN, external gateways are an important logical model and connect to logical routers to implement interconnection and interworking between internal networks of DCs and external networks.
External gateways in the Agile Controller-DCN can be classified based on usage scenarios:
- Public external gateways: used for VMs to access the Internet
- Private external gateways: used for VMs to access public service networks
External gateways in the Agile Controller-DCN can be classified based on connection modes:
- L2 external gateways: In NFVI scenarios, TOR switches connect to PEs through Layer 2 ports, and physical ports need to be configured on the Agile Controller-DCN. The cloud platform delivers information about the VLANs connecting to PEs.
- L3 external gateways: If a gateway connects to a PE through a Layer 3 port, an L3 external gateway needs to be deployed to enable network communication between the gateway and PE.
- None-type external gateways: In a cloud-network integration scenario with multiple fabrics, a none-type external gateway is created on the Agile Controller-DCN, with no connection information configured and only the fabrics and gateway group specified. After the VPC of the cloud platform is associated with the none-type external gateway, the VPC can use the fabrics and VAS resources specified by the external gateway.
The Agile Controller-DCN connects to the out-of-band management network ports on network devices through an independently deployed out-of-band management switch, and manages and controls the network devices through an independent out-of-band network.
No independent management switch or network is configured. The Agile Controller-DCN directly connects to the service network through a service switch, and manages and controls network devices through the underlay of the service network.
A tenant can define virtual networks consistent with traditional networks in a VPC. Resources in the VPC belong only to the tenant. A tenant can access only its own VPC. A public VPC can be accessed by all tenants.
Three-level topology visibility means that the application, logical, and physical network topologies are mutually visible. The Agile Controller-DCN abstracts the physical network into three views: the physical, logical, and application networks. The network administrator monitors the three networks at the same time and can see the mapping among them, so that each topology layer is clear.
Three-level topology visibility consists of topology visibility between application and logical networks and topology visibility between physical and logical networks.
Logical network information can be mapped and displayed in the physical network topology.
Logical network resources used by an application network and physical network resources used by a logical network can be displayed in the application network topology in top-down mapping mode.
Related tenant and application information of a physical resource, such as device, link, and port, can be displayed. (The bottom-up mapping allows users to identify tenants and applications affected by physical network changes in advance.)
When network resources on an underlay physical network change, such as device restart or link disconnection, the corresponding logical and application networks can be updated.
When gateways are connected to firewalls, VLANs and IP addresses need to be configured for the interconnected interfaces and assigned to resource pools. Then the Agile Controller-DCN can automatically allocate the VLANs and IP addresses, simplifying network deployment.
When planning a network, users need to configure global resources, such as bridge domains, global VNIs, global VLANs, public IP address, interconnection IP addresses, sub-interfaces, and loopback interface, on the Agile Controller-DCN. After receiving a service provisioning task, the Agile Controller-DCN can select resources as required and deliver them to tenants.
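The pool-based allocation described above can be sketched as follows. This is a hedged illustration of the idea only: the administrator pre-configures pools (here VLANs and interconnection IPs), and the controller draws from them on demand during service provisioning. The class and names are hypothetical, not AC-DCN internals.

```python
# Illustrative resource-pool sketch: pre-planned VLAN and IP ranges are
# consumed on demand when a service provisioning task arrives.
import ipaddress

class ResourcePool:
    def __init__(self, vlan_range, ip_subnet):
        # Pools configured by the administrator during network planning.
        self.free_vlans = list(range(*vlan_range))
        self.free_ips = list(ipaddress.ip_network(ip_subnet).hosts())

    def allocate(self):
        """Pick the next free VLAN and interconnection IP for a tenant service."""
        if not self.free_vlans or not self.free_ips:
            raise RuntimeError("resource pool exhausted")
        return self.free_vlans.pop(0), self.free_ips.pop(0)

pool = ResourcePool(vlan_range=(100, 200), ip_subnet="172.16.0.0/29")
vlan, ip = pool.allocate()
```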
An End Point Group (EPG) is a set of devices that have the same attributes. It can be configured during service function chain creation to determine the start and end points of the chain, or used for microsegmentation.
In the cloud-network integration scenario, services are delivered from the cloud platform to the Agile Controller-DCN. The Agile Controller-DCN orchestrates some of the services again, including displaying delivered information, recreating services, and editing delivered services, to provide more complete service capabilities for users.
If data inconsistency occurs between the Agile Controller-DCN and the forwarder, VMM, or cloud platform, services will fail to run properly. In this case, the data consistency verification function of the Agile Controller-DCN can be used to locate and correct inconsistent data, helping quickly rectify service faults.
To ensure cluster reliability and reduce the impact of major disasters on services, two Agile Controller-DCN clusters in different areas can work in active/standby mode. When the active cluster fails, the standby cluster immediately takes over services.
The active Agile Controller-DCN cluster provides services for external systems. The standby cluster runs properly but does not process services.
The two Agile Controller-DCN clusters exchange heartbeat packets to synchronize status and data. When detecting the failure of the active cluster through heartbeat detection, the standby cluster becomes active and takes over services.
The arbitration node monitors the status of both active and standby clusters. When the heartbeat between the active and standby clusters is interrupted, the clusters query the arbitration result from the arbitration node and change their status to active or standby according to the result, preventing an active-active situation.
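The arbitration behavior above can be sketched as follows. This is an illustrative model under assumed names (nothing here is an actual AC-DCN interface): when the inter-cluster heartbeat is lost, each cluster queries the arbitration node and adopts the role the arbiter's verdict implies, which prevents an active-active split.

```python
# Hypothetical sketch of active/standby arbitration on heartbeat loss.

class ArbitrationNode:
    """Monitors both clusters and names exactly one of them the winner."""
    def __init__(self, reachable_clusters):
        self.reachable = reachable_clusters

    def query_winner(self):
        # A real arbiter uses health checks; here, the first cluster it
        # can still reach wins.
        return self.reachable[0]

def decide_role(cluster_name, heartbeat_ok, current_role, arbitration_node):
    if heartbeat_ok:
        return current_role                  # normal operation: keep role
    # Heartbeat lost: defer to the arbitration node's verdict.
    winner = arbitration_node.query_winner()
    return "active" if winner == cluster_name else "standby"

arbiter = ArbitrationNode(["cluster-A"])     # assume cluster-B is unreachable
role_a = decide_role("cluster-A", heartbeat_ok=False,
                     current_role="standby", arbitration_node=arbiter)
role_b = decide_role("cluster-B", heartbeat_ok=False,
                     current_role="active", arbitration_node=arbiter)
# Exactly one cluster ends up active.
```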
The core technology of Service Function Chain (SFC) is the Network Service Header (NSH). Depending on the overlay network, NSH packets are encapsulated as IP over NSH over ETH over VXLAN or as IP over NSH over VLAN and transmitted over the bearer network. During packet forwarding, NSH is unaware of topology changes on the bearer network: a forwarding node only needs to know the location of the next service function (SF). Therefore, when the network topology changes, the service chain can be quickly adapted.
After packets enter an SFC domain, NSH encapsulation is required. The following figure shows the encapsulation formats of VXLAN and VLAN packets.
The following figure shows the format of an NSH packet.
The following table describes fields in an NSH packet.
| Field | Description |
| --- | --- |
| Ver | NSH version number. Currently, only version 0 is supported. |
| C | Indicates that critical metadata exists. When the MD type is 1, the value of this field is 0. |
| Reserved | Set to 0 when an NSH packet is sent and ignored when an NSH packet is received. |
| Length | Total length of the NSH header, in units of 4 bytes. If the MD type is 1, the length is fixed at 6, indicating a 24-byte (6 x 4) header. If the MD type is 2, the length is 2 or greater. |
| MD Type | Metadata type. Currently, only the MD type of 1 is supported. |
| Next Protocol | Type of the packet before NSH encapsulation. |
| SPI | ID of a service function path (SFP). |
| SI | Index of the SF through which traffic is passing. |
| Metadata | Basic element used for exchanging context information. The field length can be fixed or variable depending on the MD type. Currently, only the fixed length is supported. The device does not support metadata editing; the value of this field is 0. |
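The field layout described above can be made concrete with a minimal pack/parse sketch. This follows the field widths implied by the descriptions (a 4-byte base header, a 4-byte service path header with a 24-bit SPI and 8-bit SI, and four fixed 4-byte metadata words for MD type 1, giving the 24-byte total); it is an illustration, not a full RFC-conformant NSH implementation.

```python
# Minimal sketch: pack and parse the NSH base and service path headers
# for MD type 1 (24-byte header, metadata fixed to 0 as described above).
import struct

def pack_nsh(spi, si, length=6, md_type=1, next_proto=3):
    # Base header word: Ver(2)=0 | O(1)=0 | C(1)=0 | Reserved(6)=0 |
    # Length(6) | MD Type(8) | Next Protocol(8).
    word0 = (length & 0x3F) << 16 | (md_type & 0xFF) << 8 | (next_proto & 0xFF)
    # Service path header word: SPI(24) | SI(8).
    word1 = (spi & 0xFFFFFF) << 8 | (si & 0xFF)
    # MD type 1 carries four fixed 4-byte metadata words, set to 0 here
    # (the device does not edit metadata).
    return struct.pack("!IIIIII", word0, word1, 0, 0, 0, 0)

def parse_nsh(packet):
    word0, word1 = struct.unpack("!II", packet[:8])
    return {
        "length": (word0 >> 16) & 0x3F,   # 6 -> 6 x 4 = 24 bytes total
        "md_type": (word0 >> 8) & 0xFF,
        "spi": word1 >> 8,
        "si": word1 & 0xFF,
    }

hdr = pack_nsh(spi=42, si=255)
fields = parse_nsh(hdr)
```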
Service Function Chain (SFC) logically connects services on network devices, forming an ordered service set. It ensures that specified service flows obtain VASs in the specified sequence.
A simple example is used to help understand SFC. The following figure shows that SFC can be used to define the traffic from the Internet to the web server in a data center. The traffic must pass through the firewall, IDS, and load balancer before reaching the web server.
The following figure shows the forwarding mode of an SFC.
The following table describes related concepts.
| Concept | Description |
| --- | --- |
| SFC domain | A domain where service functions (SFs) are deployed. |
| Service classifier (SC) | An SC is located at the ingress of an SFC domain and classifies incoming traffic to the SFC domain. The classification granularity is determined by the SC capability and SFC policy. For example, in coarse-grained scenarios, all packets on a port match an SFC rule and are forwarded along SFP A; in fine-grained scenarios, only packets meeting 5-tuple requirements match an SFC rule and are forwarded along SFP B. |
| Service function (SF) | An SF list includes but is not limited to firewalls, load balancers (LBs), application accelerators, lawful interception (LI) devices, and network address translation (NAT) devices. Multiple SFs may exist in an SFC domain. Depending on network service header (NSH) encapsulation awareness, SFs are classified into NSH-aware SFs and NSH-unaware SFs. NSH-aware SFs can identify and process received NSH packets, while NSH-unaware SFs cannot identify NSH packets and discard them. |
| Service function forwarder (SFF) | An SFF forwards received packets to SFs associated with it based on NSH encapsulation information. SFs process the packets and return them to the associated SFF. The SFF then determines whether to send the packets back to the network. |
| SFC proxy | An SFC proxy is located between an SFF and NSH-unaware SFs associated with the SFF. The SFC proxy receives packets from the SFF on behalf of SFs. After deleting NSH encapsulation information, the SFC proxy sends packets to NSH-unaware SFs through the local logical component. When receiving packets from NSH-unaware SFs, the SFC proxy adds NSH encapsulation information to the packets and sends the packets to the SFF. For the SFF, the SFC proxy is an NSH-aware SF. |
| Service function path (SFP) | An SFP is calculated based on the user configuration, precisely defining the location of each SF. |
SFC is typically used on a VXLAN network. The following describes how SFC works when NSH-aware SFs connect to the distributed VXLAN gateway.
In distributed VXLAN gateway networking, SFs connect to the network in routing mode. Packets pass through two firewalls (SF1 and SF2) in sequence and are forwarded by the egress gateway. The VXLAN gateway connected to a tenant server is used as the SC, and VXLAN gateways connected to SFs are used as SFFs. An SFF can forward NSH packets to the next-hop SF or SFF. If SF1 and SF2 are NSH-unaware SFs, the SFFs need to provide the SFC proxy function. North-south traffic marked in solid lines is used as an example, and the SC, SFF1, and SFF2 are leaf devices, as shown in the following figure.
The following figure shows the abstracted SFP. The following figure shows the formats of packets on outbound interfaces during traffic forwarding.
Traffic reaching an SC is classified based on 5-tuple information and then redirected to the SFC. The SC queries the NSH forwarding table based on the SPI or SI in the NSH. The next hop is the IP address of a VBDIF interface on SFF1, and the outbound interface is on a VXLAN tunnel. The SC removes the ETH header in a packet and encapsulates the packet with NSH and VXLAN headers. The VNI in the VXLAN header is the same as the VNI of the tenant VPN instance. Based on the DIP in the VXLAN header, the SC can obtain the ARP outbound interface.
SFF1 receives an IP over NSH over ETH over VXLAN packet, removes the VXLAN and ETH headers, and queries the NSH forwarding table based on the SPI or SI. The next hop is the interface IP address of SF1. SFF1 constructs a new ETH header based on the ARP information queried based on this IP address.
Upon receiving the packet, SF1 removes outer encapsulation, analyzes the packet, and decreases the SI by 1. SF1 then encapsulates the packet with NSH and ETH headers, and forwards the packet to SFF1.
Upon receiving the encapsulated packet, SFF1 removes the ETH header of the packet and queries the NSH forwarding table based on the SPI or SI in the NSH. The next hop is the interface IP address of SFF2. SFF1 then encapsulates the packet with NSH, ETH, and VXLAN headers.
Upon receiving the encapsulated packet, SFF2 removes the VXLAN and ETH headers of the packet, and queries the NSH forwarding table based on the SPI or SI in the NSH. The next hop is the interface IP address of SF2. SFF2 constructs a new ETH header based on the ARP information queried based on this IP address.
Upon receiving the packet, SF2 removes outer encapsulation, analyzes the packet, and decreases the SI by 1. SF2 then encapsulates the packet with NSH and ETH headers, and forwards the packet to SFF2.
Upon receiving the encapsulated packet, SFF2 determines whether the SI is the same as the SI of the last hop. If so, SFF2 removes the NSH, and normally processes and forwards the packet out of the SFC domain. If not, SFF2 queries the NSH forwarding table to continue forwarding the packets in the SFC domain.
The preceding packet forwarding processes apply when SFs are NSH-aware. If SFs are NSH-unaware, the SI is processed in a different way. NSH-unaware SFs do not process NSH packets. Therefore, the SFC proxy is responsible for decreasing the SI by 1.
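The SI-driven forwarding walkthrough above can be sketched as a small loop: each hop looks up the next SF by (SPI, SI) in its NSH forwarding table, and each SF (or the SFC proxy, for NSH-unaware SFs) decreases the SI by 1 until the last-hop SI is reached and the packet exits the SFC domain. This is an abstraction for illustration; the table contents and SI values are assumed, not taken from a real deployment.

```python
# Illustrative SI-decrement sketch of SFC forwarding.

def forward_through_chain(spi, si, forwarding_table, last_hop_si):
    """Return the ordered list of SFs the packet traverses before exiting."""
    path = []
    while si != last_hop_si:
        # The SFF queries its NSH forwarding table by SPI and SI to find
        # the next-hop SF.
        sf = forwarding_table[(spi, si)]
        path.append(sf)
        # The SF (or the SFC proxy for an NSH-unaware SF) decreases SI by 1.
        si -= 1
    # SI matches the last hop: the NSH is removed and the packet leaves
    # the SFC domain.
    return path

# SFP with two firewalls, as in the example: SI 255 -> SF1, SI 254 -> SF2.
table = {(10, 255): "SF1", (10, 254): "SF2"}
traversed = forward_through_chain(spi=10, si=255,
                                  forwarding_table=table, last_hop_si=253)
```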
The SecoManager is a security controller that manages Huawei firewalls in a unified manner.
Users can drag, add, delete, and batch process logical network resources on the SecoManager based on the service model, simplifying deployment requirements. The SecoManager carries out unified visualized management of physical and virtual resources, and clearly displays network resources, topologies, and paths, making cloud-based networks visible and controllable.
In the CloudFabric solution, data is exchanged between the Agile Controller-DCN and SecoManager to support VAS resource pool creation and maintenance and VAS configuration. In addition, the SecoManager connects to the CIS and leverages the security analysis and decision-making of the CIS. The SecoManager delivers security analysis results and decisions to firewalls to implement end-to-end association between threat detection, security analysis, and policy delivery.
A physical device (for example, a firewall) can be partitioned into multiple independent virtual systems. Every virtual system functions as a real device and has its own interfaces, address sets, users/user groups, routing entries, and policies. It can be configured and managed by the virtual system administrator.
In conventional firewall security policy configuration, administrators must be clear about the firewall position and traffic direction and configure security policies on firewalls through which the traffic passes. This manual analysis and configuration method is time-consuming and prone to errors. Huawei provides security service orchestration based on protected network segments. By detecting relationships between IP addresses and firewalls through protected network segments, the SecoManager automatically delivers security policies to appropriate firewalls.
As the basic model of security service orchestration, protected network segments specify the IP network segments that firewalls need to protect.
Protected network segments can be configured manually, or the SecoManager can collaborate with the Agile Controller-DCN to automatically learn protected network segments.
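The placement logic described above can be sketched as follows: a policy is delivered only to firewalls whose protected network segments cover the policy's source or destination IP address. This is a simplified illustration; the segment data and firewall names are hypothetical, and a real SecoManager deployment would also consider traffic direction and topology.

```python
# Illustrative sketch: select target firewalls for a security policy
# based on their protected network segments.
import ipaddress

def select_firewalls(protected_segments, src_ip, dst_ip):
    """protected_segments maps a firewall name to a list of protected CIDRs."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    targets = []
    for fw, segments in protected_segments.items():
        nets = [ipaddress.ip_network(s) for s in segments]
        # The firewall receives the policy if it protects either endpoint.
        if any(src in n for n in nets) or any(dst in n for n in nets):
            targets.append(fw)
    return targets

segments = {
    "fw-dmz": ["10.1.0.0/16"],
    "fw-core": ["10.2.0.0/16", "10.3.0.0/16"],
}
fws = select_firewalls(segments, src_ip="10.1.2.3", dst_ip="10.2.4.5")
```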
Before a new policy is deployed, its adaptability to existing applications must be evaluated. The SecoManager works with the CIS to provide mutual access relationships between application components and identify blocked mutual access relationships in a timely manner, ensuring timely and accurate policy deployment and validation.
Based on the collaboration between the SecoManager and CIS, administrators can know the usage of firewall policies, for example, which policies have not been used for a long period and which policies are used most frequently. Then administrators can optimize policies (for example, delete unnecessary policies) to ensure service security and maximize the firewall performance and efficiency.