Egress Network Design
Security Zone Design
Security Zone Overview
A security zone is a collection of networks connected through one or more interfaces. Users on the networks in a security zone share the same security attributes. Most security policies are implemented based on security zones. Each security zone identifies a network, and the firewall connects these networks. Firewalls use security zones to segment networks and to identify where packets originate and where they are destined. When packets travel between security zones, a security check is triggered and the corresponding security policies are enforced. Security zones are isolated from each other by default.
Generally, there are three types of security zones: trusted, DMZ, and untrusted.
- Trusted zone: refers to the network of internal users.
- DMZ: demilitarized zone, which refers to the network of internal servers.
- Untrusted zone: refers to untrusted networks, such as the Internet.
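The zone model above can be sketched in a few lines of Python: each interface is bound to one zone, and interzone traffic is dropped unless a policy explicitly permits it. The interface names and the allowed zone pairs below are illustrative, not taken from any real configuration.

```python
# Minimal sketch of firewall security zones: each interface belongs to one
# zone, and interzone traffic is dropped unless a policy explicitly allows it.
# Interface and zone names are illustrative.

ZONE_OF_INTERFACE = {
    "GE0/0/1": "trust",      # campus intranet users
    "GE0/0/2": "dmz",        # data center servers
    "GE0/0/3": "untrust",    # Internet uplink
}

# Interzone policies created by the administrator; an empty set would mean
# full default isolation between all zones.
ALLOWED = {("trust", "untrust"), ("trust", "dmz")}

def is_permitted(in_if: str, out_if: str) -> bool:
    src, dst = ZONE_OF_INTERFACE[in_if], ZONE_OF_INTERFACE[out_if]
    if src == dst:
        return True               # intra-zone traffic is not checked here
    return (src, dst) in ALLOWED  # interzone traffic needs an explicit policy

print(is_permitted("GE0/0/1", "GE0/0/3"))  # trust -> untrust: True
print(is_permitted("GE0/0/3", "GE0/0/1"))  # untrust -> trust: False (isolated)
```

The asymmetry in the last two calls reflects the default-isolation behaviour: a policy permitting trust-to-untrust traffic does not implicitly allow the reverse direction.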
Security Zone Planning
A campus network itself is considered secure but faces security threats from outside. Therefore, allocate the Internet to the untrusted zone and the campus network to the trusted zone. Deploy security devices at the campus network egress to isolate the intranet from the Internet and defend against external threats. Allocate the data center to the DMZ, and deploy firewalls in the DMZ to isolate traffic between the campus intranet and the servers in the data center.
In the virtualized campus network solution, when user gateways are located inside the fabric, each egress of the fabric's external network resources corresponds to a Layer 3 logical interface on the firewall. During egress planning for external network resources, VNs with the same security policy are already grouped onto different logical egresses. Therefore, in this solution, security zones can be divided based on the interfaces of external network resources, with each logical interface assigned to a security zone, as shown in Figure 2-49. If user gateways are located outside the fabric, bind the gateways to security zones based on the gateways' security policies.
Hot Standby Design
When firewalls function as egress devices, you are advised to deploy hot standby (HSB) to improve firewall reliability. As illustrated in Figure 2-50, the firewalls act as egress devices of the campus network and are directly connected to the stacked core switch. The two firewalls are configured to work in HSB mode, and the member links in their interconnected Eth-Trunk are in active/standby mode. When the active firewall is faulty, the standby firewall takes over services and forwards service packets.
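The takeover behaviour described above can be illustrated with a toy failover model. The heartbeat-loss threshold and device names below are invented for illustration and do not reflect product defaults.

```python
# Toy model of hot standby (HSB) failover: the standby firewall takes over
# when heartbeats from the active firewall stop arriving. The threshold is
# an assumed value, not a product default.

class Firewall:
    def __init__(self, name: str, role: str):
        self.name = name
        self.role = role             # "active" or "standby"

HEARTBEAT_LOSS_THRESHOLD = 3         # assumed value for illustration

missed_heartbeats = 0

def on_heartbeat_timeout(active: Firewall, standby: Firewall) -> Firewall:
    """Return the firewall that currently forwards service traffic."""
    global missed_heartbeats
    missed_heartbeats += 1
    if missed_heartbeats >= HEARTBEAT_LOSS_THRESHOLD:
        active.role, standby.role = "standby", "active"   # takeover
    return active if active.role == "active" else standby

fw_a = Firewall("FW-A", "active")
fw_b = Firewall("FW-B", "standby")
for _ in range(3):                   # FW-A misses three heartbeats in a row
    forwarder = on_heartbeat_timeout(fw_a, fw_b)
print(forwarder.name)                # FW-B has taken over service traffic
```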
Egress Route Design
Egress routes of the campus network are used for north-south communication between the campus intranet and external networks. When a firewall is used as the egress device, you need to consider the routes from the firewall to external networks and those between the firewall and the core switch.
Routes from the Firewall to External Networks: Intelligent Traffic Steering
If the campus network connects to only one Internet Service Provider (ISP) network, you do not need to perform refined control on the routes to the external network. In this case, you can configure a default route on the firewall and set the next hop of the route to the PE on the ISP network.
If the campus network connects to multiple ISP networks, users can access Internet resources through different ISP networks. To properly utilize egress links and ensure egress access quality, you are advised to configure the intelligent traffic steering function on the firewall. In this scenario, it is recommended that you deploy the ISP-based traffic steering function. This function routes traffic destined for a specific ISP network out through the corresponding outbound interface, ensuring that traffic is forwarded on the shortest path.
As shown in Figure 2-51, each firewall has two ISP links to the Internet. If a campus network user accesses Server 1 on the ISP 2 network and the firewall has equal-cost multi-path routing (ECMP) routes, the firewall can forward the access traffic to Server 1 over two different paths. Path 1 crosses between ISP networks and is clearly suboptimal; path 2 is the preferred path. After ISP-based traffic steering is configured, when an intranet user accesses Server 1, the firewall selects the outbound interface based on the ISP network where the destination address resides, so that the access traffic reaches Server 1 through the shortest path, that is, path 2 in Figure 2-51.
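The selection logic behind ISP-based steering can be sketched as a lookup of the destination address against each ISP's prefixes, using Python's standard `ipaddress` module. The prefixes and link names here are illustrative placeholders, not real ISP address sets.

```python
# Sketch of ISP-based traffic steering: the outbound interface is chosen by
# matching the destination address against each ISP's address prefixes,
# preferring the longest match. Prefixes and link names are illustrative.
import ipaddress

ISP_PREFIXES = {
    "isp1_link": [ipaddress.ip_network("198.51.100.0/24")],
    "isp2_link": [ipaddress.ip_network("203.0.113.0/24")],
}

def select_outbound_interface(dst: str, default: str = "isp1_link") -> str:
    addr = ipaddress.ip_address(dst)
    best, best_len = default, -1
    for link, prefixes in ISP_PREFIXES.items():
        for p in prefixes:
            if addr in p and p.prefixlen > best_len:
                best, best_len = link, p.prefixlen
    return best

# A server on the ISP 2 network is reached directly over the ISP 2 link,
# avoiding the detour across ISP networks.
print(select_outbound_interface("203.0.113.10"))  # isp2_link
print(select_outbound_interface("198.51.100.5"))  # isp1_link
```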
Routes Between the Firewall and the Core Switch
North-south routes are present between the firewall and the core switch, including routes from the campus intranet to external networks on the core switch as well as return routes from external networks to the campus intranet on the firewall. In the virtualization solution for large- and medium-sized campus networks, under the external network resource model designed for the fabric, the routing protocol used for Layer 3 connectivity between the firewall and the core switch (border node) can be static routing, OSPF, or BGP. Generally, two firewalls are deployed in HSB mode to ensure reliability. When selecting a routing protocol, take into consideration how to switch the service traffic path in an active/standby switchover scenario.
- Static routing
If two firewalls implement HSB by operating as a VRRP master and a VRRP backup, static routing is recommended. As illustrated in Figure 2-52, a default static route is configured on the core switch, with the next hop set to the virtual IP address of the VRRP group. The firewall in the master state responds to ARP requests from the core switch for the virtual IP address, so service traffic from the core switch is diverted to the master firewall for processing.
In the event of a failure on the master firewall, the backup firewall in the VRRP group becomes the new master and broadcasts a gratuitous ARP packet that carries the virtual IP address of the VRRP group and the MAC address of the corresponding interface (virtual MAC address is carried if the virtual MAC address function is enabled on the interface). After receiving the gratuitous ARP packet, the core switch updates its ARP table. Thus, the service traffic path is switched to the backup firewall.
- Dynamic routing
If VRRP is not deployed on firewalls, dynamic routing can be used to implement automatic switching of the service traffic path. In this case, you need to run the hrp standby-device command on the standby firewall to set it to the standby state. As shown in Figure 2-53, OSPF is used as an example. When both the active and standby firewalls work properly, the active firewall advertises routes based on the OSPF configuration, and the cost of the OSPF routes advertised by the standby firewall is adjusted to 65500 (default value, which can be changed). In such a scenario, the core switch selects a path with a smaller cost to forward traffic, and all service traffic is diverted to the active firewall for forwarding.
If the active firewall fails, the standby firewall switches to the active state. In addition, the VRRP Group Management Protocol (VGMP) adjusts the cost of the OSPF routes advertised by the original active firewall to 65500 and that of the routes advertised by the original standby firewall (now active) to 1. After route convergence is complete, the service traffic path is switched to the new active firewall.
For details about the deployment when using different routing protocols between the firewall and the core switch, see the external network design in Fabric Network Design.
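The dynamic-routing switchover above comes down to the core switch preferring the next hop that advertises the default route with the lower OSPF cost. This can be sketched as follows; the 65500/1 cost values follow the behaviour described above, while the firewall names are illustrative.

```python
# Sketch of cost-based path selection during an HSB switchover: the core
# switch prefers the firewall advertising the default route with the lower
# OSPF cost. Firewall names are illustrative; costs follow the text above.

def preferred_next_hop(advertisements: dict) -> str:
    """advertisements maps firewall name -> advertised OSPF route cost."""
    return min(advertisements, key=advertisements.get)

# Normal operation: the active firewall advertises the low-cost route.
costs = {"FW-active": 1, "FW-standby": 65500}
print(preferred_next_hop(costs))        # FW-active

# After failover, VGMP swaps the advertised costs; once routes converge,
# the core switch steers service traffic to the new active firewall.
costs = {"FW-active": 65500, "FW-standby": 1}
print(preferred_next_hop(costs))        # FW-standby
```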
Security Policy Design
After security zones are divided on the firewall based on source interfaces, the security zones are isolated by default. To allow communication between two security zones, for example, to enable users on the campus intranet to access the Internet, Layer 3 routes need to be configured on the firewall. In addition, you need to create security policies on the firewall to permit traffic between the security zones. Security policies also provide advanced protection functions that analyze security threats in each zone and verify that access sources are secure and trustworthy.
As shown in Figure 2-54, with security policies configured, VNs on the campus network can communicate with each other, and the external network can access servers in the DMZ. In addition, different security protection policies can be implemented for traffic in different security zones.
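A security policy rule pairs a source zone and destination zone with an action and a set of protection profiles, in the spirit of the recommendations in Table 2-25. The following sketch uses illustrative zone names and profile sets; real policies would also match addresses, services, and users.

```python
# Sketch of interzone security policies carrying protection profiles.
# Zone names and profile sets are illustrative.

POLICIES = [
    # (source zone, destination zone, action, protection profiles)
    ("trust",   "untrust", "permit", {"intrusion-detection", "antivirus"}),
    ("untrust", "dmz",     "permit",
     {"intrusion-detection", "url-filtering", "antivirus"}),
]

def match_policy(src_zone: str, dst_zone: str):
    """Return (action, profiles) for the first matching rule."""
    for s, d, action, profiles in POLICIES:
        if (s, d) == (src_zone, dst_zone):
            return action, profiles
    return "deny", set()          # default interzone isolation

action, profiles = match_policy("untrust", "dmz")
print(action, sorted(profiles))   # external access to DMZ servers is checked
action, _ = match_policy("untrust", "trust")
print(action)                     # deny: nothing allows Internet -> intranet
```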
Table 2-25 describes the recommended security policy design for common zones.
Access Network Represented by the Security Zone | Access Source | Trustworthiness | Recommended Security Policy
---|---|---|---
Internet | External users | Untrusted | Intrusion detection, URL filtering, and antivirus
Internet | Employees on the go | Medium | Intrusion detection, URL filtering, and antivirus
WAN | Enterprise branch | Medium | Intrusion detection and antivirus
Intranet | Enterprise employees | High | Intrusion detection and antivirus
Intranet | Guests | Low | Intrusion detection and antivirus
NAT Design
Network Address Translation (NAT) is an address translation technology that translates the source or destination IP addresses (and optionally port numbers) of packets. To allow campus intranet users with private IP addresses to access the Internet, configure NAT. As shown in Figure 2-55, if firewalls function as egress devices, pay attention to the following points when configuring NAT:
- If private IP addresses are used on the intranet, source NAT technology needs to be used to translate source IP addresses of packets to public IP addresses when user traffic destined for the Internet passes through the firewall. Network Address Port Translation (NAPT) is recommended to translate both IP addresses and port numbers, which enables multiple private addresses to share one or more public addresses. NAPT applies to scenarios with a few public addresses but many private network users who need to access the Internet.
- If intranet servers are used to provide server-related services for public network users, destination NAT technology is required for translating destination IP addresses and port numbers of the access traffic of public network users into IP addresses and port numbers of the servers in the intranet environment.
- When two firewalls operate in VRRP hot standby (master/backup) mode, IP addresses in the NAT address pool may be on the same network segment as the virtual IP addresses of the VRRP group configured on the uplink interfaces of the firewalls. If this is the case, after the return packets from the external network arrive at the PE, the PE broadcasts ARP packets to request the MAC address corresponding to the IP address in the NAT address pool. The two firewalls in the VRRP group have the same NAT address pool configuration. Therefore, the two firewalls send the MAC addresses of their uplink interfaces to the PE. In this case, you need to associate the hot standby status (master/backup) of the firewalls with the NAT address pool on each firewall, so that only the master firewall in the VRRP group responds to the ARP requests initiated by the PE.
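The two translation directions above can be sketched as two small tables: a NAPT table that maps private (address, port) pairs onto a shared public address, and a destination NAT table that maps a published (address, port) pair to an internal server. All addresses and ports below are illustrative.

```python
# Sketch of NAPT (source NAT) and destination NAT. All addresses, ports,
# and the shared public address are illustrative.

public_ip = "203.0.113.1"
napt_table = {}          # (private_ip, private_port) -> allocated public port
next_port = 10000

def source_nat(private_ip: str, private_port: int):
    """Translate a private (ip, port) pair to the shared public address."""
    global next_port
    key = (private_ip, private_port)
    if key not in napt_table:
        napt_table[key] = next_port   # allocate a fresh public port
        next_port += 1
    return public_ip, napt_table[key]

# Two private hosts share one public address, distinguished by port,
# which is why NAPT suits many users behind few public addresses.
print(source_nat("10.0.0.5", 51000))   # ('203.0.113.1', 10000)
print(source_nat("10.0.0.6", 51000))   # ('203.0.113.1', 10001)

# Destination NAT: a published (ip, port) is mapped to an internal server.
DNAT = {("203.0.113.2", 443): ("10.1.1.10", 8443)}

def destination_nat(dst_ip: str, dst_port: int):
    return DNAT.get((dst_ip, dst_port), (dst_ip, dst_port))

print(destination_nat("203.0.113.2", 443))  # ('10.1.1.10', 8443)
```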