NAT Server did not work due to a wrong policy-based route on the USG2200

Publication Date:  2014-07-30
Issue Description
A customer configured NAT Server for two internal servers on a USG2200. From the outside, he can access one internal server but cannot access the other.
This is the topology:

The test result:
Scanning ports on 105.x.y.100
105.x.y.100 is responding on port 80 (http).
105.x.y.100 is responding on port 443 (https).

Scanning ports on 105.x.y.101
105.x.y.101 isn't responding on port 80 (http).
105.x.y.101 isn't responding on port 443 (https).
However, a PC in the same LAN can access the server via its private IP 10.20.1.14, so the issue seemed to be caused by the USG2200.
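
The external port test can be reproduced with a simple TCP connection check. The following is a minimal sketch in Python (an illustration only, assuming the masked public addresses 105.x.y.100/105.x.y.101 are replaced with the real ones; it is not the scanner the customer used):

import socket

def port_responds(host, port, timeout=3.0):
    # Return True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "105.x.y.100" and "105.x.y.101" are the masked public IPs from this case.
for host in ("105.x.y.100", "105.x.y.101"):
    for port, name in ((80, "http"), (443, "https")):
        state = "is" if port_responds(host, port) else "isn't"
        print(host, state, "responding on port", port, "(" + name + ").")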


The related configuration in USG2200:

//The NAT Server entries that work normally
nat server 16 protocol tcp global 105.x.y.100 www inside 10.20.2.14 www
nat server 17 protocol tcp global 105.x.y.100 443 inside 10.20.2.14 443

//The NAT Server entries that do not work
nat server 18 protocol tcp global 105.x.y.101 www inside 10.20.1.14 www
nat server 19 protocol tcp global 105.x.y.101 443 inside 10.20.1.14 443


//The public IP
#
interface Vlanif152
alias Vlanif152
ip address 105.x.y.60 255.255.255.248
ip address 105.x.y.100 255.255.255.240 sub
ip address 105.x.y.101 255.255.255.240 sub
#
Alarm Information
None
Handling Process
(1) Check the packet filter policy: we found that it does not block the traffic.

firewall packet-filter default permit interzone trust untrust direction inbound
firewall packet-filter default permit interzone trust untrust direction outbound

policy interzone trust untrust inbound
policy 7
  action permit
  policy logging
  policy destination 10.20.1.14 mask 32

(2) For this kind of basic forwarding issue, the standard troubleshooting method is to check the firewall session table and the firewall traffic statistics. To make testing easier, we added a NAT Server entry for ICMP so that the server could be pinged from the public network.
nat server 31 protocol icmp global 105.x.y.101 inside 10.20.1.14
Then we pinged 105.x.y.101 from the public network and confirmed that the server could not be reached normally.
From the firewall session table, we can see that the ping packets reached the server 10.20.1.14 and that the firewall forwarded them; there was no packet loss on the firewall.
[USG2200]display firewall session table verbose protocol icmp
11:07:29  2014/05/20
Current Total Sessions : 1
  icmp  VPN:public --> public
  Zone: untrust--> trust  TTL: 00:00:20  Left: 00:00:15
  Interface: Vlanif1201  NextHop: 10.20.1.14  MAC: 00-50-56-84-30-1a
  <--packets:337 bytes:20220   -->packets:337 bytes:20220
  202.x.y.142:25180-->105.x.y.101:2048[10.20.1.14:2048]

From the traffic statistics, we can see that there was no packet loss.
[USG2200]display firewall statistic acl
11:07:38  2014/05/20

Current Show sessions count: 1
 
Protocol(ICMP) SourceIp(202.x.y.142) DestinationIp(105.x.y.101) 
SourcePort(25180) DestinationPort(2048) VpnIndex(public) 
           Receive           Forward           Discard 
Obverse : 8          pkt(s) 8          pkt(s) 0          pkt(s)
Reverse : 8          pkt(s) 8          pkt(s) 0          pkt(s)

 
Discard detail information:

From the above information, we learned that the route between the public network and the inner server 10.20.1.14 is reachable. The USG2200 sent the response packets back toward the public network, but the originating PC never received the replies. It seemed that the response packets were dropped on the uplink, and at this point we did not know why.

(3) However, the server with inner IP 10.20.2.14 provides normal service to the public network. It seemed very unlikely that the carrier would deny only 105.x.y.101 while permitting 105.x.y.100.
So we checked the configuration again and found that there was a policy-based route related to 10.20.2.14.
acl number 3011
rule 3 deny ip source 10.20.1.0 0.0.0.255 destination 10.20.4.0 0.0.0.255
rule 4 deny ip source 10.20.2.0 0.0.0.255 destination 10.110.0.0 0.0.255.255
rule 5 deny ip source 10.20.2.0 0.0.0.255 destination 10.20.0.0 0.0.255.255
rule 10 permit ip source 10.20.2.10 0
rule 15 permit ip source 10.20.2.14 0

policy-based-route www443 permit node 10
  if-match acl 3011
  apply ip-address next-hop 105.x.z.97

interface Vlanif1202
ip address 10.20.2.2 255.255.255.0
ip policy-based-route www443

However, there was no policy-based route for 10.20.1.14; its traffic matched only the default routes.
ip route-static 0.0.0.0 0.0.0.0 105.x.z.33
ip route-static 0.0.0.0 0.0.0.0 105.x.z.97 preference 100
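
To make the difference between the two servers concrete, the forwarding decision for the servers' reply packets can be sketched as follows. This is an illustrative Python model only (it simplifies away the ACL deny rules, route preferences, and session handling of the real device): reply traffic sourced from 10.20.2.14 enters on Vlanif1202, matches acl 3011, and is policy-routed to 105.x.z.97, while reply traffic sourced from 10.20.1.14 enters on Vlanif1201, has no policy-based route, and falls back to the preferred default route via 105.x.z.33.

# Simplified, assumed model of the next-hop choice for a server's reply packet.
ACL_3011_PERMIT = {"10.20.2.10", "10.20.2.14"}   # sources permitted by acl 3011
PBR_NEXT_HOP = "105.x.z.97"                      # apply ip-address next-hop in "www443"
DEFAULT_NEXT_HOP = "105.x.z.33"                  # preferred static default route

def reply_next_hop(src_ip, pbr_on_ingress_vlanif):
    # Policy-based routing on the ingress VLANIF wins if the source matches the ACL;
    # otherwise the packet follows the preferred default route.
    if pbr_on_ingress_vlanif and src_ip in ACL_3011_PERMIT:
        return PBR_NEXT_HOP
    return DEFAULT_NEXT_HOP

# Before the fix: Vlanif1202 has PBR applied, Vlanif1201 does not.
print(reply_next_hop("10.20.2.14", pbr_on_ingress_vlanif=True))    # 105.x.z.97 -> service works
print(reply_next_hop("10.20.1.14", pbr_on_ingress_vlanif=False))   # 105.x.z.33 -> replies lost upstream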

(4) This looked like the root cause, so we ran a test and added a policy-based route for 10.20.1.14.

acl number 3015
rule 5 permit ip source 10.20.1.14 0

policy-based-route test permit node 10
  if-match acl 3015
  apply ip-address next-hop 105.x.z.97

#
interface Vlanif1201
ip address 10.20.1.2 255.255.255.0
ip policy-based-route test
#
After that, the customer could access the server 10.20.1.14 from the public network. We still do not know why the access fails when the next hop is 105.x.z.33 but works when the next hop is 105.x.z.97; the difference lies in the uplink device or the carrier network, and more information would be needed to analyze it.
Root Cause
Possible causes of this kind of issue:
1) The packet filter policy blocks the data packets.
2) The route is not reachable.
3) The carrier restricts some specific IP addresses.
4) Other causes.
In this case, the reply packets from 10.20.1.14 followed the default route to next hop 105.x.z.33 and were dropped on the uplink; routing them to 105.x.z.97 with a policy-based route restored the service.
Suggestions
(1) There is an important command we can use to see that the reverse next hop differs between accessing 10.20.1.14 and 10.20.2.14:
        [USG2200-diagnose]display firewall session table verbose-hide both-direction
      If we had used this command to check the reverse next hop, we would have found the root cause earlier.
(2) For NAT Server issues where one inner IP works but another does not, comparing the reverse next hop of the two sessions is a good method.
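
As a rough illustration of suggestion (2), the comparison can be scripted. The helper below is hypothetical and only assumes session-table text in the format shown above (lines such as "Interface: Vlanif1201  NextHop: 10.20.1.14 ..."); paste the dumps captured while accessing each server and compare the extracted next hops.

import re

NEXTHOP_RE = re.compile(r"Interface:\s*(\S+)\s+NextHop:\s*(\S+)")

def next_hops(session_text):
    # Return (interface, next-hop) pairs found in a session-table dump.
    return NEXTHOP_RE.findall(session_text)

sample = """
  icmp  VPN:public --> public
  Zone: untrust--> trust  TTL: 00:00:20  Left: 00:00:15
  Interface: Vlanif1201  NextHop: 10.20.1.14  MAC: 00-50-56-84-30-1a
  202.x.y.142:25180-->105.x.y.101:2048[10.20.1.14:2048]
"""
print(next_hops(sample))   # [('Vlanif1201', '10.20.1.14')]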

END