ME60 Troubleshooting Guide V1.0 (VRPv8)

This document provides a maintenance guide for the device, including daily maintenance, emergency maintenance, and typical troubleshooting.

MTU Troubleshooting

MTU Problem Troubleshooting

MTU Problem Description

MTU definitions and fragmentation mechanisms vary with device vendor and model. On carrier networks, inconsistent MTU settings degrade user experience, causing problems such as lag in online games, web pages that fail to load, emails with attachments that cannot be delivered, and dialog boxes that fail to open. These problems are usually caused by interface MTU inconsistency.

In addition, interface MTU inconsistency also causes OSPF, IS-IS, L2VPN, and VPLS connection failures.

MTU Troubleshooting Procedure

The troubleshooting procedure is as follows:

  1. Analyze the path through which data packets pass.
  2. View MTU values of outbound interfaces along the path through which data packets pass and MTU values of transmission devices between nodes.
  3. Send ping packets with sizes greater than, less than, and equal to the interface MTU to each node along the path (see the command sketch after this procedure).

    If the ping packets with sizes greater than the interface MTU fail to be forwarded and the ping packets with sizes less than and equal to the interface MTU are successfully forwarded, the MTU settings cause the problem.

  4. Analyze the headers in the dropped ping packets.
  5. Increase the interface MTU to the size of the largest dropped packet.

    The interface MTU setting must be based on the board-specific MTU fragmentation mechanisms on Huawei routers and MTU definitions on vendor-specific devices.

  6. Repeat steps 3, 4, and 5 until no packets are dropped.
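
The following command sketch illustrates step 3 on a Huawei router; the interface name is an example and next-hop-address is a placeholder. The -f keyword sets the DF flag, and the -s value specifies the ICMP payload length, so a 1500-byte interface MTU corresponds to a 1472-byte payload (1500 - 20-byte IP header - 8-byte ICMP header):

# View the MTU of the outbound interface (example interface name).
<HUAWEI> display interface GigabitEthernet1/0/0
# Ping the next hop with DF set, producing IP packets smaller than, equal to, and larger than the 1500-byte MTU.
<HUAWEI> ping -f -s 1400 next-hop-address
<HUAWEI> ping -f -s 1472 next-hop-address
<HUAWEI> ping -f -s 1500 next-hop-address

If only the last ping fails, packets larger than the interface MTU are being dropped on this hop.
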
MTU Troubleshooting in MPLS Scenarios

The troubleshooting procedure in MPLS scenarios is similar to the typical MTU troubleshooting procedure. In addition, note the following issues:

  • In ME60 MPLS VPN scenarios, P devices forward MPLS packets in a best-effort manner without checking MPLS packet sizes or fragmenting MPLS packets based on specified MTU values.

    A P device can fragment the IP data encapsulated in an MPLS packet based on an MTU value only when all of the following conditions are met:
    1. Fragmentation-capable subcards are running.
    2. The fragmentation-capable subcards provide both the P NNIs and the PE NNIs.
    3. The mpls l3vpn fragment enable command is run on the fragmentation-capable subcards to enable L3VPN fragmentation.
  • If some MPLS packets with sizes greater than the MPLS MTU are discarded by P devices, increase the interface MTU, not the MPLS MTU, to minimize packet loss (see the configuration sketch after the note below). Increasing the MPLS MTU alone cannot resolve this problem, because if the MPLS MTU is greater than the interface MTU, the interface MTU value takes effect.
  • Although the ME60 forwards MPLS packets greater than the MPLS MTU in a best-effort manner, some other devices may discard these packets. Therefore, analyze the number of labels in MPLS packets during troubleshooting on a network with both ME60s and other devices.
NOTE:
When penultimate hop popping (PHP) is enabled on an LSP within an MPLS L3VPN, the penultimate ME60 removes the last label from an MPLS packet and forwards an IP packet to the egress based on the MPLS MTU, not the interface MTU.
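
As a configuration sketch, the interface MTU and MPLS MTU described above can be adjusted as follows on a VRPv8 device. The interface name and values are examples, MPLS is assumed to be already enabled on the interface, and the view in which the mpls l3vpn fragment enable command is run depends on the subcard, so it is not shown here:

<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet1/0/0
# Interface MTU: the value that actually limits forwarding on this interface.
[~HUAWEI-GigabitEthernet1/0/0] mtu 1508
# MPLS MTU: applies to labeled packets; if it exceeds the interface MTU, the interface MTU takes effect.
[~HUAWEI-GigabitEthernet1/0/0] mpls mtu 1508
[~HUAWEI-GigabitEthernet1/0/0] commit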

MTU Typical Fault Cases

L3VPN Users Fail to Access Some Websites
Symptom

On the MPLS L3VPN shown in Figure 4-41, users attached to the customer edge (CE) cannot access some websites.

Figure 4-41 MPLS L3VPN

Trap Information

N/A

Cause Analysis

Perform the following steps:

  1. Analyze the path through which data packets pass.

    The network topology shows that L3VPN user packets pass through two routers named PE1 and PE2. PE1 is a Huawei router, and PE2 is a non-Huawei router.

  2. View MTU values configured on interfaces along the path.

    The interface MTUs on PE1 and PE2 are 1500 bytes.

  3. Enable PE1 to send ping packets within a specific VPN to PE2.

    The VPN ping is successful when the ping packet sizes are less than or equal to 1500 bytes, whereas the ping fails when the ping packet sizes are greater than 1500 bytes (a command sketch follows this analysis). The incorrect MTU setting causes the L3VPN user access failure.

  4. Analyze L3VPN user packet headers.

    Each L3VPN user sends a request to a web server using the Hypertext Transfer Protocol (HTTP) over a TCP connection.

    The data packets returned by the web server are large. If a 1500-byte response IP packet is sent to PE2, PE2 adds two labels (4 bytes + 4 bytes) to the packet and sets the DF field to 1 before forwarding the packet. The packet becomes 1508 bytes long. PE2 finds that the packet size is greater than the interface MTU and discards the packet. Then, PE2 replies with an ICMP Datagram Too Big message to the web server.

    • If the web server reduces the packet size to a value less than 1500 bytes, the response from the web server can reach the L3VPN user, and the L3VPN user can access the web server.
    • If the Datagram Too Big message cannot reach the web server, or the web server receives this message but does not change the MTU value, the web server still sends 1500-byte packets. Upon receipt, PE2 discards the packets. As a result, the L3VPN user cannot access the website.

    The preceding analysis shows that the incorrect MTU setting causes the L3VPN user access failures.
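
A sketch of the VPN ping used in step 3 follows; vpn-name and destination-ip-address are placeholders, -f sets the DF flag, and the -s value is the ICMP payload length (add the 20-byte IP header and 8-byte ICMP header to obtain the IP packet size). Sweeping sizes around the 1500-byte interface MTU brackets the effective MTU along the path:

# IP packet sizes of 1428, 1500, and 1528 bytes, respectively.
<HUAWEI> ping -f -s 1400 -vpn-instance vpn-name destination-ip-address
<HUAWEI> ping -f -s 1472 -vpn-instance vpn-name destination-ip-address
<HUAWEI> ping -f -s 1500 -vpn-instance vpn-name destination-ip-address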

Troubleshooting Procedure
  1. Increase the MTU on the NNI on each PE to 1508 bytes (a configuration sketch follows this procedure).

    After the modification, the L3VPN user still cannot access any websites.

  2. Check the path through which packets pass. A transmission device resides between PE1 and PE2.

  3. Check the MTU fragmentation mechanism on the transmission device.

    The transmission device calculates a packet size based on the IP MTU plus 18 bytes (DMAC + SMAC + Length/Type + CRC). An L3VPN packet (1508 bytes) plus 18 bytes is 1526 bytes, while the MTU value on the transmission device is 1524 bytes. The transmission device discards packets with sizes greater than 1524 bytes. As a result, the L3VPN user device attempts to resend HTTP packets over the TCP connection but fails to access any website.

  4. Change the MTU on the transmission device to a value greater than or equal to 1526 bytes so that the transmission device does not discard user packets.

  5. Initiate a ping.

The ping packets that are 1508 bytes long can reach the destination. The L3VPN user can access all websites, and the problem is resolved.
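
A minimal sketch of steps 1 and 5, assuming an example NNI name and placeholder VPN parameters, and assuming that the 1508 bytes refers to the IP packet size (so the ICMP payload is 1480 bytes):

# Step 1: raise the PE NNI MTU to carry two 4-byte labels on top of 1500-byte IP packets.
<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet1/0/1
[~HUAWEI-GigabitEthernet1/0/1] mtu 1508
[~HUAWEI-GigabitEthernet1/0/1] commit
[~HUAWEI-GigabitEthernet1/0/1] return
# Step 5: verify that 1508-byte packets now reach the destination.
<HUAWEI> ping -f -s 1480 -vpn-instance vpn-name destination-ip-address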

Suggestion

If an MTU fault occurs, check MTU settings on both network devices and transmission devices.

Take the label size into account in the MTU setting, because an MPLS VPN packet with labels is larger than the original IP packet.

VPN Sites Cannot Ping Each Other Using Jumbo Frames with DF=1
Symptom
As shown in Figure 4-42, the MPLS backbone has two networks:
  • One is an MPLS L2 aggregation network that allows jumbo frames to pass; the maximum length of an IP datagram (IP header + IP payload) is 9000 bytes. The sites attached to the L2 aggregation network can access each other and can access websites on the Internet.
  • The other is an MPLS L3 IP backbone network on which all router interface MTUs use the default value.
Now the MPLS L3 IP backbone network is connected to the MPLS L2 aggregation network. After the connection is complete, sites spanning the L2 and L3 IP backbones cannot ping each other using 8973-byte ping packets.
Figure 4-42 MPLS VPN network

Trap Information

None

Cause Analysis

The sites attached to the L2 network can access each other and can access websites on the Internet, so the fault may lie between PE2 and site2.

Perform the following steps to locate the fault.

  1. Check whether site1 can ping site2 by using the ping destination-address command.

    • If not, there is a routing problem. Troubleshoot the route between site1 and site2.

    • If yes, go to step 2.

  2. Check whether PE3 can ping PE4 by using the ping -f -s packetsize -vpn-instance vpn-name destination-ip-address command (see the ping sketch after this list). (Note: packetsize = 9000 bytes - 20-byte IP header - 8-byte ICMP header = 8972 bytes, because packetsize indicates the ICMP payload length, excluding the ICMP header and IP header. The packet originates from the CPU, so fragmentation is calculated based on the IP header and IP payload only, excluding labels. If the L3VPN uses a TE tunnel and the L3VPN packet originates from the CPU, L3VPN packet fragmentation is based on the MTU set on the tunnel interface.)

    • If yes, check the MTU of the CE-facing interfaces on PE3 and PE4 and the MTU of the PE-facing interfaces on the edge device at site2, and set all of them to 9000 bytes.

    • If not, go to step 3.

  3. Check the MTU and MPLS MTU values of outbound interfaces along the path through which ping -vpn-instance packets pass, including the MTU values of transmission devices and L2 switches between nodes.

    • If the L3VPN uses a TE tunnel or LDP over TE, the ping -vpn-instance packets on the PE are sent through the tunnel interface, so the MTU value on the tunnel interface also takes effect on these packets.
    • If the L3VPN uses an LDP LSP (not LDP over TE), the ping -vpn-instance packets on the PE are not sent through the tunnel interface, so the MTU value on the tunnel interface does not take effect on these packets.
    • If there are transmission devices or L2 switches between the routers, ensure that they allow 9000-byte IP packets to pass through.
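
A sketch of the check in step 2 follows; vpn-name and destination-ip-address are placeholders. An ICMP payload of 8972 bytes plus the 20-byte IP header and 8-byte ICMP header produces a 9000-byte IP datagram, and -f sets the DF flag:

# 9000-byte IP datagram with DF set; if this fails while smaller sizes succeed, continue with step 3.
<HUAWEI> ping -f -s 8972 -vpn-instance vpn-name destination-ip-address
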
Troubleshooting Procedure

After the preceding steps are complete, perform the following steps to allow 9000-byte IP packets to pass through the L3 network:

  1. Set the interface MTU on all UNIs (User-to-Network Interfaces) to 9000 bytes.

  2. Set the interface MTU and MPLS MTU on NNIs (Network-to-Network Interfaces) to 9000 + 4*N bytes, where N indicates the number of MPLS labels in the MPLS packet and depends on the L3VPN tunnel type. For details, see Number of labels carried in an MPLS packet in various scenarios.

  3. If the L3VPN uses a TE tunnel or LDP over TE, also set the interface MTU and MPLS MTU on the tunnel interface to 9000 + 4*N bytes (N indicates the number of MPLS labels in the MPLS packet). A configuration sketch follows the note below.

    NOTE:

    For calculation methods of MPLS L3VPN packet length during packet fragmentation, see "MPLS MTU Fragmentation".
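
A minimal configuration sketch of steps 1 to 3, assuming example interface names, an example tunnel interface, and N = 2 labels (9000 + 4*2 = 9008 bytes); adjust N to the actual tunnel type:

<HUAWEI> system-view
# Step 1: the UNI carries 9000-byte IP packets.
[~HUAWEI] interface GigabitEthernet1/0/1
[~HUAWEI-GigabitEthernet1/0/1] mtu 9000
[~HUAWEI-GigabitEthernet1/0/1] quit
# Step 2: the NNI leaves room for the MPLS labels on top of 9000-byte IP packets.
[~HUAWEI] interface GigabitEthernet1/0/2
[~HUAWEI-GigabitEthernet1/0/2] mtu 9008
[~HUAWEI-GigabitEthernet1/0/2] mpls mtu 9008
[~HUAWEI-GigabitEthernet1/0/2] quit
# Step 3: if a TE tunnel or LDP over TE is used, adjust the tunnel interface as well.
[~HUAWEI] interface Tunnel10
[~HUAWEI-Tunnel10] mtu 9008
[~HUAWEI-Tunnel10] mpls mtu 9008
[~HUAWEI-Tunnel10] commit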

Suggestion

On an MPLS L3VPN network, the interface MTU and MPLS MTU values on core routers' NNIs should be greater than those on the core routers' UNIs so that the NNIs can forward labeled packets coming from the UNIs.

To enable core routers to support more types of labeled packets, increase both the interface MTU values and MPLS MTU values.

OSPF Neighbor Relationship Cannot Be Established After a Service Cutover Is Performed
Symptom

On the network shown in Figure 4-43, Router A and Router C are Huawei products, and Router B and the switch are from another vendor.

Figure 4-43 OSPF networking

During a service cutover, the optical fiber connecting Router C to the switch is removed from Router C and installed on Router A. Before the cutover, Router C and Router B established an OSPF neighbor relationship. After the cutover, Router A and Router B cannot establish an OSPF neighbor relationship, and their OSPF neighbor relationship remains in the Exchange state.

The interface configurations on Router C and Router A are correct.

Router C's interface configuration is as follows:

#
interface Vlanif351
description XXXX
ip address x.x.x.158 255.255.255.252
ospf cost 30
mpls
mpls ldp
#

Router A's interface configuration is as follows:

#
interface GigabitEthernet3/1/15.351
mtu 1560
description XXXX
control-vid 351 dot1q-termination
dot1q termination vid 351
ip address x.x.x.158 255.255.255.252
pim sm
ospf cost 30
ospf mtu-enable
mpls
mpls ldp
arp broadcast enable
#
Trap Information

N/A

Cause Analysis

If OSPF packets are dropped, the OSPF neighbor relationship remains in the Exchange state. Perform the following steps to analyze the cause:

  1. Configure Router A to send ping packets to Router B.

    The ping is successful. The route between Router A and Router B is reachable.

  2. Check whether MTU negotiation is successful.

    Run the display ospf error command on Router A. The MTU option mismatch field is 0, which indicates that MTU negotiation is successful.

  3. Check whether interface MTU values match.

    Both interface MTU values configured on Router A and Router C are 1560 bytes.

    Enable Router A to send 1560-byte ping packets to Router B (see the command sketch after this analysis). The ping fails.

    Check the JumboOctets count. The outbound statistics change, but the inbound statistics remain unchanged, which indicates that the ICMP messages are sent successfully but no replies are received.

  4. Change the interface MTUs on Router A and Router B to 1500 bytes.

    An OSPF neighbor relationship is successfully established between the two routers.

  5. The preceding analysis shows that the switch may have a configuration error.

  6. Check the MTU setting on the switch.

    The MTU value on the switch is set to 1546 bytes, which is different from the MTU values on the routers.

    The preceding analysis shows that the incorrect MTU setting on the switch causes the problem.
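
The commands used in this analysis can be sketched as follows; peer-address is a placeholder, the interface name is taken from the configuration above, and output formats vary by version, so only the commands are shown. Because -s specifies the ICMP payload length, a 1560-byte IP packet corresponds to a 1532-byte payload (1560 - 20 - 8):

# Step 2: check the MTU option mismatch counter.
<HUAWEI> display ospf error
# Step 3: send a 1560-byte IP packet with DF set toward Router B.
<HUAWEI> ping -f -s 1532 peer-address
# Step 3: compare outbound and inbound statistics such as the JumboOctets count.
<HUAWEI> display interface GigabitEthernet3/1/15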

Troubleshooting Procedure
  1. Check the MTU definition on the switch.

    Revert the MTU value to 1560 bytes. Use the bisection method to send ping packets with sizes ranging from 1500 to 1560 bytes through the switch (a ping sketch follows this procedure). The ping results show that a maximum of 1518 bytes can be sent.

    As the ICMP message size specified in a ping packet is 1518 bytes, the IP packet size is 1546 bytes, which is equal to the switch MTU. The IP packet size includes the 1518-byte ICMP message, 8-byte ICMP header, and 20-byte IP header.

    In conclusion, the switch's MTU is equal to the IP MTU and has the same meaning as the ME60's interface MTU.

  2. Change the MTU on the switch to 1560 bytes. The OSPF neighbor relationship between Router A and Router B is successfully established, and the problem is resolved.
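
A sketch of the bisection in step 1, assuming the pings are sent from Router A toward Router B through the switch; peer-address is a placeholder and the -s values are ICMP payload lengths. The largest payload that passes (1518 bytes) plus the 8-byte ICMP header and 20-byte IP header gives the 1546-byte packet size that the switch permits:

<HUAWEI> ping -f -s 1560 peer-address    # fails
<HUAWEI> ping -f -s 1530 peer-address    # fails
<HUAWEI> ping -f -s 1518 peer-address    # succeeds: largest payload that passes
<HUAWEI> ping -f -s 1519 peer-address    # fails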

Suggestion

To analyze how a vendor-specific device defines a packet size, use the bisection method to send ping packets with various sizes to find the maximum number of bytes that can be sent and analyze the ping packet structure.
