NetEngine 8000 M14, M8 and M4 V800R023C10SPC500 Configuration Guide

NQA Configuration

This chapter describes how to configure Network Quality Analysis (NQA) to monitor the network operating status and collect network operating indicators in real time.

Overview of NQA

This section describes the background and functions of network quality analysis (NQA).

As carriers' value-added services develop, carriers and users alike place higher requirements on quality of service (QoS). With conventional IP networks now carrying voice and video services, it has become commonplace for carriers and their customers to sign Service Level Agreements (SLAs).

To allow users to check whether the committed bandwidth meets requirements, carriers must configure devices to collect statistics about network indicators, such as delay, jitter, and packet loss rate. In this way, network performance can be promptly analyzed.

The NetEngine 8100 M, NetEngine 8000E M, and NetEngine 8000 M provide NQA to meet the preceding requirements.

NQA measures the performance of each protocol running on a network and helps carriers collect network operation indicators, such as the delay of a TCP connection, packet loss rate, and path maximum transmission unit (MTU). Carriers provide users with differentiated services and charge users differently based on these indicators. NQA is also an effective tool to diagnose and locate faults on a network.

The following parameters are commonly used for NQA configuration:

  • frequency: specifies the interval at which an NQA test instance is automatically executed.
  • interval: specifies the interval at which NQA test packets are sent.
  • jitter-packetnum: specifies the number of packets sent in each probe of a jitter test.
  • probe-count: specifies the number of test probes for an NQA test instance.
  • timeout: specifies the timeout interval of an NQA test instance.

Figure 18-94 shows the relationship between these parameters.

Figure 18-94 Relationships between common NQA parameters
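
For example, with hypothetical values of probe-count 3, interval 6 seconds, and timeout 5 seconds, one execution of a test instance takes about (3 – 1) x 6 + 5 = 17 seconds to send all probes and wait for the last response. Set frequency to a value larger than this (for example, 60 seconds); otherwise, the Completion field in the test results may be displayed as no result.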

Configuration Precautions for NQA

Feature Requirements

Table 18-32 Feature requirements (each entry below lists the feature requirement, followed by the series and the models to which it applies)

During an ISSU, the statistics of the following features are inaccurate. After the ISSU is complete, the statistics can be collected normally:

Y.1564, RFC 2544, IFIT, IPFPM, Y.1731, and TWAMP

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

When the RFC 2544 initiator is bound to a flexible access sub-interface to initiate a measurement, configuring static ARP entries for the source and destination IP addresses causes the RFC 2544 measurement path to fail to be learned.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

The RFC 2544 initiator's inward measurement supports flexible access sub-interfaces. If different sub-interfaces of an interface are configured with the same VLAN, only one sub-interface can be measured.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

1. MTrace is used for Layer 3 multicast on IPv4 public networks and Rosen public networks and applies to scenarios where traffic is transmitted at a constant rate and the control plane is stable.

2. Due to measurement errors, MTrace cannot be used to locate faults in scenarios where a small number of packets are lost randomly.

3. Currently, only the (S, G) count and traffic rate statistics when the group address parameter is carried are supported. Statistics cannot be collected when the group address parameter is not carried.

4. This function is not applicable to scenarios where the same (S, G) traffic passes through multiple upstream boards (for example, private network-based load balancing performed on the Rosen public network on a new board in NGSF mode). In such scenarios, the rate statistics are inaccurate, and you are advised not to run the mtrace command.

5. If a device that does not support traffic rate statistics collection exists on the MTrace link, the test result of the device cannot be used as reference.

6. During an active/standby switchover, the MTrace statistics are inaccurate and cannot be used as reference.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

L2VPN PW ping/tracert can test a PW with only one segment, not a PW with multiple segments.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

In ping and trace tests, IPv6 SIDs support only unicast addresses.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

In scenarios where static routes are associated with NQA and the NQA test paths depend on those static routes, only directly connected links are supported.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

1. Y.1564 supports only unicast detection and does not support broadcast detection.

2. Y.1564 does not support flexible access of EVC sub-interfaces.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

During the detection of a PW path using a ping test, if the remote IP address is changed so that it differs from the peer IP address specified in the ping command, the result of the ping test that has already been initiated does not change. In this case, initiate a new ping test; otherwise, the ping test result is inaccurate.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

For the ping/tracert ipv6-sid function, if the outbound interface of the End.OP SID does not have an IPv6 global address, the source address carried in the detection packet sent by the local end may be an unreachable IPv6 address. As a result, the test fails, but service traffic forwarding is not affected. You are advised to specify a source address using -a when initiating a test.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

In an EVPN/L3VPN over SRv6 TE Policy scenario, when the L3VPN mode is uniform and a tracert test is performed between PEs, information about intermediate hops is not displayed.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

The tracert lsp command is used to detect an LSP tunnel. Inter-AS P nodes are not supported: if a timeout is displayed in the command output for an inter-AS P node, the tracert test skips that node and continues to detect the egress.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

When L2VPN PW ping/tracert is used to check the public network backup plane in multi-tunnel scenarios, only the outer backup plane can be tested (in VPLS and VLL scenarios).

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

The server and client of an ICMP jitter, UDP jitter, or TCP test instance must be VRP-based devices.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

Whether an ICMP jitter, UDP jitter, or TCP test is successful depends on the board hardware capability of the device.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

In a scenario where FRR is configured on the headend of an SRv6 TE Policy, ping/trace cannot be performed on the SRv6 TE Policy during FRR switching.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

1. A third-party responder does not send response packets whose payload contains Huawei proprietary extended information. In this case, the delay and jitter measured by ICMP jitter single-ended detection are large in inter-board scenarios.

2. A third-party responder does not support ICMP timestamp packets (standard non-private packets); such packets are parsed as ICMP echo packets. In this case, ICMP jitter single-ended detection supports only two-way delay measurement and does not support one-way delay measurement.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

1. BIERv6 ping allows return traffic to be transmitted through routes only, not through tunnels.

2. In load balancing scenarios, only one path can be detected.

3. The specified target BFR IDs must be in the same set.

4. A correct BSL must be specified. Otherwise, the ping test may fail.

5. BIERv6 ping does not support fragmentation.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

In a segment-routing ping/trace test delivered by the NMS MIB, the default source address carried in the test packet sent by the local end may be an unreachable IPv4 address. As a result, the test fails, but service traffic forwarding is not affected. You are advised to specify a source address, for example, an LSR ID, to initiate a test.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

In EVPN VPLS MAC ping and EVPN VPWS ping/tracert inter-AS active-active scenarios, Huawei devices can interwork with only Huawei devices.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

1. BIERv6 tracert allows return traffic to be transmitted through routes only, not through tunnels.

2. In load balancing scenarios, only one path can be detected.

3. A correct BSL must be specified. Otherwise, the tracert operation may fail.

4. BIERv6 tracert does not support packet fragmentation.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

In a cross-domain or private L3VPN detection scenario where a P node does not have any return route but supports MPLS forwarding, when the tracert ipv4 command is run to test an MPLS tunnel, the display of cross-domain P node information depends on whether the associated board supports tracert fast reply.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

1. IPv6 addresses can be set only to global addresses, not link-local addresses.

2. The ICMP jitter IPv6 instance supports echo packets and does not support timestamp packets.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

Ping function for BRAS user access in CU separation scenarios:

1. When a VNF functions as the initiator, the ping operation can be initiated only on the CP by default. If the ping operation is initiated on the UP, an error is reported.

2. When a VNF functions as a ping responder, only fast reply is supported. In this scenario, the icmp-reply fast command must be run on the VNF to enable the fast reply function. The responder discards request packets in non-fast reply scenarios.

3. In CU separation scenarios, BRAS users cannot initiate ping operations on a UP. After the set access ping-packet to-up command is run in the diagnostic view, a VNF functioning as an initiator can initiate ping operations for a user IP address on a UP. While this command is in effect, ping operations initiated by the CP are invalid. By default, the command setting expires after 15 minutes.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

The destination IP address configured on the client cannot be an address whose route uses the management network interface as the outbound interface. Do not deploy a test instance for which the outbound interface of the route is a management interface; otherwise, the hardware-based packet sending measurement results obtained during a jitter test are inaccurate.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

1588v2 clock synchronization must be deployed for one-way delay measurement. Otherwise, the hardware-based packet sending measurement results of a jitter test are inaccurate.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

NQA ICMP jitter supports hardware-based packet sending and reflector interconnection.

The reflector must support standard ICMP timestamp packets; otherwise, it cannot be interconnected. In addition, when replying with an ICMP timestamp response packet, the reflector must carry the private extended payload information from the request packet in the response. If the reflector carries this payload information, the hardware-based packet sending process is used and the delay measurement is more accurate; otherwise, the software-based packet sending process is used.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

If the server end supports hardware-based packet reply, the board that replies to the packets must support 1588v2 clock synchronization.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

If the client uses hardware-based packet sending, the packet-sending board must support 1588v2 clock synchronization. Select a board that supports 1588v2 clock synchronization for packet sending; otherwise, the hardware-based packet sending measurement results obtained during a jitter test are inaccurate.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

The MTU of the tested link cannot be less than 64. Properly plan the MTU on the network link to be measured. Otherwise, the jitter measurement result is inaccurate.

NetEngine 8000 M

NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000 M4/NetEngine 8000E M14/NetEngine 8100 M8/NetEngine 8000 M14K/NetEngine 8100 M14/NetEngine 8000E M8/NetEngine 8000 M14

Configuring NQA to Monitor an IP Network

NQA test instances can be used to monitor IP networks. Before configuring NQA, familiarize yourself with the usage scenario of each test instance and complete the pre-configuration tasks.

Usage Scenario

Table 18-33 NQA test instances used to monitor IP networks

  • DNS test: detects the speed at which a DNS name is resolved to an IP address.

  • ICMP test: checks the connectivity and measures the packet loss rate, delay, and other indicators of an IP network from end to end.

  • TCP test: checks the connectivity and measures the packet loss rate, delay, and other indicators of an IP network through a TCP connection.

  • UDP test: measures the round-trip delay (RTD) of UDP packets exchanged between Huawei devices.

  • SNMP test: measures the communication speed between a host and an SNMP agent using UDP packets.

  • Trace test: checks the connectivity and measures the packet loss rate, delay, and other indicators of an IP network hop by hop. It can also monitor the packet forwarding path.

  • UDP jitter test: measures the end-to-end jitter of various services and can also simulate a voice test. A UDP jitter test can be used when an ICMP jitter test cannot be used because the ICMP reply function is disabled on network devices for security purposes.

  • ICMP jitter test: measures the end-to-end jitter of various services.

  • Path jitter test: identifies the router along the path whose jitter value is large.

  • Path MTU test: obtains the maximum packet size that can be transmitted along a path without fragmentation.

Pre-configuration Tasks

Before configuring NQA to monitor an IP network, configure static routes or an Interior Gateway Protocol (IGP) to ensure IP route reachability among nodes.
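
As a minimal sketch (assuming hypothetical addresses), the following commands configure a static route on the NQA client so that the test destination 10.2.2.2 is reachable through the next hop 10.1.1.2; an IGP such as OSPF or IS-IS can be used instead:

  system-view
   ip route-static 10.2.2.2 32 10.1.1.2
   commit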

Configuring a DNS Test

This section describes how to configure a DNS test to detect the speed at which a DNS name is resolved to an IP address.

Context

A DNS test is based on UDP packets. Only one probe packet is sent in one DNS test to detect the speed at which a DNS name is resolved to an IP address. The test result clearly reflects the performance of the DNS protocol on the network.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run dns resolve

    Dynamic DNS is enabled.

  3. Run dns server ip-address [ vpn-instance vpn-name ]

    An IP address is configured for the DNS server.

  4. Run dns server source-ip ipv4Addr

    The IP address of the local DNS client is configured as the source address for DNS communication.

  5. Create an NQA test instance and set its type to DNS.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type dns command to set the test instance type to DNS.
    3. (Optional) Run the description description command to configure a description for the test instance.
  6. Run dns-server ipv4 ip-address

    An IP address is configured for the DNS server in the DNS test instance.

  7. Run destination-address url urlValue

    A destination URL is specified for the NQA test instance.

  8. (Optional) Set optional parameters for the test instance and simulate real service flows.
    1. Run the agetime ageTimeValue command to set the aging time of the NQA test instance.
    2. Run the source-address ipv4 srcAddress command to configure a source IP address for the DNS test instance.
  9. (Optional) Configure a test failure condition.

    Run timeout time

    A timeout period is configured for a response packet.

  10. (Optional) Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  11. (Optional) Configure the device to send trap messages.
    1. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.

    2. Run the threshold rtd thresholdRtd command to configure a round-trip delay (RTD) threshold.
    3. Run the send-trap { all | { rtd | testfailure | probefailure | testcomplete | testresult-change }* } command to configure conditions for sending trap messages.
  12. (Optional) Run vpn-instance vpn-instance-name

    A VPN instance name is configured for the NQA test instance.

  13. Schedule the NQA test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.

    2. Run the start command to start the NQA test instance.

      The start command has multiple formats. Choose one of the following formats as needed:

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  14. Run commit

    The configuration is committed.
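
The following is a minimal configuration sketch that combines the preceding steps, assuming a hypothetical DNS server address (10.1.1.1), domain name (www.example.com), and test instance name (admin dns1):

  system-view
   dns resolve
   dns server 10.1.1.1
   nqa test-instance admin dns1
    test-type dns
    dns-server ipv4 10.1.1.1
    destination-address url www.example.com
    start now
   commit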

Configuring an ICMP Test

An Internet Control Message Protocol (ICMP) test can be used to check the connectivity and measure the packet loss rate, delay, and other indicators of an IP network from end to end.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Create an NQA test instance and set its type to ICMP.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type icmp command to set the test instance type to ICMP.
    3. (Optional) Run the description description command to configure a description for the test instance.
  3. Run destination-address { ipv4 destAddress | ipv6 destAddress6 }

    A destination address (i.e., IP address of the NQA server) is configured for the test instance.

  4. (Optional) Set optional parameters for the test instance and simulate real service flows.
    1. Run the agetime ageTimeValue command to set the aging time of the NQA test instance.
    2. Run the datafill fill-string command to configure the padding characters to be filled into test packets.
    3. Run the datasize datasizeValue command to set the size of the Data field in an NQA test packet.
    4. Run the probe-count number command to set the number of probes in a test for the NQA test instance.

    5. Run the interval seconds interval command to set the interval at which NQA test packets are sent.

    6. Run the sendpacket passroute command to configure the device to send NQA test packets without performing routing table lookup.

    7. Run the source-address { ipv4 srcAddress | ipv6 srcAddr6 } command to configure a source IP address for NQA test packets.
    8. Run the source-interface ifType ifNum command to specify a source interface for NQA test packets.
    9. Run the tos tos-value [ dscp ] command to configure a ToS value for NQA test packets.

    10. Run the ttl ttlValue command to configure a TTL value for NQA test packets.

    11. Run the nexthop { ipv4 ipv4Address | ipv6 ipv6Address } command to configure a next-hop address for the test instance.
  5. (Optional) Run forwarding-simulation inbound-interface { ifName | ifType ifNum }

    The inbound interface for simulated packets is configured.

  6. (Optional) Run path-type bypass

    The device is configured to send Echo Request packets through an IP fast reroute (FRR) bypass LSP.

  7. (Optional) Configure test failure conditions.
    1. Run the timeout time command to configure a timeout period for response packets.

      If no response packets are received within the timeout period, the probe fails.

    2. Run the fail-percent percent command to configure a failure percentage for the NQA test instance.

      If the percentage of failed probes to total probes is greater than or equal to the configured failure percentage, the test is considered as a failure.

  8. (Optional) Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  9. (Optional) Configure the device to send trap messages.
    1. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.

    2. Run the threshold rtd thresholdRtd command to configure a round-trip delay (RTD) threshold.
    3. Run the send-trap { all | { rtd | testfailure | probefailure | testcomplete | testresult-change }* } command to configure conditions for sending trap messages.
  10. (Optional) Run vpn-instance vpn-instance-name

    A VPN instance name is configured for the NQA test instance.

  11. Schedule the NQA test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.

      If the following condition is met, the Completion field in the test results may be displayed as no result:

      • frequency < (probe-count – 1) x interval + timeout + 1

    2. Run the start command to start the NQA test instance.

      The start command has multiple formats. Choose one of the following formats as needed:

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  12. (Optional) In the system view, run whitelist session-car { nqa-icmp { cir cir-value | cbs cbs-value | pir pir-value | pbs pbs-value } * | nqa-icmpv6 { cir cir-value | cbs cbs-value | pir pir-value | pbs pbs-value } * }

    The session-CAR value of the ICMP test instance is adjusted.

    The session CAR function is enabled by default. If the session CAR function is abnormal, you can run the whitelist session-car { nqa-icmp | nqa-icmpv6 } disable command to disable it.

  13. Run commit

    The configuration is committed.
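
The following is a minimal configuration sketch for an ICMP test instance, assuming a hypothetical destination address (10.2.2.2) and test instance name (admin icmp1); with probe-count 3, interval 6 seconds, and timeout 5 seconds, a frequency of 60 seconds satisfies the condition described in the scheduling step:

  system-view
   nqa test-instance admin icmp1
    test-type icmp
    destination-address ipv4 10.2.2.2
    probe-count 3
    interval seconds 6
    timeout 5
    frequency 60
    start now
   commit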

Configuring a TCP Test

A TCP test can be used to check the connectivity and measure the packet loss rate, delay, and other indicators of an IP network through a TCP connection.

Procedure

  • Configure an NQA server for the TCP test.

    1. Run system-view

      The system view is displayed.

    2. Run nqa-server tcpconnect [ vpn-instance vpn-instance-name ] ip-address port-number

      The IP address and port number used to monitor TCP services are specified on the NQA server.

    3. Run commit

      The configuration is committed.

  • Configure an NQA client for the TCP test.
    1. Run system-view

      The system view is displayed.

    2. Create an NQA test instance and set its type to TCP.

      1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.

      2. Run the test-type tcp command to set the test instance type to TCP.
      3. (Optional) Run the description description command to configure a description for the test instance.

    3. Specify a destination address and a destination port number for the TCP test instance.

      The destination address and destination port number specified in this step must be the same as the ip-address and port-number specified in the nqa-server tcpconnect command on the NQA server.

      1. Run the destination-address ipv4 destAddress command to configure a destination address (i.e., IP address of the NQA server) for the test instance.
      2. (Optional) Run the destination-port port-number command to specify a destination port number for the NQA test instance.

    4. (Optional) Set optional parameters for the test instance and simulate real service flows.

      1. Run the probe-count number command to set the number of probes in a test.

      2. Run the interval { milliseconds interval | seconds interval } command to configure an interval at which test packets are sent for the NQA test instance.

      3. Run the sendpacket passroute command to configure the device to send NQA test packets without performing routing table lookup.

      4. Run the source-address ipv4 srcAddress command to specify a source IP address for NQA test packets.

      5. Run the source-port portValue command to configure a source port number for the test.

      6. Run the tos tos-value command to set a ToS value for NQA test packets.

      7. Run the ttl ttlValue command to set a TTL value for NQA test packets.

    5. (Optional) Configure test failure conditions.

      • Run the timeout time command to configure a timeout period for response packets.

        If no response packets are received within the timeout period, the probe fails.

      • Run the fail-percent percent command to configure a failure percentage for the NQA test instance.

        If the percentage of failed probes to total probes is greater than or equal to the configured failure percentage, the test is considered as a failure.

    6. (Optional) Configure NQA statistics collection.

      Run records { history number | result number }

      The maximum number of historical records and the maximum number of result records that can be saved for the NQA test instance are set.

    7. (Optional) Configure the device to send trap messages.

      1. Run the probe-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive probe failures in an NQA test reaches the specified threshold.

      2. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.

      3. Run the threshold rtd thresholdRtd command to configure a round-trip delay (RTD) threshold.
      4. Run the send-trap { all | { rtd | testfailure | probefailure | testcomplete | testresult-change }* } command to configure conditions for sending trap messages.

    8. (Optional) Run vpn-instance vpn-instance-name

      A VPN instance name is configured for the NQA test instance.

    9. Schedule the NQA test instance.

      1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.

      2. Run the start command to start the NQA test instance.

        The start command has multiple formats. Choose one of the following formats as needed.

        • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

        • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

        • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

        • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

    10. Run commit

      The configuration is committed.
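
The following is a minimal configuration sketch for a TCP test, assuming hypothetical addresses, a hypothetical port number (4000), and a hypothetical test instance name (admin tcp1); the destination address and port on the client match those specified in the nqa-server tcpconnect command on the server:

  On the NQA server:
   system-view
    nqa-server tcpconnect 10.2.2.2 4000
    commit

  On the NQA client:
   system-view
    nqa test-instance admin tcp1
     test-type tcp
     destination-address ipv4 10.2.2.2
     destination-port 4000
     start now
    commit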

Configuring a UDP Test

A UDP test can be used to measure the round-trip delay (RTD) of UDP packets exchanged between Huawei devices.

Procedure

  • Configure an NQA server for the UDP test.

    1. Run system-view

      The system view is displayed.

    2. Run nqa-server udpecho [ vpn-instance vpn-instance-name ] ip-address port-number

      The IP address and port number used to monitor UDP services are specified on the NQA server.

    3. Run commit

      The configuration is committed.

  • Configure an NQA client for the UDP test.
    1. Run system-view

      The system view is displayed.

    2. Create an NQA test instance and set its type to UDP.

      1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
      2. Run the test-type udp command to set the test instance type to UDP.
      3. (Optional) Run the description description command to configure a description for the test instance.

    3. Specify a destination IP address and a destination port number for the UDP test instance.

      1. Run the destination-address { ipv4 destAddress | ipv6 destAddress6 } command to configure a destination address (i.e., IP address of the NQA server) for the test instance.
      2. (Optional) Run the destination-port port-number command to configure a destination port number for the NQA test instance.

    4. (Optional) Set optional parameters for the test instance and simulate real service flows.

      1. Run the agetime ageTimeValue command to configure an aging time for the NQA test instance.

      2. Run the datafill fill-string command to configure the padding characters to be filled into test packets.

      3. Run the datasize datasizeValue command to set the size of the Data field in an NQA test packet.

      4. Run the probe-count number command to configure the number of probes in a test.

      5. Run the interval seconds interval command to set the interval at which NQA test packets are sent.

      6. Run the sendpacket passroute command to configure the device to send NQA test packets without performing routing table lookup.

      7. Run the source-address { ipv4 srcAddress | ipv6 srcAddr6 } command to configure a source IP address for NQA test packets.
      8. Run the source-port portValue command to configure a source port number for the test.

      9. Run the tos tos-value command to configure a ToS value for NQA test packets.

      10. Run the ttl ttlValue command to configure a TTL value for NQA test packets.

    5. (Optional) Configure probe failure conditions.

      • Run the timeout time command to configure a timeout period for response packets.

        If no response packets are received within the timeout period, the probe fails.

      • Run the fail-percent percent command to configure a failure percentage for the NQA test instance.

        If the percentage of failed probes to total probes is greater than or equal to the configured failure percentage, the test is considered as a failure.

    6. (Optional) Run records { history number | result number }

      The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

    7. (Optional) Configure the device to send trap messages.

      1. Run the probe-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive probe failures in an NQA test reaches the specified threshold.

      2. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.

      3. Run the threshold rtd thresholdRtd command to configure an RTD threshold.
      4. Run the send-trap { all | { rtd | testfailure | probefailure | testcomplete | testresult-change }* } command to configure conditions for sending trap messages.

    8. (Optional) Run vpn-instance vpn-instance-name

      A VPN instance name is configured for the NQA test instance.

    9. Schedule the test instance.

      1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.

      2. Run the start command to start the NQA test instance.

        The start command has multiple formats. Choose one of the following formats as needed:

        • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

        • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

        • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

        • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

    10. Run commit

      The configuration is committed.
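
The following is a minimal configuration sketch for a UDP test, assuming hypothetical addresses, a hypothetical port number (6000), and a hypothetical test instance name (admin udp1):

  On the NQA server:
   system-view
    nqa-server udpecho 10.2.2.2 6000
    commit

  On the NQA client:
   system-view
    nqa test-instance admin udp1
     test-type udp
     destination-address ipv4 10.2.2.2
     destination-port 6000
     start now
    commit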

Configuring an SNMP Test

An NQA SNMP test can be used to measure the communication speed between a host and an SNMP agent using UDP packets.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Create an NQA test instance and set its type to SNMP.

    Before configuring an NQA SNMP test instance, configure SNMP. The NQA SNMP test instance supports SNMPv1, SNMPv2c, and SNMPv3.

    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type snmp command to set the test instance type to SNMP.
    3. (Optional) Run the description description command to configure a description for the test instance.
  3. Run destination-address ipv4 destAddress

    A destination address (i.e., IP address of the NQA server) is configured for the test instance.

  4. (Optional) Run community read cipher community-name

    A community name is specified for the SNMP test instance.

    If a target SNMP agent runs SNMPv1 or SNMPv2c, the read community name specified using the community read cipher command must be the same as the read community name configured on the SNMP agent. Otherwise, the SNMP test will fail.

  5. (Optional) Set optional parameters for the test instance and simulate real service flows.
    1. Run the probe-count number command to set the number of probes in a test for the NQA test instance.

    2. Run the interval seconds interval command to set the interval at which NQA test packets are sent.

    3. Run the sendpacket passroute command to configure the device to send NQA test packets without performing routing table lookup.

    4. Run the source-address ipv4 srcAddress command to specify a source IP address for NQA test packets.

    5. Run the source-port portValue command to configure a source port number for the test.

    6. Run the tos tos-value command to configure a ToS value for NQA test packets.

    7. Run the ttl ttlValue command to configure a TTL value for NQA test packets.

  6. (Optional) Configure test failure conditions.
    1. Run the timeout time command to configure a timeout period for response packets.

      If no response packets are received within the timeout period, the probe fails.

    2. Run the fail-percent percent command to configure a failure percentage for the NQA test instance.

      If the percentage of failed probes to total probes is greater than or equal to the configured failure percentage, the test is considered as a failure.

  7. (Optional) Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  8. (Optional) Configure the device to send trap messages.
    1. Run the probe-failtimes failTimes command to enable the device to send traps to the NMS after the number of consecutive probe failures in an NQA test reaches the specified threshold.
    2. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.

    3. Run the threshold rtd thresholdRtd command to configure a round-trip delay (RTD) threshold.
    4. Run the send-trap { all | { rtd | testfailure | probefailure | testcomplete | testresult-change }* } command to configure conditions for sending trap messages.
  9. (Optional) Run vpn-instance vpn-instance-name

    A VPN instance name is configured for the NQA test instance.

  10. Schedule the NQA test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.

    2. Run the start command to start the NQA test instance.

      The start command has multiple formats. Choose one of the following formats as needed:

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  11. Run commit

    The configuration is committed.
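
The following is a minimal configuration sketch for an SNMP test instance, assuming that SNMP has already been configured on the target agent and using a hypothetical agent address (10.2.2.2), community name (Example@123), and test instance name (admin snmp1):

  system-view
   nqa test-instance admin snmp1
    test-type snmp
    destination-address ipv4 10.2.2.2
    community read cipher Example@123
    start now
   commit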

Configuring a Trace Test

A trace test can be used to check the connectivity and measure the packet loss rate, delay, and other indicators of an IP network hop by hop. It can also monitor the packet forwarding path.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Create an NQA test instance and set its type to trace.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type trace command to set the test instance type to trace.
    3. (Optional) Run the description description command to configure a description for the test instance.
  3. Specify the destination address and destination port number for the test instance.

    1. Run the destination-address { ipv4 destAddress | ipv6 destAddress6 } command to configure a destination address (i.e., IP address of the NQA server) for the test instance.
    2. (Optional) Run the destination-port port-number command to specify a destination port number for the NQA test instance.

  4. (Optional) Set optional parameters for the test instance and simulate real service flows.
    1. Run the agetime ageTimeValue command to set the aging time of the NQA test instance.
    2. Run the datafill fill-string command to configure the padding characters to be filled into test packets.
    3. Run the datasize datasizeValue command to set the size of the Data field in an NQA test packet.
    4. Run the probe-count number command to set the number of probes in a test for the NQA test instance.

    5. Run the sendpacket passroute command to configure the device to send NQA test packets without performing routing table lookup.

    6. Run the source-address { ipv4 srcAddress | ipv6 srcAddr6 } command to configure a source IP address for NQA test packets.
    7. Run the source-interface ifType ifNum command to specify a source interface for NQA test packets.
    8. Run the tos tos-value [ dscp ] command to configure a ToS value for NQA test packets.

    9. Run the nexthop { ipv4 ipv4Address | ipv6 ipv6Address } command to configure a next-hop address for the test instance.
    10. Run the tracert-livetime first-ttl first-ttl max-ttl max-ttl command to set a TTL value for test packets.
  5. (Optional) Run set-df

    Packet fragmentation is disabled.

    Use a trace test instance to obtain the path MTU as follows:

    Run the set-df command to disable packet fragmentation, run the datasize command to set the size of the Data field in a test packet, and then start the test instance. If the test succeeds, the size of the Data field in the sent packets is smaller than the path MTU. Keep increasing the Data field size using the datasize command until the test fails; a failure indicates that the sent packet is larger than the path MTU. The maximum packet size that can be sent without being fragmented is used as the path MTU.

  6. (Optional) Configure test failure conditions.
    1. Run the timeout time command to configure a timeout period for response packets.

      If no response packets are received within the timeout period, the probe fails.

    2. Run the tracert-hopfailtimes hopfailtimesValue command to set the maximum number of hop failures in a probe for the test instance.
  7. (Optional) Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  8. (Optional) Configure the device to send trap messages.
    1. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.

    2. Run the threshold rtd thresholdRtd command to configure a round-trip delay (RTD) threshold.
    3. Run the send-trap { all | { rtd | testfailure | testcomplete | testresult-change }* } command to configure conditions for sending trap messages.
  9. (Optional) Run vpn-instance vpn-instance-name

    A VPN instance name is configured for the NQA test instance.

  10. Schedule the NQA test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.

    2. Run the start command to start the NQA test instance.

      The start command has multiple formats. Choose one of the following formats as needed:

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  11. Run commit

    The configuration is committed.
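
The following is a minimal configuration sketch for a trace test instance, assuming a hypothetical destination address (10.2.2.2), test instance name (admin trace1), and TTL range (1 to 30):

  system-view
   nqa test-instance admin trace1
    test-type trace
    destination-address ipv4 10.2.2.2
    tracert-livetime first-ttl 1 max-ttl 30
    start now
   commit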

Configuring an ICMP Jitter Test

An Internet Control Message Protocol (ICMP) Jitter test can be used to measure the end-to-end jitter of various services.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Create an NQA test instance and set its type to ICMP Jitter.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type icmpjitter command to set the test instance type to ICMP jitter.
    3. (Optional) Run the description description command to configure a description for the test instance.
  3. Run destination-address { ipv4 destAddress | ipv6 destAddress6 }

    A destination address (i.e., IP address of the NQA server) is configured for the test instance.

  4. (Optional) Run hardware-based enable

    Hardware-based packet sending is enabled on an interface board.

    • Hardware-based packet sending on an interface board is not supported for IPv6 ICMP Jitter tests.
    • You are advised to configure hardware-based packet sending on an interface board to implement more accurate delay and jitter calculation, facilitating high-precision network monitoring.
    • After hardware-based packet sending is enabled on the involved interface board on the client, you need to run the nqa-server icmp-server [ vpn-instance vpn-instance-name ] ip-address command on the NQA server to specify the IP address used to monitor ICMP services.

  5. (Optional) Set timestamp units for the NQA test instance.

    The timestamp units need to be configured only after the hardware-based enable command is run.

    1. Run the timestamp-unit { millisecond | microsecond } command to configure a timestamp unit for the source in the NQA test instance.
    2. Run the receive-timestamp-unit { millisecond | microsecond } command to configure a timestamp unit for the destination in the NQA test instance.

      In a scenario where a Huawei device interworks with a non-Huawei device, if an ICMP jitter test in which the Huawei device functions as the source (client) is configured to measure the delay, jitter, packet loss, and other indicators on the network, you need to run this command to configure a timestamp unit for the ICMP timestamp packets returned by the destination.

      The source's timestamp unit configured using the timestamp-unit { millisecond | microsecond } command must be the same as the destination's timestamp unit configured using the receive-timestamp-unit command. If the timestamp unit is set to microseconds and the interface board's precision supported by the device is milliseconds, the device uses milliseconds as the timestamp unit.

  6. Set optional parameters for the test instance and simulate real service flows.
    1. Run the agetime ageTimeValue command to set the aging time of the NQA test instance.
    2. Run the icmp-jitter-mode { icmp-echo | icmp-timestamp } command to specify an ICMP jitter test mode for the NQA test instance.

      This function is not supported on IPv6 networks.

    3. Run the datafill fill-string command to configure the padding characters to be filled into test packets.

      This parameter can be configured only when the ICMP jitter test mode is set to icmp-echo.

    4. Run the datasize datasizeValue command to set the size of the Data field in an NQA test packet.

      This parameter can be configured only when the ICMP jitter test mode is set to icmp-echo.

    5. Run the jitter-packetnum packetNum command to configure the number of packets sent in a probe.
    6. Run the probe-count number command to set the number of probes in a test for the NQA test instance.

    7. Run the interval { milliseconds interval | seconds interval } command to set the interval at which NQA test packets are sent.

    8. Run the source-address { ipv4 srcAddress | ipv6 srcAddr6 } command to configure a source IP address for NQA test packets.
    9. Run the ttl ttlValue command to configure a TTL value for NQA test packets.

    10. Run the tos tos-value command to configure a ToS value for NQA test packets.

  7. (Optional) Configure test failure conditions.
    1. Run the timeout time command to configure a timeout period for response packets.

      If no response packets are received within the timeout period, the probe fails.

    2. Run the fail-percent percent command to configure a failure percentage for the NQA test instance.

      If the percentage of failed probes to total probes is greater than or equal to the configured failure percentage, the test is considered as a failure.

  8. (Optional) Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  9. (Optional) Configure the device to send trap messages.
    1. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.

    2. Run the threshold { owd-ds thresholdOwdDS | owd-sd thresholdOwdSD | rtd thresholdRtd | jitter-ds thresholdJitDS | jitter-sd thresholdJitSD } command to configure the thresholds for round-trip delay (RTD), one-way delay (OWD), and one-way jitter.
    3. Run the send-trap { all | { rtd | testfailure | testcomplete | owd-sd | owd-ds | jitter-sd | jitter-ds | testresult-change }* } command to configure conditions for sending trap messages.
  10. (Optional) Run vpn-instance vpn-instance-name

    A VPN instance name is configured for the NQA test instance.

  11. Schedule the NQA test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.

    2. Run the start command to start the NQA test instance.

      The start command has multiple formats. Choose one of the following formats as needed:

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  12. Run commit

    The configuration is committed.
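
The following is a minimal configuration sketch for an ICMP jitter test instance, assuming a hypothetical destination address (10.2.2.2) and test instance name (admin icmpjitter1), with 20 packets per probe sent at 20-millisecond intervals; if hardware-based packet sending is enabled, the nqa-server icmp-server command must also be run on the server as described above:

  system-view
   nqa test-instance admin icmpjitter1
    test-type icmpjitter
    destination-address ipv4 10.2.2.2
    jitter-packetnum 20
    interval milliseconds 20
    start now
   commit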

Configuring a UDP Jitter Test

A UDP jitter test can be used to measure the end-to-end jitter of various services. It can also simulate a voice test. A UDP jitter test can be used when an ICMP jitter test cannot be used due to the ICMP reply function being disabled on network devices for security purposes.

Procedure

  • Configure an NQA server for the UDP jitter test.

    1. Run system-view

      The system view is displayed.

    2. Run either of the following commands according to the IP address type:
      • To specify the IPv4 address and port number used to monitor UDP jitter services on the NQA server, run the nqa-server udpecho [ vpn-instance vpn-instance-name ] ip-address port-number command.

      • To specify the IPv6 address and port number used to monitor UDP jitter services on the NQA server, run the nqa-server udpecho [ vpn-instance vpn-instance-name ] ipv6 ipv6-address port-number command.

    3. Run commit

      The configuration is committed.

  • Configure an NQA client for the UDP jitter test.
    1. Run system-view

      The system view is displayed.

    2. (Optional) Run nqa-jitter tag-version version-number

      The packet version is set for the UDP jitter test instance.

      Packet statistics collected in version 2 are more accurate than those collected in version 1. Therefore, packet version 2 is recommended.

    3. Create an NQA test instance and set its type to UDP jitter.

      1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.

      2. Run the test-type jitter command to set the test instance type to UDP jitter.
      3. (Optional) Run the description description command to configure a description for the test instance.

    4. Set the destination address and destination port number for the UDP jitter test instance.

      1. Run the destination-address { ipv4 destAddress | ipv6 destAddress6 } command to configure a destination address (i.e., IP address of the NQA server) for the test instance.
      2. Run the destination-port port-number command to configure a destination port number for the NQA test instance.

    5. (Optional) Run hardware-based enable

      Hardware-based packet sending is enabled on an interface board.

      • Hardware-based packet sending on interface boards is not supported for IPv6 UDP jitter tests.
      • You are advised to configure hardware-based packet sending on an interface board to implement more accurate delay and jitter calculation, facilitating high-precision network monitoring.

    6. (Optional) Run timestamp-unit { millisecond | microsecond }

      A timestamp unit is configured for the NQA test instance.

      The timestamp unit can be configured only after the hardware-based enable command is run.

    7. (Optional) Configure a code type and advantage factor for a simulated voice test.

      1. Run the jitter-codec { g711a | g711u | g729a } command to configure a code type for the simulated voice test.

      2. Run the adv-factor factor-value command to configure an advantage factor for simulated voice test calculation.

    8. (Optional) Set optional parameters for the test instance and simulate real service flows.

      1. Run the datasize datasizeValue command to set the size of the Data field in an NQA test packet.

      2. Run the jitter-packetnum number command to configure the number of packets sent in a probe.

      3. Run the probe-count number command to configure the number of probes in a test.

      4. Run the interval { milliseconds interval | seconds interval } command to set the interval at which NQA test packets are sent.

      5. Run the sendpacket passroute command to configure the device to send NQA test packets without performing routing table lookup.

        This function is not supported on IPv6 networks.

      6. Run the source-address { ipv4 srcAddress | ipv6 srcAddr6 } command to configure a source IP address for NQA test packets.
      7. Run the source-port portValue command to configure a source port number for the test.
      8. Run the tos tos-value command to configure a ToS value for NQA test packets.

      9. Run the ttl ttlValue command to configure a TTL value for NQA test packets.

    9. (Optional) Configure test failure conditions.

      • Run the timeout time command to configure a timeout period for response packets.

        If no response packets are received within the timeout period, the probe fails.

      • Run the fail-percent percent command to configure a failure percentage for the NQA test instance.

        If the ratio of failed probes to total probes is greater than or equal to the configured failure percentage, the test is considered a failure.

    10. (Optional) Configure NQA statistics collection.

      Run records { history number | result number }

      The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

    11. (Optional) Configure the device to send trap messages.

      1. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.
      2. Run the threshold { owd-ds thresholdOwdDS | owd-sd thresholdOwdSD | rtd thresholdRtd } command to configure thresholds for round-trip delay (RTD) and one-way delay (OWD).
      3. Run the send-trap { all | { owd-ds | owd-sd | rtd | testfailure | testresult-change }* } command to configure conditions for sending trap messages.

    12. (Optional) Run vpn-instance vpn-instance-name

      A VPN instance name is configured for the NQA test instance.

    13. Schedule the NQA test instance.

      1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.

      2. Run the start command to start the NQA test instance.

        The start command has multiple formats. Choose one of the following formats as needed.

        • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

        • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

        • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

        • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

    14. Run commit

      The configuration is committed.
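
The following is a minimal configuration sketch of the preceding procedure. The server address 10.1.1.2, port number 6000, and instance name admin udpjitter are placeholders; optional parameters that are not shown keep their default values.

  # On the NQA server, enable the UDP echo service that the jitter test uses.
  nqa-server udpecho 10.1.1.2 6000
  commit
  # On the NQA client, create a UDP jitter test instance and start it immediately.
  nqa test-instance admin udpjitter
   test-type jitter
   destination-address ipv4 10.1.1.2
   destination-port 6000
   jitter-packetnum 20
   probe-count 3
   interval milliseconds 20
   start now
  commit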

Configuring Parameters for the Path Jitter Test

Unlike a UDP jitter test, which measures only end-to-end jitter, an NQA path jitter test measures jitter hop by hop along the path and can therefore identify the router with a high jitter value.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run nqa test-instance admin-name test-name

    An NQA test instance is created and the test instance view is displayed.

  3. Run test-type pathjitter

    The type of the test instance is configured as path jitter.

  4. Run destination-address ipv4 destAddress

    The destination IP address is configured.

  5. (Optional) Run the following commands to configure other parameters for the path jitter test:

    • Run icmp-jitter-mode { icmp-echo | icmp-timestamp }

      The mode of the path jitter test is configured.

    • Run vpn-instance vpn-instance-name

      The VPN instance to be tested is configured.

    • Run source-address ipv4 srcAddress

      The source IP address is configured.

    • Run probe-count number

      The number of test probes to be sent each time is set.

    • Run jitter-packetnum packetNum

      The number of test packets to be sent during each test is set.

      The probe-count command configures the number of probes in the jitter test, and the jitter-packetnum command configures the number of test packets sent in each probe. The product of these two values must be less than 3000.

    • Run interval seconds interval

      The interval for sending jitter test packets is set.

      A shorter interval allows the test to complete sooner. However, because the processor introduces delays when sending and receiving test packets, setting a very small interval may cause a relatively large error in the jitter statistics.

    • Run fail-percent percent

      The failure percentage for the NQA test is set.

  6. Run start

    The NQA test is started.

    The start command has multiple formats. Choose one of the following formats as needed:

    • To start the NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      The test instance is started immediately.

    • To start the NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      The test instance is started at the specified time.

    • To start the NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      The test instance is started after the specified delay.
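
The following is a minimal configuration sketch of the preceding procedure. The destination address 10.2.2.2 and the instance name admin pathjitter are placeholders; optional parameters that are not shown keep their default values.

  # Create a path jitter test instance toward the destination and start it immediately.
  nqa test-instance admin pathjitter
   test-type pathjitter
   destination-address ipv4 10.2.2.2
   icmp-jitter-mode icmp-timestamp
   probe-count 3
   jitter-packetnum 10
   start now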

Configuring Parameters for the Path MTU Test

A path MTU test obtains the maximum MTU that allows packets to be transmitted along a link without fragmentation.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run nqa test-instance admin-name test-name

    An NQA test instance is created and the test instance view is displayed.

  3. Run test-type pathmtu

    The type of the test instance is configured as path MTU.

  4. Run destination-address ipv4 destAddress

    The destination IP address is configured.

  5. (Optional) Run the following commands to configure other parameters for the path MTU test.

    • Run discovery-pmtu-max pmtu-max

      The maximum value of the path MTU test range is set.

    • Run step step

      The value of the incremental step is set for the packet length in the path MTU test.

    • Run vpn-instance vpn-instance-name

      The VPN instance to be tested is configured.

    • Run source-address ipv4 srcAddress

      The source IP address is configured.

    • Run probe-count number

      The maximum number of probe packets that are allowed to time out consecutively is configured.

  6. Run start

    The NQA test is started.

    The start command has multiple formats. Choose one of the following formats as needed:

    • To start the NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      The test instance is started immediately.

    • To start the NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      The test instance is started at the specified time.

    • To start the NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      The test instance is started after the specified delay.
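
The following is a minimal configuration sketch of the preceding procedure. The destination address 10.3.3.3, the instance name admin pathmtu, and the parameter values are placeholders chosen for illustration only.

  # Create a path MTU test instance, limit the search range, and start it immediately.
  nqa test-instance admin pathmtu
   test-type pathmtu
   destination-address ipv4 10.3.3.3
   discovery-pmtu-max 1500
   step 100
   start now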

Verifying the Configuration

After completing the test, you can check the test results.

Prerequisites

NQA test results are not displayed automatically on the terminal. To view test results, run the display nqa results command.

Procedure

  • Run the display nqa results [ collection ] [ test-instance adminName testName ] command to check NQA test results.
  • Run the display nqa results [ collection ] this command to check NQA test results in a specified NQA test instance view.
  • Run the display nqa history [ test-instance adminName testName ] command to check historical NQA test records.
  • Run the display nqa history [ this ] command to check historical statistics on NQA tests in a specified NQA test instance view.
  • Run the display nqa-server command to check the NQA server status.

Configuring NQA to Monitor an MPLS Network

NQA can be configured to monitor an MPLS network. Before doing so, familiarize yourself with the usage scenario of each test instance and complete the pre-configuration tasks.

Usage Scenario

Table 18-34 NQA test instances used to monitor an MPLS network

Test Type

Usage Scenario

LSP ping test

A label switched path (LSP) ping test can be used to check the connectivity and measure the packet loss rate, delay, and other indicators of an MPLS network from end to end.

LSP trace test

An LSP trace test can be used to check the connectivity and measure the packet loss rate, delay, and other indicators of an MPLS network hop by hop. It can also monitor the forwarding path of MPLS packets.

LSP Jitter test

An LSP jitter test can be used to measure the end-to-end jitter of services carried on an MPLS network.

Pre-configuration Tasks

Before configuring NQA to monitor an MPLS network, configure basic MPLS functions.

Configuring an LSP Ping Test

A label switched path (LSP) ping test can be used to check the connectivity and measure the packet loss rate, delay, and other indicators of an MPLS network from end to end.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Create an NQA test instance and set its type to LSP ping.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type lspping command to set the test instance type to LSP ping.
    3. (Optional) Run the description description command to configure a description for the test instance.
  3. (Optional) Run fragment enable

    MPLS packet fragmentation is enabled for the NQA test instance.

  4. Run lsp-type { ipv4 | te | bgp | srte | srbe | srte-policy }

    An LSP test type is specified for the NQA test instance.

    If the LSP test type of the NQA test instance is set to srbe, run the following commands as required:

    • Run the remote-fec ldp remoteIpAddr remoteMaskLen command to configure an IP address for a remote FEC.
    • Run the path-type bypass command to configure the device to send Echo Request packets through the bypass LSP.
    • Run the flex-algo flex-algo-id command to specify a Flex-Algo ID for an SR-MPLS BE tunnel that is a Flex-Algo tunnel.

  5. Configure related parameters according to the type of the LSP to be tested.

    • Test an LDP LSP:

      Run the destination-address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-loopback loopbackAddress } ] * command to configure a destination address (i.e., IP address of the NQA server) for the NQA test instance.

    • Test a TE tunnel:

      Run the lsp-tetunnel { tunnelName | ifType ifNum } [ hot-standby | primary ] command to configure a TE LSP interface.

    • Test a BGP tunnel:

      Run the destination-address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-loopback loopbackAddress } ] * command to configure a destination address (i.e., IP address of the NQA server) for the NQA test instance.

    • Test an SR-MPLS TE tunnel:

      Run the lsp-tetunnel { tunnelName | ifType ifNum } [ hot-standby | primary ] command to configure a TE tunnel interface.

    • Test an SR-MPLS BE tunnel:

      Run the destination-address ipv4 destAddress lsp-masklen maskLen command to configure a destination address (i.e., IP address of the NQA server) for the NQA test instance.

    • Test an SR-MPLS TE Policy:

      Run the policy { policy-name policyname | binding-sid bsid | endpoint-ip endpointip color colorid } command to configure the name, binding segment ID, endpoint IP address, and color ID of an SR-MPLS TE Policy.

  6. (Optional) Run lsp-nexthop nexthop-ip-address

    A next-hop address is specified in the scenario where load balancing is enabled on the ingress of the LSP ping test.

    If load balancing is enabled on the ingress, you can run this command to specify a next-hop address so that packets are transmitted in the specified direction.

  7. Set optional parameters for the test instance and simulate real service flows.
    1. Run the lsp-exp exp command to configure an LSP EXP value for the NQA test instance.
    2. Run the lsp-replymode { level-control-channel | no-reply | udp } command to configure a reply mode for the NQA test instance.
    3. Run the datafill fill-string command to configure the padding characters to be filled into test packets.
    4. Run the datasize datasizeValue command to set the size of the Data field in an NQA test packet.
    5. Run the probe-count number command to set the number of probes in a test for the NQA test instance.

    6. Run the interval seconds interval command to set the interval at which NQA test packets are sent.

    7. Run the source-address ipv4 srcAddress command to specify a source IP address for NQA test packets.

    8. Run the ttl ttlValue command to configure a TTL value for NQA test packets.

  8. (Optional) Configure test failure conditions.
    1. Run the timeout time command to configure a timeout period for response packets.

      If no response packets are received within the timeout period, the probe fails.

    2. Run the fail-percent percent command to configure a failure percentage for the NQA test instance.

      If the ratio of failed probes to total probes is greater than or equal to the configured failure percentage, the test is considered a failure.

  9. (Optional) Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  10. Schedule the NQA test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.

    2. Run the start command to start the NQA test instance.

      The start command has multiple formats. Choose one of the following formats as needed:

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  11. Run commit

    The configuration is committed.
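
The following is a minimal configuration sketch of the preceding procedure for an LDP LSP. The instance name admin lspping, the destination address 3.3.3.9, and the mask length 32 are placeholders and must match the LSP on your network; optional parameters that are not shown keep their default values.

  # Create an LSP ping test instance for an LDP LSP and start it immediately.
  nqa test-instance admin lspping
   test-type lspping
   lsp-type ipv4
   destination-address ipv4 3.3.3.9 lsp-masklen 32
   probe-count 3
   start now
  commit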

Configuring an LSP Trace Test

An LSP trace test can be used to check the connectivity and measure the packet loss rate, delay, and other indicators of an MPLS network hop by hop. It can also monitor the forwarding path of MPLS packets.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Create an NQA test instance and set its type to LSP trace.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type lsptrace command to set the test instance type to LSP trace.
    3. (Optional) Run the description description command to configure a description for the test instance.
  3. (Optional) Run fragment enable

    MPLS packet fragmentation is enabled for the NQA test instance.

  4. Run lsp-type { ipv4 | te | bgp | srte | srbe | srte-policy }

    An LSP test type is specified for the NQA test instance.

    If the LSP test type of the NQA test instance is set to srbe, run the following commands as required:

    • To configure an IP address for a remote FEC, run the remote-fec ldp remoteIpAddr remoteMaskLen command.
    • To configure the device to send Echo Request packets through the bypass LSP, run the path-type bypass command.
    • To specify a Flex-Algo ID for an SR-MPLS BE tunnel that is a Flex-Algo tunnel, run the flex-algo flex-algo-id command.

  5. Configure related parameters according to the type of the LSP to be tested.

    • Test an LDP LSP:

      Run the destination-address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-loopback loopbackAddress } ] * command to configure a destination address (i.e., IP address of the NQA server) for the NQA test instance.

    • Test a TE tunnel:

      Run the lsp-tetunnel { tunnelName | ifType ifNum } [ hot-standby | primary ] command to configure a TE tunnel interface.

    • Test a BGP tunnel:

      Run the destination-address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-loopback loopbackAddress } ] * command to configure a destination address (i.e., IP address of the NQA server) for the NQA test instance.

    • Test an SR-MPLS TE tunnel:

      Run the lsp-tetunnel { tunnelName | ifType ifNum } [ hot-standby | primary ] command to configure a TE tunnel interface.

    • Test an SR-MPLS BE tunnel:

      Run the destination-address ipv4 destAddress lsp-masklen maskLen command to configure a destination address (i.e., IP address of the NQA server) for the NQA test instance.

    • Test an SR-MPLS TE Policy:

      Run the policy { policy-name policyname | binding-sid bsid | endpoint-ip endpointip color colorid } command to configure the name, binding segment ID, endpoint IP address, and color ID of an SR-MPLS TE policy.

  6. (Optional) Set optional parameters for the test instance and simulate real service flows.
    1. Run the lsp-exp exp command to configure an LSP EXP value for the NQA test instance.
    2. Run the lsp-replymode { level-control-channel | no-reply | udp } command to configure a reply mode for the NQA test instance.
    3. Run the probe-count number command to set the number of probes in a test for the NQA test instance.

    4. Run the source-address ipv4 srcAddress command to specify a source IP address for NQA test packets.

    5. Run the tracert-livetime first-ttl first-ttl max-ttl max-ttl command to set a TTL value for test packets.
  7. (Optional) Configure test failure conditions.
    1. Run the timeout time command to configure a timeout period for response packets.

      If no response packets are received within the timeout period, the probe fails.

    2. Run the tracert-hopfailtimes hopfailtimesValue command to set the maximum number of hop failures in a probe for the test instance.
  8. (Optional) Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  9. Schedule the NQA test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.

    2. Run the start command to start the NQA test instance.

      The start command has multiple formats. Choose one of the following formats as needed:

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  10. Run commit

    The configuration is committed.
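
The following is a minimal configuration sketch of the preceding procedure for an LDP LSP. The instance name admin lsptrace and the destination address 3.3.3.9 with mask length 32 are placeholders and must match the LSP on your network.

  # Create an LSP trace test instance for an LDP LSP and start it immediately.
  nqa test-instance admin lsptrace
   test-type lsptrace
   lsp-type ipv4
   destination-address ipv4 3.3.3.9 lsp-masklen 32
   tracert-livetime first-ttl 1 max-ttl 30
   start now
  commit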

Configuring an LSP Jitter Test

An LSP jitter test can be used to measure the end-to-end jitter of services carried on an MPLS network.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Create an NQA test instance and set its type to LSP jitter.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type lspjitter command to set the test instance type to LSP jitter.
    3. (Optional) Run the description description command to configure a description for the test instance.
  3. (Optional) Run fragment enable

    MPLS packet fragmentation is enabled for the NQA test instance.

  4. Run lsp-type { ipv4 | te }

    An LSP test type is specified for the NQA test instance.

  5. Configure the destination address or tunnel interface based on the type of the LSP to be tested.

    • Test an LDP LSP:

      Run the destination-address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-loopback loopbackAddress } ] * command to configure a destination address (i.e., IP address of the NQA server) for the NQA test instance.

    • Test a TE tunnel:

      Run the lsp-tetunnel { tunnelName | ifType ifNum } [ hot-standby | primary ] command to configure a TE tunnel interface.

  6. (Optional) Set optional parameters for the test instance and simulate real service flows.
    1. Run the lsp-exp exp command to configure an LSP EXP value for the NQA test instance.
    2. Run the lsp-replymode { level-control-channel | no-reply | udp } command to configure a reply mode for the NQA test instance.
    3. Run the datafill fill-string command to configure the padding characters to be filled into test packets.
    4. Run the datasize datasizeValue command to set the size of the Data field in an NQA test packet.
    5. Run the jitter-packetnum packetNum command to configure the number of packets to be sent in each probe.
    6. Run the probe-count number command to set the number of probes in a test for the NQA test instance.

    7. Run the interval seconds interval command to configure the interval at which NQA test packets are sent.
    8. Run the source-address ipv4 srcAddress command to specify a source IP address for NQA test packets.

    9. Run the ttl ttlValue command to configure a TTL value for NQA test packets.

  7. (Optional) Configure test failure conditions.
    1. Run the timeout time command to configure a timeout period for response packets.

      If no response packets are received within the timeout period, the probe fails.

    2. Run the fail-percent percent command to configure a failure percentage for the NQA test instance.

      If the ratio of failed probes to total probes is greater than or equal to the configured failure percentage, the test is considered a failure.

  8. (Optional) Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  9. Schedule the NQA test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.

    2. Run the start command to start the NQA test instance.

      The start command has multiple formats. Choose one of the following formats as needed:

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  10. Run commit

    The configuration is committed.
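
The following is a minimal configuration sketch of the preceding procedure for an LDP LSP. The instance name admin lspjitter and the destination address 3.3.3.9 with mask length 32 are placeholders and must match the LSP on your network; optional parameters that are not shown keep their default values.

  # Create an LSP jitter test instance for an LDP LSP and start it immediately.
  nqa test-instance admin lspjitter
   test-type lspjitter
   lsp-type ipv4
   destination-address ipv4 3.3.3.9 lsp-masklen 32
   jitter-packetnum 20
   probe-count 3
   start now
  commit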

Verifying the Configuration

After completing the test, you can check the test results.

Prerequisites

NQA test results are not displayed automatically on the terminal. To view test results, run the display nqa results command.

Procedure

  • Run the display nqa results [ collection ] [ test-instance adminName testName ] command to check NQA test results.
  • Run the display nqa results [ collection ] this command to check NQA test results in a specified NQA test instance view.
  • Run the display nqa history [ test-instance adminName testName ] command to check historical NQA test records.
  • Run the display nqa history [ this ] command to check historical statistics on NQA tests in a specified NQA test instance view.

Configuring NQA to Monitor a VPN

Before configuring NQA to monitor a virtual private network (VPN), familiarize yourself with the usage scenario of each test instance and complete the pre-configuration tasks.

Usage Scenario

Table 18-35 NQA tests used to monitor VPNs

Test Type

Usage Scenario

PWE3 ping test

A PWE3 ping test helps check the pseudo wire (PW) connectivity and measure the packet loss rate, delay, and other indicators of a virtual private wire service (VPWS) network.

VPLS PW ping test

A virtual private local area network service (VPLS) pseudo wire (PW) ping test can be used to check the PW connectivity and measure the packet loss rate, delay, and other indicators of a VPLS network.

VPLS MAC ping test

A VPLS MAC ping test can be used to check the connectivity of Layer 2 forwarding links on a VPLS network.

PWE3 trace test

A Pseudowire Emulation Edge-to-Edge (PWE3) trace test can be used to check the pseudo wire (PW) connectivity and measure the packet loss rate, delay, and other indicators of a virtual private wire service (VPWS) network. It can also be used to check the forwarding path of test packets.

VPLS PW trace test

A VPLS PW trace test can be used to check the pseudo wire (PW) connectivity and measure the packet loss rate, delay, and other indicators of a VPLS network. It can also be used to check the forwarding path of test packets.

Pre-configuration Tasks

Before you configure NQA to check VPNs, configure basic VPN functions.

Configuring a PWE3 Ping Test

A PWE3 ping test helps check the pseudo wire (PW) connectivity and measure the packet loss rate, delay, and other indicators of a virtual private wire service (VPWS) network.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Create an NQA test instance and set its type to PWE3 ping.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type pwe3ping command to set the test instance type to PWE3 ping.
    3. (Optional) Run the description description command to configure a description for the test instance.
  3. (Optional) Run fragment enable

    MPLS packet fragmentation is enabled for the NQA test instance.

  4. Set parameters for the L2VPN to be tested.
    1. Run the vc-type ldp command to set the VC protocol type to LDP.
    2. Run the local-pw-type pwTypeValue command to configure an encapsulation type for the local PW.
    3. Run the label-type { control-word | { { label-alert | normal } [ no-control-word ] } } command to configure a packet encapsulation type.
    4. Run the local-pw-id pwIdValue command to configure a local PW ID.
    5. (Optional) Run the peer-address peeraddress command to configure the peer IP address.
    6. (Optional) Run the ttl-copymode { pipe | uniform } command to specify a TTL propagation mode.
  5. (Optional) Configure information about the remote PE when a multi-segment PW (MS-PW) is to be tested.

    An MS-PW can be tested only after you specify control-word or normal.

    1. Run the remote-pw-id pwIdValue command to configure a PW ID for the remote PE.
    2. Run the destination-address ipv4 destAddress command to configure an IP address for the remote PE.
    3. (Optional) Run the sender-address ipv4 ip-address command to configure a source IP address for the public network session between the device and the remote PE. This IP address is usually that of a superstratum provider edge (SPE), which switches and forwards adjacent labels, or of a user-end provider edge (UPE), which is an edge device on a backbone network.

      The sender-address command needs to be configured only when a Huawei device interworks with a non-Huawei device.

  6. Set optional parameters for the test instance and simulate real service flows.
    1. Run the lsp-exp exp command to configure an LSP EXP value for the NQA test instance.
    2. Run the lsp-replymode { level-control-channel | no-reply | udp } command to configure a reply mode for the NQA test instance.
    3. Run the datafill fill-string command to configure the padding characters to be filled into test packets.
    4. Run the datasize datasizeValue command to set the size of the Data field in an NQA test packet.
    5. Run the probe-count number command to set the number of probes in a test for the NQA test instance.
    6. Run the interval seconds interval command to set the interval at which NQA test packets are sent.
    7. Run the ttl ttlValue command to configure a TTL value for NQA test packets.
  7. (Optional) Configure test failure conditions.
    1. Run the timeout time command to configure a timeout period for response packets.
    2. Run the fail-percent percent command to configure a failure percentage for the NQA test instance.
  8. Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  9. (Optional) Configure the device to send trap messages.

    1. Run the probe-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive probe failures in an NQA test reaches the specified threshold.

    2. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.

    3. Run the threshold rtd thresholdRtd command to configure a round-trip delay (RTD) threshold.
    4. Run the send-trap { all | { rtd | testfailure | probefailure | testcomplete | testresult-change }* } command to configure conditions for sending trap messages.

  10. Schedule the test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.
    2. Run the start command to start an NQA test instance.

      The start command has multiple formats. Choose one of the following as needed.

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  11. Run commit

    The configuration is committed.
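
The following is a minimal configuration sketch of the preceding procedure for a single-segment PW. The instance name admin pwe3ping, the PW ID 100, and the encapsulation type ethernet are placeholders and must match the PW configured on the local PE; optional parameters that are not shown keep their default values.

  # Create a PWE3 ping test instance for a single-segment PW and start it immediately.
  nqa test-instance admin pwe3ping
   test-type pwe3ping
   vc-type ldp
   local-pw-type ethernet
   label-type control-word
   local-pw-id 100
   start now
  commit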

Configuring a PWE3 Trace Test

A Pseudowire Emulation Edge-to-Edge (PWE3) trace test can be used to check the pseudo wire (PW) connectivity and measure the packet loss rate, delay, and other indicators of a virtual private wire service (VPWS) network. It can also be used to check the forwarding path of test packets.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Create an NQA test instance and set its type to PWE3 trace.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type pwe3trace command to set the test instance type to PWE3 trace.
    3. (Optional) Run the description description command to configure a description for the test instance.
  3. (Optional) Run fragment enable

    MPLS packet fragmentation is enabled for the NQA test instance.

  4. Set parameters for the Layer 2 virtual private network (L2VPN) to be tested.
    1. Run the vc-type ldp command to set the VC protocol type to LDP.
    2. Run the local-pw-type pwTypeValue command to configure a PW encapsulation type for the local PE.
    3. Run the label-type { control-word | {{ label-alert | normal } [ no-control-word ] } } command to configure a packet encapsulation type.
    4. Run the lsp-version { rfc4379 | ptn-mode } command to specify a protocol for the LSP test instance.
    5. Run the local-pw-id pwIdValue command to configure a PW ID for the local PE.
    6. (Optional) Run the peer-address peeraddress command to configure the peer IP address.
    7. (Optional) Run the ttl-copymode { pipe | uniform } command to specify a TTL propagation mode.
  5. (Optional) Run destination-address ipv4 destAddress

    An IP address is configured for the remote PE.

    To monitor a multi-segment PW, you need to configure remote PE information. An MS-PW can be tested only after you specify control-word or normal.

  6. Set optional parameters for the test instance and simulate real service flows.
    1. Run the lsp-exp exp command to configure an LSP EXP value for the NQA test instance.
    2. Run the lsp-replymode { level-control-channel | no-reply | udp } command to configure a reply mode for the NQA test instance.
    3. Run the probe-count number command to set the number of probes in a test for the NQA test instance.
    4. Run the tracert-livetime first-ttl first-ttl max-ttl max-ttl command to configure the lifetime of the NQA test instance.
  7. (Optional) Configure test failure conditions.
    1. Run the timeout time command to configure a timeout period for response packets.

      If no response packets are received within the timeout period, the probe fails.

    2. Run the tracert-hopfailtimes hopfailtimesValue command to set the maximum number of hop failures in a probe for the test instance.
  8. Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  9. (Optional) Configure the device to send trap messages.

    1. Run the probe-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive probe failures in an NQA test reaches the specified threshold.

    2. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.

    3. Run the threshold rtd thresholdRtd command to configure a round-trip delay (RTD) threshold.
    4. Run the send-trap { all | { rtd | testfailure | probefailure | testcomplete | testresult-change }* } command to configure conditions for sending trap messages.

  10. Schedule the test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.
    2. Run the start command to start an NQA test instance.

      The start command has multiple formats. Choose one of the following as needed.

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  11. Run commit

    The configuration is committed.
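
The following is a minimal configuration sketch of the preceding procedure for a single-segment PW. The instance name admin pwe3trace, the PW ID 100, and the encapsulation type ethernet are placeholders and must match the PW configured on the local PE.

  # Create a PWE3 trace test instance for a single-segment PW and start it immediately.
  nqa test-instance admin pwe3trace
   test-type pwe3trace
   vc-type ldp
   local-pw-type ethernet
   label-type control-word
   local-pw-id 100
   tracert-livetime first-ttl 1 max-ttl 30
   start now
  commit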

Configuring a VPLS PW Ping Test

A virtual private local area network service (VPLS) pseudo wire (PW) ping test can be used to check the PW connectivity and measure the packet loss rate, delay, and other indicators of a VPLS network.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Create an NQA test instance and set the test instance type to VPLS PW ping.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type vplspwping command to set the test instance type to VPLS PW ping.
    3. (Optional) Run the description description command to configure a description for the test instance.
  3. (Optional) Run fragment enable

    MPLS packet fragmentation is enabled for the NQA test instance.

  4. Set parameters for the VPLS network to be tested. Specifically:
    1. Run the vc-type ldp command to set the VC protocol type to LDP.
    2. Run the vsi vsi-name command to configure the name of a VSI to be tested.
    3. Run the destination-address ipv4 destAddress command to configure an IP address for the remote PE.
  5. (Optional) Configure information about the remote PE when an MS-PW is to be tested.

    An MS-PW can be monitored only after you specify control-word.

    1. Run the label-type { label-alert | control-word } command to configure a packet encapsulation type.
    2. (Optional) Run the local-pw-id pwIdValue command to configure a PW ID for the local PE.

      If the VSI configured using the vsi vsi-name command has a specified negotiation-vc-id, the local-pw-id pwIdValue command must be run.

    3. Run the remote-pw-id pwIdValue command to configure a PW ID for the remote PE.
    4. (Optional) Run the sender-address ipv4 ip-address command to configure a source IP address for the public network session between the device and the remote PE. This IP address is usually that of a superstratum provider edge (SPE), which switches and forwards adjacent labels, or of a user-end provider edge (UPE), which is an edge device on a backbone network.

      The sender-address command needs to be configured only when a Huawei device interworks with a non-Huawei device.

  6. (Optional) Set optional parameters for the test instance and simulate real service flows.
    1. Run the lsp-exp exp command to configure an LSP EXP value for the NQA test instance.
    2. Run the lsp-replymode { level-control-channel | no-reply | udp } command to configure a reply mode for the NQA test instance.
    3. Run the datafill fill-string command to configure the padding characters to be filled into test packets.
    4. Run the datasize datasizeValue command to set the size of the Data field in an NQA test packet.
    5. Run the probe-count number command to set the number of probes in a test for the NQA test instance.
    6. Run the interval seconds interval command to set the interval at which NQA test packets are sent.
    7. Run the ttl ttlValue command to configure a TTL value for NQA test packets.
  7. (Optional) Configure test failure conditions.
    1. Run the timeout time command to configure a timeout period for response packets.
    2. Run the fail-percent percent command to configure a failure percentage for the NQA test instance.
  8. (Optional) Configure NQA statistics collection.

    Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  9. (Optional) Configure the device to send trap messages.

    1. Run the probe-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive probe failures in an NQA test reaches the specified threshold.

    2. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.

    3. Run the threshold rtd thresholdRtd command to configure a round-trip delay (RTD) threshold.
    4. Run the send-trap { all | { rtd | testfailure | probefailure | testcomplete | testresult-change }* } command to configure conditions for sending trap messages.

  10. Schedule the NQA test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.
    2. Run the start command to start the NQA test instance.

      The start command has multiple formats. Choose one of the following as needed.

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  11. Run commit

    The configuration is committed.
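
The following is a minimal configuration sketch of the preceding procedure. The instance name admin vplspwping, the VSI name company1, and the remote PE address 3.3.3.9 are placeholders and must match the VPLS configuration on your network.

  # Create a VPLS PW ping test instance and start it immediately.
  nqa test-instance admin vplspwping
   test-type vplspwping
   vc-type ldp
   vsi company1
   destination-address ipv4 3.3.3.9
   start now
  commit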

Configuring a VPLS PW Trace Test

A VPLS PW trace test can be used to check the pseudo wire (PW) connectivity and measure the packet loss rate, delay, and other indicators of a VPLS network. It can also be used to check the forwarding path of test packets.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Create an NQA test instance and set the test instance type to VPLS PW trace.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type vplspwtrace command to set the test instance type to VPLS PW trace.
    3. (Optional) Run the description description command to configure a description for the test instance.
  3. (Optional) Run fragment enable

    MPLS packet fragmentation is enabled for the NQA test instance.

  4. Set parameters for the VPLS network to be tested. Specifically:
    1. Run the vc-type ldp command to set the VC protocol type to LDP.
    2. Run the vsi vsi-name command to configure the name of a VSI to be tested.
    3. Run the destination-address ipv4 destAddress command to configure an IP address for the remote PE.
  5. (Optional) Configure information about the remote PE when a multi-segment PW is to be tested.

    An MS-PW can be monitored only after you specify control-word.

    1. Run the label-type { label-alert | control-word } command to configure a packet encapsulation type.
    2. (Optional) Run the local-pw-id pwIdValue command to configure a PW ID for the local PE.

      If the VSI configured using the vsi vsi-name command has a specified negotiation-vc-id, the local-pw-id pwIdValue command must be run.

    3. (Optional) Run the ttl-copymode { pipe | uniform } command to specify a TTL propagation mode.
  6. (Optional) Set optional parameters for the test instance and simulate real service flows.
    1. Run the lsp-exp exp command to configure an LSP EXP value for the NQA test instance.
    2. Run the lsp-replymode { level-control-channel | no-reply | udp } command to configure a reply mode for the NQA test instance.
    3. Run the probe-count number command to set the number of probes in a test for the NQA test instance.
    4. Run the tracert-livetime first-ttl first-ttl max-ttl max-ttl command to configure the lifetime of the NQA test instance.
  7. (Optional) Configure test failure conditions.
    1. Run the timeout time command to configure a timeout period for response packets.

      If no response packets are received within the timeout period, the probe fails.

    2. Run the tracert-hopfailtimes hopfailtimesValue command to set the maximum number of hop failures in a probe for the test instance.
  8. (Optional) Configure NQA statistics collection.

    Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  9. (Optional) Configure the device to send trap messages.

    1. Run the probe-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive probe failures in an NQA test reaches the specified threshold.

    2. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.

    3. Run the threshold rtd thresholdRtd command to configure a round-trip delay (RTD) threshold.
    4. Run the send-trap { all | { rtd | testfailure | probefailure | testcomplete | testresult-change }* } command to configure conditions for sending trap messages.

  10. (Optional) Run lsp-path full-display

    All devices along the LSP, including P devices, are displayed in the NQA test result.

  11. Schedule the NQA test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.
    2. Run the start command to start the NQA test instance.

      The start command has multiple formats. Choose one of the following as needed.

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  12. Run commit

    The configuration is committed.
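
The following is a minimal configuration sketch of the preceding procedure. The instance name admin vplspwtrace, the VSI name company1, and the remote PE address 3.3.3.9 are placeholders and must match the VPLS configuration on your network.

  # Create a VPLS PW trace test instance and start it immediately.
  nqa test-instance admin vplspwtrace
   test-type vplspwtrace
   vc-type ldp
   vsi company1
   destination-address ipv4 3.3.3.9
   tracert-livetime first-ttl 1 max-ttl 30
   start now
  commit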

Configuring a VPLS MAC Ping Test

A VPLS MAC ping test can be used to check the connectivity of Layer 2 forwarding links on a VPLS network.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Create an NQA test instance and set its type to VPLS MAC ping.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type vplsping command to set the test instance type to VPLS MAC ping.
    3. (Optional) Run the description description command to configure a description for the test instance.
  3. (Optional) Run fragment enable

    MPLS packet fragmentation is enabled for the NQA test instance.

  4. Set parameters for the VPLS network to be tested. Specifically:
    1. Run the vsi vsi-name command to configure the name of a VSI to be tested.
    2. Run the destination-address mac macAddress command to configure the MAC address associated with the VSI.
    3. (Optional) Run the vlan vlan-id command to configure a VLAN ID for the NQA test instance.
  5. (Optional) Set optional parameters for the test instance and simulate real service flows.
    1. Run the lsp-exp exp command to configure an LSP EXP value for the NQA test instance.
    2. Run the lsp-replymode { no-reply | udp | udp-via-vpls } command to configure a reply mode for the NQA test instance.
    3. Run the datafill fill-string command to configure the padding characters to be filled into test packets.
    4. Run the datasize datasizeValue command to set the size of the Data field in an NQA test packet.
    5. Run the probe-count number command to set the number of probes in a test for the NQA test instance.
    6. Run the interval seconds interval command to set the interval at which NQA test packets are sent.
    7. Run the ttl ttlValue command to configure a TTL value for NQA test packets.
  6. (Optional) Configure test failure conditions and enable the function to send traps to the NMS upon test failures.
    1. Run the timeout time command to configure a timeout period for response packets.
    2. Run the fail-percent percent command to configure a failure percentage for the NQA test instance.
    3. Run the probe-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive probe failures in an NQA test reaches the specified threshold.
    4. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.
    5. Run the threshold rtd thresholdRtd command to configure a round-trip delay (RTD) threshold.
    6. Run the send-trap { all | { rtd | testfailure | probefailure | testcomplete | testresult-change }* } command to configure conditions for sending trap messages.
  7. (Optional) Configure the NQA statistics function.

    Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  8. (Optional) Run agetime ageTimeValue

    The aging time of the NQA test instance is set.

  9. Schedule the NQA test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.
    2. Run the start command to start the NQA test instance.

      The start command has multiple formats. Choose one of the following as needed.

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  10. Run commit

    The configuration is committed.
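
The following is a minimal configuration sketch of the preceding procedure. The instance name admin vplsping, the VSI name company1, and the destination MAC address 00e0-fc12-3456 are placeholders and must match the VSI and MAC address on your network.

  # Create a VPLS MAC ping test instance and start it immediately.
  nqa test-instance admin vplsping
   test-type vplsping
   vsi company1
   destination-address mac 00e0-fc12-3456
   probe-count 3
   start now
  commit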

Verifying the NQA Configuration

After the test is complete, verify the test results.

Prerequisites

NQA test results are not displayed automatically on the terminal. To view test results, run the display nqa results command.

Procedure

  • Run the display nqa results [ collection ] [ test-instance adminName testName ] command to check NQA test results.

Configuring NQA to Monitor a Layer 2 Network

Before configuring NQA to monitor a Layer 2 network, familiarize yourself with the usage scenario of each test instance and complete the pre-configuration tasks.

Usage Scenario

Table 18-36 Usage scenario for monitoring a Layer 2 network using NQA

Test Type

Usage Scenario

MAC ping test

A MAC ping test can be used to check the connectivity and measure the packet loss, delay, and other indicators of the link between any two devices on a Layer 2 network, facilitating fault locating.

Pre-configuration Tasks

Before configuring NQA to check a Layer 2 network, complete basic configurations of the Layer 2 network.

Configuring a MAC Ping Test

A MAC ping test can be used to check the connectivity and measure the packet loss, delay, and other indicators of the link between any two devices on a Layer 2 network, facilitating fault locating.

Procedure

  1. Run system-view

    The system view is displayed.

  2. (Optional) Run cfm { lbm | ltm | gmac-ltm } receive disable

    The device is disabled from receiving loopback messages (LBMs) or linktrace messages (LTMs).

  3. Create an NQA test instance and set its type to MAC ping.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type macping command to set the test instance type to MAC ping.
    3. (Optional) Run the description description command to configure a description for the test instance.
  4. Configure the MEP ID, MD name, and MA name based on the MAC ping type.
    1. Run the mep mep-id mep-id command to configure a local MEP ID.
    2. Run the md md-name ma ma-name command to specify the names of the MD and MA for sending NQA test packets.
  5. Perform either of the following steps to configure a destination address for the MAC ping test:

    • Run the destination-address mac macAddress command to configure a destination MAC address.

      You can run the display cfm remote-mep command to query the destination MAC address.

    • Run the destination-address remote-mep mep-id remoteMepID command to configure an RMEP ID.

      If the destination address type is remote-mep, you must configure the mapping between the remote MEP and MAC address first.

  6. (Optional) Set optional parameters for the test instance and simulate real service flows.
    1. Run the datasize datasizeValue command to configure the size of the Data field in an NQA test packet.
    2. Run the probe-count number command to set the number of probes in a test for the NQA test instance.
    3. Run the interval seconds interval command to set the interval at which NQA test packets are sent.
  7. (Optional) Configure test failure conditions and enable the function to send traps to the NMS upon test failures.
    1. Run the timeout time command to configure a timeout period for response packets.
    2. Run the fail-percent percent command to configure a failure percentage for the NQA test instance.
    3. Run the probe-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive probe failures in an NQA test reaches the specified threshold.
    4. Run the test-failtimes failTimes command to configure the device to send trap messages to the NMS after the number of consecutive NQA test failures reaches the specified threshold.
    5. Run the threshold rtd thresholdRtd command to configure a round-trip delay (RTD) threshold.
    6. Run the send-trap { all | { rtd | testfailure | probefailure | testcomplete | testresult-change }* } command to configure conditions for sending trap messages.
    7. Run the jitter-packetnum packetNum command to configure the number of test packets to be sent in each test.
  8. (Optional) Configure NQA statistics collection.

    Run records { history number | result number }

    The maximum numbers of historical records and test results that can be saved for the NQA test instance are configured.

  9. (Optional) Run agetime ageTimeValue

    The aging time of the NQA test instance is set.

  10. Schedule the NQA test instance.
    1. (Optional) Run the frequency frequencyValue command to configure the interval at which the NQA test instance is automatically executed.
    2. Run the start command to start an NQA test.

      The start command has multiple formats. Choose one of the following as needed.

      • To start an NQA test instance immediately, run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance after a specified delay, run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.

      • To start an NQA test instance at a specified time every day, run the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ] command.

  11. Run commit

    The configuration is committed.
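
The following minimal sketch pulls the mandatory steps together. The instance name (admin macping), MEP ID, MD/MA names, and RMEP ID are placeholders, and CFM (MD, MA, and MEPs) is assumed to be already configured.

    <HUAWEI> system-view
    [~HUAWEI] nqa test-instance admin macping
    [*HUAWEI-nqa-admin-macping] test-type macping
    [*HUAWEI-nqa-admin-macping] mep mep-id 1
    [*HUAWEI-nqa-admin-macping] md md1 ma ma1
    [*HUAWEI-nqa-admin-macping] destination-address remote-mep mep-id 2
    [*HUAWEI-nqa-admin-macping] start now
    [*HUAWEI-nqa-admin-macping] commit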

Verifying the NQA Configuration

After the test is complete, verify the test results.

Prerequisites

NQA test results are not displayed automatically on the terminal. To view test results, run the display nqa results command.

Procedure

  • Run the display nqa results [ collection ] [ test-instance adminName testName ] command to check NQA test results.
  • Run the display nqa results [ collection ] this command to view NQA test results in a specified NQA test instance view.
  • Run the display nqa history [ test-instance adminName testName ] command to check historical NQA test records.
  • Run the display nqa history [ this ] command to check historical statistics on NQA tests in a specified NQA test instance view.

Configuring an NQA Test Group

This section describes how to configure an NQA test group to monitor the status of multiple links by binding it to multiple NQA test instances.

Prerequisites

NQA test instances have been configured. Currently, an NQA test group can be bound to only ICMP and TCP test instances.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run nqa group group-name

    An NQA test group is created, and the NQA group view is displayed.

  3. Run operator { and | or }

    The operation type between test instances in the NQA test group is set to AND or OR.

    By default, the operation type between test instances is OR.

  4. (Optional) Run description string

    A description is configured for the NQA test group.

  5. Run nqa test-instance admin-name test-name

    The NQA test group is bound to a test instance.

  6. Run commit

    The configuration is committed.
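
The following minimal sketch binds a test group to two existing ICMP test instances. The group name, instance names, and prompts are illustrative placeholders.

    <HUAWEI> system-view
    [~HUAWEI] nqa group group1
    [*HUAWEI-nqa-group-group1] operator and
    [*HUAWEI-nqa-group-group1] nqa test-instance admin icmp1
    [*HUAWEI-nqa-group-group1] nqa test-instance admin icmp2
    [*HUAWEI-nqa-group-group1] commit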

Result

Run the display nqa group [ group-name ] command to check the test results of the NQA test group.

Follow-up Procedure

After the preceding configurations are complete, the test results of the NQA test group can be reported to the static route module to control the advertisement of static routes. For details, see Configuring NQA Group for IPv4 Static Route or Configuring NQA Group for IPv6 Static Route.

Configuring an RFC 2544 Generalflow Test Instance

This section describes how to configure a generalflow test instance to monitor the performance of interconnected network devices.

Context

An NQA generalflow test is a standard, UDP-based traffic testing method for evaluating network performance, in compliance with RFC 2544. It can be used in various networking scenarios that have different packet formats.

Before a network service cutover, an NQA generalflow test helps customers evaluate whether network performance meets pre-designed performance requirements. An NQA generalflow test has the following advantages:

  • Enables a device to send simulated service packets to itself before services are deployed on the device.

    Unlike generalflow tests, existing methods can be used only after services have been deployed on the network. If no services are deployed, dedicated testers must be used to send and receive test packets.

  • Uses standard methods and procedures that comply with RFC 2544, facilitating network performance comparison between different vendors.

A generalflow test measures the following performance indicators:

  • Throughput: maximum rate at which packets are sent without loss.
  • Packet loss rate: percentage of discarded packets among all sent packets.

  • Delay: difference between the time when a device sends a packet and the time when the packet is looped back to the device. The delay includes the period during which a forwarding device processes the packet.

A generalflow test can be used in the following scenarios:

  • Layer 2: native Ethernet, EVPN, and L2VPN scenarios

    On the network shown in Figure 18-95, an initiator and a reflector perform a generalflow test to measure the performance of end-to-end services between two user-to-network interfaces (UNIs).

    Figure 18-95 Generalflow test in a Layer 2 scenario

  • Layer 3: native IP and L3VPN scenarios

    Layer 3 networking is similar to Layer 2 networking.

  • L2VPN accessing L3VPN: VLL accessing L3VPN scenario

    Figure 18-96 Generalflow test in an L2VPN accessing L3VPN scenario

    In the L2VPN accessing L3VPN networking shown in Figure 18-96, the initiator and reflector can reside in different locations to represent different scenarios.
    • If the initiator and reflector reside in locations 1 and 5 (or 5 and 1), respectively, or the initiator and reflector reside in locations 4 and 6 (or 6 and 4), respectively, it is a native Ethernet scenario.

    • If the initiator and reflector reside in locations 2 and 3 (or 3 and 2), respectively, it is a native IP scenario.

    • If the initiator resides in location 3 and the reflector in location 1, or the initiator resides in location 2 and the reflector in location 4, it is similar to an IP gateway scenario, and the simulated IP address must be configured on the L2VPN device.

    • If the initiator and reflector reside in locations 1 and 2 (or 2 and 1), respectively, or the initiator and reflector reside in locations 3 and 4 (or 4 and 3), respectively, it is an IP gateway scenario.

    • If the initiator resides in location 1 and the reflector in location 4, the initiator resides in location 1 and the reflector in location 3, or the initiator resides in location 4 and the reflector in location 2, it is an L2VPN accessing L3VPN scenario. In this scenario, the destination IP and MAC addresses and the source IP address must be specified on the initiator, and the destination IP address for receiving test flows must be specified on the reflector. If the initiator resides on the L2VPN, the simulated IP address must be specified as the source IP address.

  • IP gateway scenario:

    On the network shown in Figure 18-97, Layer 3 services on the user (CE) side are transmitted to the IP gateway through a Layer 2 network.

    Figure 18-97 Generalflow test in Layer 2 access to Layer 3 networking

Pre-configuration Tasks

Before configuring an NQA generalflow test, complete the following tasks:

  • Layer 2:
    • In a native Ethernet scenario, configure reachable Layer 2 links between the initiator and reflector.
    • In an L2VPN scenario, configure reachable links between CEs on both ends of an L2VPN connection.
    • In an EVPN scenario, configure reachable links between CEs on both ends of an EVPN connection.
  • Layer 3:
    • In a native IP scenario, configure reachable IP links between the initiator and reflector.
    • In an L3VPN scenario, configure reachable links between CEs on both ends of an L3VPN connection.
  • L2VPN accessing L3VPN scenario: configure reachable links between the L2VPN and L3VPN.
  • IP gateway scenario: configure reachable Layer 2 links between an IP gateway and the reflector.

Configuring a Reflector

This section describes how to configure a reflector, which loops traffic to an initiator. You can set reflector parameters based on each scenario.

Context

On the network shown in Figure 18-95 of "Configuring an RFC 2544 Generalflow Test Instance", the following two roles are involved in a generalflow test:

  • Initiator: sends simulated service traffic to a reflector.
  • Reflector: loops the service traffic to the initiator.

The reflector can reflect all packets on a reflector interface or the packets matching filter criteria to the initiator. The filter criteria include a destination unicast MAC address or a port number.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Configure the reflector. The reflector settings vary according to usage scenarios.

    • The reflector ID must be unique on a local node.
    • The aging time can be set for a reflector.

    • Any scenario in which a reflector loops all packets

      Run the nqa reflector reflector-id interface interface-type interface-number [ exclusive ] [ exchange-port ] command.

      On the network shown in Figure 18-95 of "Configuring an RFC 2544 Generalflow Test Instance", UNI-B is used as the reflector interface.

    • Layer 2

      Run the nqa reflector reflector-id interface interface-type interface-number [ mac mac-address ] [ pe-vid pe-vid ce-vid ce-vid | vlan vlan-id ] [ source-port source-port ] [ destination-port destination-port ] [ exchange-port ] [ agetime agetime | endtime { endtime | enddate endtime } ] [ share-mode ] command.

      On the network shown in Figure 18-95 of "Configuring an RFC 2544 Generalflow Test Instance", the MAC address of the reflector's UNI-B or a MAC address that has never been used is used as the MAC address.

    • Layer 3

      Run the nqa reflector reflector-id interface interface-type interface-number [ ipv4 ip-address ] [ pe-vid pe-vid ce-vid ce-vid | vlan vlan-id ] [ source-port source-port ] [ destination-port destination-port ] [ exchange-port ] [ agetime agetime | endtime { endtime | enddate endtime } ] [ share-mode ] command.

      NOTE:
      • In a Layer 3 scenario, a static ARP entry must be configured first using the arp static ip-address mac-address command.
      • In a Layer 3 scenario, the outbound interface must be specified when static ARP is configured on the reflector.

      On the network shown in Figure 18-95 of "Configuring an RFC 2544 Generalflow Test Instance", an IP address on the same network segment as the reflector's UNI-B is used as the IP address.

    • IP and IP gateway

      Run the nqa reflector reflector-id interface interface-type interface-number [ ipv4 ip-address | mac mac-address | simulate-ip ipv4 ip-address2 ] [ pe-vid pe-vid ce-vid ce-vid | vlan vlan-id ] [ source-port source-port ] [ destination-port destination-port ] [ exchange-port ] [ agetime agetime | endtime { endtime | enddate endtime } ] [ share-mode ] command.

      NOTE:

      In the IP and IP gateway scenario, you need to run the arp static ip-address mac-address command to configure a static ARP entry for the source IP address.

      On the network shown in Figure 18-95 of "Configuring an RFC 2544 Generalflow Test Instance", an IP address on the same network segment as the reflector's UNI-B is used as the simulated IP address.

  3. Run commit

    The configuration is committed.
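
As a minimal sketch for a Layer 2 scenario, the following configures a reflector on the reflector-side UNI. The reflector ID, interface, MAC address, and VLAN ID are placeholders; choose the command format from the list above that matches your scenario.

    <HUAWEI> system-view
    [~HUAWEI] nqa reflector 1 interface GigabitEthernet0/1/0 mac 00e0-fc12-3456 vlan 10
    [*HUAWEI] commit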

Configuring an Initiator

This section describes how to configure an initiator that sends simulated service traffic. You can set initiator parameters based on usage scenarios and test indicator types.

Context

On the network shown in Figure 18-95 of "Configuring an RFC 2544 Generalflow Test Instance", the following two roles are involved in a generalflow test:

  • Initiator: sends simulated service traffic to a reflector.
  • Reflector: loops the service traffic back to the initiator.

The process of configuring the initiator is as follows:

  1. Create a generalflow test instance.

  2. Configure basic information about simulated service traffic based on test scenarios.

  3. Set key test parameters based on indicators.

  4. Set generalflow test parameters.

  5. Start the generalflow test instance.

Procedure

  1. Create a generalflow test instance.
    1. Run the system-view command to enter the system view.
    2. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    3. Run the test-type generalflow command to set the test instance type to generalflow.
    4. Run the measure { throughput | loss | delay } command to configure the measurement indicator type.
  2. Set basic simulated service parameters.

    The basic simulated service parameters on the initiator must be the same as those configured on the reflector.

    The configuration procedure and configuration notes for each usage scenario are as follows:

    • Layer 2

    1. Run the destination-address mac macAddress command to specify the destination MAC address of test packets.

    2. (Optional) Run the source-address mac mac-address command to specify the source MAC address of test packets.

    3. Run the forwarding-simulation inbound-interface interface-type interface-number [ share-mode ] command to specify the inbound interface of simulated service packets.

      NOTE:

      If share-mode is configured, both test flows and non-test flows can pass through; otherwise, only test flows can pass through.

    4. Run the vlan vlan-id or pe-vid pe-vid ce-vid ce-vid command to configure VLAN IDs for simulated service packets.

    The initiator shown in Figure 18-95 of "Configuring an RFC 2544 Generalflow Test Instance" has the following parameters:

    • Destination MAC address: MAC address of the reflector's UNI-B or a MAC address that has never been used

    • Source MAC address: MAC address of the initiator's UNI-A (simulated inbound interface) or a MAC address that has never been used
    • Simulated inbound interface: UNI-A

    • VLAN ID: VLAN IDs configured on interfaces

    NOTE:

    You can run the display nqa reflector command on the reflector to view the recommended destination MAC address.

    • Layer 3

    1. Run the destination-address ipv4 destAddress command to specify the destination IP address of test packets.

    2. Run the source-address ipv4 srcAddress command to specify the source IP address of test packets.

    3. Run the forwarding-simulation inbound-interface interface-type interface-number command to specify the inbound interface of simulated service packets.

    4. (Optional) Run the vlan vlan-id or pe-vid pe-vid ce-vid ce-vid command to configure VLAN IDs for simulated service packets.

    NOTE:
    • If the initiator does not have an ARP entry corresponding to the source IP address in test packets, run the arp static ip-address mac-address command to configure a static ARP entry for the source IP address.
    • In a Layer 3 scenario, the outbound interface must be specified when static ARP is configured on the initiator.

    The initiator shown in Figure 18-95 of "Configuring an RFC 2544 Generalflow Test Instance" has the following parameters:

    • Destination IP address: an IP address on the same network segment as the reflector's UNI-B

    • Source IP address: an IP address on the same network segment as UNI-A's IP address
    • Simulated inbound interface: UNI-A

    • IP and IP gateway

    1. Run the destination-address ipv4 destAddress command to specify the destination IP address of test packets.

    2. Run the source-address ipv4 srcAddress command to specify the source IP address of test packets.

    3. Run the source-interface ifType ifNum command to specify the outbound interface of test packets.

    4. (Optional) Run the vlan vlan-id or pe-vid pe-vid ce-vid ce-vid command to configure VLAN IDs for simulated service packets.

    The initiator shown in Figure 18-95 of "Configuring an RFC 2544 Generalflow Test Instance" has the following parameters:

    • Destination IP address: CE's IP address or an IP address on the same network segment as the CE

    • Source IP address: an IP address on the same network segment as UNI-A's IP address

    • L2VPN accessing L3VPN

    1. Run the destination-address ipv4 destAddress mac macAddress command to specify the destination IP and MAC addresses of test packets.

    2. Run the source-address ipv4 srcAddress command to specify the source IP address of test packets.

    3. Run the forwarding-simulation inbound-interface interface-type interface-number command to specify the inbound interface of simulated service packets.

    The initiator shown in Figure 18-95 of "Configuring an RFC 2544 Generalflow Test Instance" has the following parameters:

    • Destination IP address: CE's IP address or an IP address on the same network segment as the CE

    • Destination MAC address: MAC address of the Layer 3 gateway interface

    • Source IP address: an IP address on the same network segment as UNI-A's IP address
    • Simulated inbound interface: UNI-A

  3. Set key test parameters based on indicators.

    • Throughput

    1. Run the rate rateL rateH command to configure upper and lower rate thresholds.
    2. Run the interval seconds interval command to set the interval at which test packets are transmitted at a specific rate.

    3. Run the precision precision-value command to set the throughput precision.

    4. Run the fail-ratio fail-ratio-value command to set the allowed packet loss ratio for the throughput test. The value is expressed in units of 1/10000. If the actual packet loss ratio is lower than the configured value, the test is considered successful and continues.

    • Delay

    1. Run the rate rateL command to set the rate at which test packets are sent.
    2. Run the interval seconds interval command to set the interval at which test packets are sent.

    • Packet loss rate

    1. Run the rate rateL command to set the rate at which test packets are sent.

  4. Configure common parameters for the test instance.
    1. Run the datasize datasizeValue & <1-7> command to set the size of the Data field in an NQA test packet.

      In Layer 2 and Layer 3 scenarios, the datasizeValue value cannot be greater than the maximum MTU of the simulated inbound interface.

    2. Run the duration duration command to set the duration of the test instance.
    3. Run the records result number command to configure the maximum number of records in the result table.
    4. Run the priority 8021p priority-value command to configure an 802.1p priority value for generalflow test packets in an Ethernet scenario.
    5. Run the tos tos-value command to configure a ToS value for test packets.
    6. (Optional) Run the exchange-port enable command to enable the switching between the UDP source port number and UDP destination port number.
  5. Run start now

    The NQA test is started.

    Currently, an RFC 2544 generalflow test can be started only by running the start now command. Note that user services will be interrupted during the test.

  6. Run commit

    The configuration is committed.
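
The following minimal sketch shows a Layer 2 throughput test on the initiator, using only commands listed above. The instance name, destination MAC address, simulated inbound interface, VLAN ID, and rate thresholds are placeholders; verify value ranges on your device.

    <HUAWEI> system-view
    [~HUAWEI] nqa test-instance admin generalflow
    [*HUAWEI-nqa-admin-generalflow] test-type generalflow
    [*HUAWEI-nqa-admin-generalflow] measure throughput
    [*HUAWEI-nqa-admin-generalflow] destination-address mac 00e0-fc12-3456
    [*HUAWEI-nqa-admin-generalflow] forwarding-simulation inbound-interface GigabitEthernet0/1/0
    [*HUAWEI-nqa-admin-generalflow] vlan 10
    [*HUAWEI-nqa-admin-generalflow] rate 1000 10000
    [*HUAWEI-nqa-admin-generalflow] start now
    [*HUAWEI-nqa-admin-generalflow] commit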

Verifying the Configuration

After configuring the generalflow test, you can view the generalflow test results.

Prerequisites

All generalflow test configurations are complete.

NQA test results are not displayed automatically on the terminal. To view test results, run the display nqa results command.

Procedure

  • Run the display nqa results [ test-instance adminName testName ] command on the initiator to check the test results.
  • Run the display nqa reflector [ reflector-id ] command on the reflector to view reflector information.

Configuring a Y.1564 Ethernet Service Activation Test

This section describes how to configure an Ethernet service activation test. This test is conducted before network services are activated and delivered to users. It checks whether configurations are correct and network performance meets SLA requirements.

Context

An Ethernet service activation test is a method defined in Y.1564. This test helps carriers rapidly and accurately verify whether network performance meets SLA requirements before service rollout.

Pre-configuration Tasks

Before configuring an Ethernet service activation test, complete the following tasks:
  • Layer 2 scenarios:
    • In a native Ethernet scenario, configure reachable Layer 2 links between the initiator and reflector.
    • In an L2VPN/EVPN L2VPN scenario, configure reachable links between CEs on both ends of an L2VPN/EVPN L2VPN connection.
    • In an EVPN VXLAN scenario, configure reachable links between devices on both ends of an EVPN VXLAN connection.
    • In an HVPN scenario, configure reachable links between CEs on both ends of an HVPN connection.
  • Layer 3 scenarios:
    • In a native IP scenario, configure reachable IP links between the initiator and reflector.
    • In an L3VPN/EVPN L3VPN scenario, configure reachable links between CEs on both ends of an L3VPN/EVPN L3VPN connection.
    • In an EVPN VXLAN scenario, configure reachable links between devices on both ends of an EVPN VXLAN connection.
  • In an L2VPN+L3VPN scenario, configure reachable links between the L2VPN and L3VPN.

Configuring a Reflector

This section describes how to configure a reflector, which loops traffic back to an initiator. Parameters configured on the reflector vary according to scenarios.

Context

Devices performing an Ethernet service activation test play two roles: initiator and reflector. An initiator sends simulated service traffic to a reflector, and the reflector reflects the service traffic.

The reflector must be specified before a test is started.

A reflector can reflect test traffic in either of the following modes:

  • Interface-based mode: A reflector reflects all traffic that its interface receives.
  • Flow-based mode: A reflector reflects only traffic meeting specified conditions. In flow-based mode, a test flow must have been configured.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run nqa test-flow flow-id

    A test flow is configured, and the NQA test flow view is displayed.

  3. Configure service flow characteristics.

    1. Run either of the following commands to configure a traffic type and specify a MAC/IP address (or range) for test packets:
      • traffic-type mac { destination destination-mac [ end-destination-mac ] | source source-mac [ end-source-mac ] }
      • traffic-type ipv4 { destination destination-ip [ end-destination-ip ] | source source-ip [ end-source-ip ] }

      Depending on the service type, such as Ethernet, IP, L2VPN, L2VPN accessing L3VPN, or L3VPN, services may be forwarded based on MAC addresses, IP addresses, or MPLS labels. When testing a service network, you must determine the specific service forwarding mode before configuring a traffic type for the service flow to be tested.

      • For Ethernet Layer 2 switching and L2VPN services, a MAC address must be specified, and an IP address is optional.
      • For IP routing and L3VPN services, an IP address and a MAC address must be specified. If no IP address or MAC address is specified, the reflector will reflect all the traffic, affecting other service functions.
      • For L2VPN accessing L3VPN, both MAC and IP addresses must be specified.
    2. Configure the following parameters as needed:
      • In the NQA test flow view, run the vlan vlan-id [ end-vlan-vid ] command to specify a single VLAN ID for Ethernet packets.
      • In the NQA test flow view, run the pe-vid pe-vid ce-vid ce-vid [ ce-vid-end ] command to specify double VLAN IDs for Ethernet packets.
      • Run the udp destination-port destination-port [ end-destination-port ] command to specify a destination UDP port number or range.
      • Run the udp source-port source-port [ end-source-port ] command to specify a source UDP port number or range.
        • For the same test flow, a range can be specified only in one of the traffic-type, vlan, pe-vid, udp destination-port, and udp source-port commands. In addition, the difference between the start and end values cannot be more than 127, and the end value must be greater than the start value.

        • In the traffic-type command, the start MAC or IP address can differ from the end MAC or IP address in only one octet. For example, if the start IP address is 1.1.1.1, the end IP address can only be an IP address on the network segment 1.1.1.0.

  4. Run quit

    Return to the system view.

  5. Run nqa reflector reflector-id interface { interface-name | interface-type interface-number } [ [ [ test-flow { testFlowId } &<1-16> ] | [ [ ipv4 ip-address | simulate-ip ipv4 ip-address2 | mac mac-address ] [ pe-vid pe-vid ce-vid ce-vid | vlan vlan-id ] [ source-port source-port ] [ destination-port destination-port ] ] ] | exclusive ] [ exchange-port ] [ agetime agetime | endtime { endtime | enddate endtime } ]

    A reflector is specified.

    If the test-flow flow-id & <1-16> parameter is configured, the reflector reflects traffic based on a specified flow. Otherwise, the reflector reflects all traffic that its interface receives. The agetime age-time parameter is optional, and the default value is 14400s.

  6. Run commit

    The configuration is committed.
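
The following minimal sketch configures a flow-based reflector. The test flow ID, destination MAC address, VLAN ID, reflector ID, interface, and prompts are illustrative placeholders.

    <HUAWEI> system-view
    [~HUAWEI] nqa test-flow 1
    [*HUAWEI-nqa-testflow-1] traffic-type mac destination 00e0-fc12-3456
    [*HUAWEI-nqa-testflow-1] vlan 10
    [*HUAWEI-nqa-testflow-1] quit
    [*HUAWEI] nqa reflector 1 interface GigabitEthernet0/1/0 test-flow 1
    [*HUAWEI] commit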

Configuring an Initiator

This section describes how to configure an initiator that sends simulated service traffic. You can set initiator parameters based on usage scenarios and test indicator types.

Context

Devices performing an Ethernet service activation test play two roles: initiator and reflector. An initiator sends simulated service traffic to a reflector, and the reflector reflects the service traffic.

The process of configuring an initiator is as follows:

  1. Configure a test flow.

  2. Configure a test instance.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run nqa test-flow flow-id

    A test flow is configured, and the NQA test flow view is displayed.

  3. Specify service flow characteristics.

    1. Run either of the following commands to configure a traffic type and specify a MAC/IP address (or range) for test packets:
      • traffic-type mac { destination destination-mac [ end-destination-mac ] | source source-mac [ end-source-mac ] }
      • traffic-type ipv4 { destination destination-ip [ end-destination-ip ] | source source-ip [ end-source-ip ] }

      Depending on the service type, such as Ethernet, IP, L2VPN, L2VPN accessing L3VPN, or L3VPN, services may be forwarded based on MAC addresses, IP addresses, or MPLS labels. When testing a service network, you must determine the specific service forwarding mode before configuring a traffic type for the service flow to be tested.

      • For Ethernet Layer 2 switching and L2VPN services, a MAC address must be specified, and an IP address is optional.
      • For IP routing and L3VPN services, an IP address and a MAC address must be specified. If no IP address or MAC address is specified, the reflector will reflect all the traffic, affecting other service functions.
      • For L2VPN accessing L3VPN, both MAC and IP addresses must be specified.
    2. Configure the following parameters as needed:
      • In the NQA test flow view, run the vlan vlan-id [ end-vlan-vid ] command to specify a single VLAN ID for Ethernet packets.
      • In the NQA test flow view, run the pe-vid pe-vid ce-vid ce-vid [ ce-vid-end ] command to specify double VLAN IDs for Ethernet packets.
      • Run the udp destination-port destination-port [ end-destination-port ] command to configure a destination UDP port number or range.
      • Run the udp source-port source-port [ end-source-port ] command to configure a source UDP port number or range.
      • For the same test flow, a range can be specified only in one of the traffic-type, vlan, pe-vid, udp destination-port, and udp source-port commands. In addition, the difference between the start and end values cannot be more than 127, and the end value must be greater than the start value.

      • In the traffic-type command, the start MAC or IP address can differ from the end MAC or IP address in only one octet. For example, if the start IP address is 1.1.1.1, the end IP address can only be an IP address on the network segment 1.1.1.0.

  4. Run bandwidth cir cir-value [ eir eir-value ]

    A bandwidth profile is configured for the NQA test flow, including service parameters such as the CIR.

    The default excess information rate (EIR) is 0 kbit/s. If the eir eir-value parameter is not configured, the EIR is not tested.

  5. Run sac flr flr-value ftd ftd-value fdv fdv-value

    The service acceptance criteria (SAC) are configured: the frame loss ratio (FLR), expressed in 1/100000; the frame transfer delay (FTD), expressed in µs; and the frame delay variation (FDV), expressed in µs.

  6. Enable the simple CIR test, enable traffic policing test, configure the color mode, and configure a test flow description as needed:

    1. Run the cir simple-test enable command to enable the simple CIR test.

    2. Run the traffic-policing test enable command to enable the traffic policing test.

    3. Run the color-mode { 8021p green begin-8021p-value [ end-8021p-value ] yellow begin-8021p-value [ end-8021p-value ] | dscp green begin-dscp-value [ end-dscp-value ] yellow begin-dscp-value [ end-dscp-value ]}* command to enable the color mode and configure the mapping between packet priorities and colors.

    4. Run the description description command to configure a description for the test flow.

  7. Configure an Ethernet service activation test.
    1. Run the nqa test-instance admin-name test-name command to create an NQA test instance and enter its view.
    2. Run the test-type ethernet-service command to set the test instance type to Ethernet service activation.
    3. Run the test-flow { flow-id } &<1-16> command to configure referenced test flows. Multiple test flows can be configured.
    4. Run the forwarding-simulation inbound-interface { ifName | ifType ifNum } command to configure a simulated inbound interface.
    5. (Optional) Run the packet-size packet-size &<1-10> command to configure one or multiple sizes for test packets.
    6. (Optional) Run the duration { configuration-test configuration-test-time | performance-test performance-test-time } command to set the durations of configuration and performance tests.
    7. Run the start now command to start the test.
  8. Run commit

    The configuration is committed.
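
The following minimal sketch configures a test flow with a bandwidth profile and SAC, binds it to an Ethernet service activation test instance, and starts the test. All names, addresses, and values are placeholders; verify value ranges on your device.

    <HUAWEI> system-view
    [~HUAWEI] nqa test-flow 1
    [*HUAWEI-nqa-testflow-1] traffic-type mac destination 00e0-fc12-3456
    [*HUAWEI-nqa-testflow-1] vlan 10
    [*HUAWEI-nqa-testflow-1] bandwidth cir 10000 eir 2000
    [*HUAWEI-nqa-testflow-1] sac flr 1000 ftd 10000 fdv 10000
    [*HUAWEI-nqa-testflow-1] quit
    [*HUAWEI] nqa test-instance admin y1564
    [*HUAWEI-nqa-admin-y1564] test-type ethernet-service
    [*HUAWEI-nqa-admin-y1564] test-flow 1
    [*HUAWEI-nqa-admin-y1564] forwarding-simulation inbound-interface GigabitEthernet0/1/1
    [*HUAWEI-nqa-admin-y1564] start now
    [*HUAWEI-nqa-admin-y1564] commit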

Verifying the Configuration

After configuring the Ethernet service activation test, you can check the test results.

Prerequisites

All configurations related to the Ethernet service activation test are complete.

NQA test results are not displayed automatically on the terminal. To view test results, run the display nqa results command.

Procedure

  • Run the display nqa results [ test-instance adminName testName ] command on the initiator to check the test results.
  • Run the display nqa reflector [ reflector-id ] command on the reflector to view reflector information.

Configuring the Device to Send Test Results to the FTP/SFTP Server

The NMS needs to obtain test results from devices. If the NMS cannot poll test results in time, the results are lost. Uploading test results to an FTP/SFTP server minimizes the loss of test results.

Usage Scenario

The result table of NQA test instances records the results of each test type. A maximum of 5000 test result records can be saved in total. If the number of records reaches 5000, the stored test results are uploaded, and each new test result overwrites the earliest one. If the NMS cannot poll test results in time, test results are lost. You can configure the device to upload test results to the FTP/SFTP server either periodically or when the number of results reaches the maximum capacity of local storage. This effectively prevents the loss of test results and facilitates network management based on the analysis of test results collected at different times.

Pre-configuration Tasks

Before configuring the device to send test results to the FTP/SFTP server, complete the following tasks:

  • Configure the FTP/SFTP server.

  • Configure a reachable route between the NQA client and the FTP/SFTP server.

  • Configure a test instance.

Data Preparation

Before configuring the device to send test results to the FTP/SFTP server, you need the following data.

  1. IP address of the FTP/SFTP server
  2. Username and password used for logging in to the FTP/SFTP server
  3. Name of a file in which test results are saved through FTP/SFTP
  4. Interval at which test results are uploaded through FTP

Setting Parameters for Configuring the Device to Send Test Results to the FTP/SFTP Server

Before starting a test instance, set the IP address of the FTP/SFTP server that receives test results, username and password for logging in to the FTP/SFTP server, name of the file in which test results are saved, interval at which test results are uploaded, and number of retransmissions.

Context

Perform the following operations on the NQA client.

FTP is not secure, and SFTP is recommended.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run nqa upload test-type { icmp | icmpjitter | jitter | udp } { ftp | sftp } ipv4 ipv4-address file-name file-name [ vpn-instance vpn-instance-name ] [ port port-number ] username user-name password password [ interval upload-interval ] [ retry retry-times ]

    The function to upload the result of a test instance of a specified type to a specified server is enabled.

  3. Run commit

    The configuration is committed.
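
The following minimal sketch enables SFTP upload for UDP jitter test results. The server address, file name, username, password, and upload interval are placeholders; verify the valid parameter ranges on your device.

    <HUAWEI> system-view
    [~HUAWEI] nqa upload test-type jitter sftp ipv4 10.2.2.2 file-name nqaresult username nqauser password Pwd@1234 interval 600
    [*HUAWEI] commit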

Starting a Test Instance

After a test instance is started, test results are periodically recorded in files.

Context

Perform the following operations on the NQA client.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run nqa test-instance admin-name test-name

    The NQA view is displayed.

  3. Run test-type { icmp | icmpjitter | jitter | udp }

    A test instance type is set.

  4. Run destination-address ipv4 destAddress

    A destination address is configured.

  5. (Optional) Run destination-port port-number

    A destination port number is configured.

  6. Run start

    An NQA test instance is started.

    An NQA test instance can be started immediately, at a specified time, or after a specified delay.

    • Run start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]

      The test instance is started immediately.

    • Run start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]

      The test instance is started at a specified time.

    • Run start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]

      The test instance is started after a specified delay.

    • Run start daily hh:mm:ss to hh:mm:ss [ begin { yyyy/mm/dd | yyyy-mm-dd } ] [ end { yyyy/mm/dd | yyyy-mm-dd } ]

      The test instance is started at a specified time every day.

  7. Run commit

    The configuration is committed.
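
The following minimal sketch creates and immediately starts a UDP jitter test instance whose results can then be uploaded (the nqa upload command applies to test instances of the specified type). The instance name, destination address, and port are placeholders, and the NQA UDP server is assumed to be already configured on the peer.

    <HUAWEI> system-view
    [~HUAWEI] nqa test-instance admin jitter1
    [*HUAWEI-nqa-admin-jitter1] test-type jitter
    [*HUAWEI-nqa-admin-jitter1] destination-address ipv4 10.1.1.2
    [*HUAWEI-nqa-admin-jitter1] destination-port 4000
    [*HUAWEI-nqa-admin-jitter1] start now
    [*HUAWEI-nqa-admin-jitter1] commit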

Verifying the Configuration

After configuring the device to send test results to the FTP/SFTP server, you can view the related configuration.

Prerequisites

All configurations that enable the device to send test results to the FTP/SFTP server are complete.

Procedure

  1. Run the display nqa upload file-info command to view information about the files that are being uploaded and the files that have been uploaded.

Maintaining NQA

Maintaining NQA involves stopping or restarting NQA test instances, deleting statistics, and deleting test records.

Checking Test and Server Types Supported by the NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M

You can run display commands to check the test and server types supported by the NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M.

Procedure

  • Run the display nqa support-test-type command to check the supported test types.
  • Run the display nqa support-server-type command to check the supported server types.

Stopping a Test Instance

This section describes how to stop a test instance when its test parameters need to be changed.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run nqa test-instance admin-name test-name

    An NQA test instance is created, and the test instance view is displayed.

  3. Run stop

    The NQA test instance is stopped.

  4. Run commit

    The configuration is committed.

Restarting an NQA Test Instance

This section describes how to restart (namely, to stop and then immediately start) an NQA test instance.

Context

Restarting an NQA test instance terminates the running test instance.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run nqa test-instance admin-name test-name

    An NQA test instance is created, and the test instance view is displayed.

  3. Run restart

    The NQA test instance is restarted.

  4. Run commit

    The configuration is committed.

Deleting Test Records

This section describes how to delete test records before the next test is conducted.

Prerequisites

The NQA test instance has been stopped.

Context

Test records cannot be restored after being deleted. Exercise caution when running the clear-records command.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run nqa test-instance admin-name test-name

    An NQA test instance is created, and the test instance view is displayed.

  3. Run clear-records

    Historical records and result records of the NQA test instance are deleted.

  4. Run commit

    The configuration is committed.

Configuration Examples for NQA

This section describes several NQA configuration examples.

Example for Configuring an NQA Test to Measure the DNS Resolution Speed on an IP Network

This section provides an example for configuring an NQA test to measure the performance of interaction between a client and the DNS server.

Networking Requirements

On the network shown in Figure 18-98, DeviceA needs to access HostA using the domain name Server.com. A DNS test instance can be configured on DeviceA to measure the performance of interaction between DeviceA and the DNS server.

Figure 18-98 Measuring the DNS resolution speed

In this example, interface1 represents GE0/1/0.



Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure reachable routes between DeviceA, the DNS server, and HostA at the network layer.

  2. Configure a DNS test instance on DeviceA and start the test instance to measure the DNS resolution speed on the IP network.

Data Preparation

To complete the configuration, you need the following data:

  • IP address of the DNS server

  • Domain name and IP address of HostA

Procedure

  1. Configure reachable routes between DeviceA, the DNS server, and HostA at the network layer. (Omitted)
  2. Configure a DNS test instance and start it.

    <HUAWEI> system-view
    [~HUAWEI] sysname DeviceA
    [*HUAWEI] commit
    [~DeviceA] dns resolve
    [*DeviceA] dns server 10.3.1.1
    [*DeviceA] dns server source-ip 10.1.1.1
    [*DeviceA] nqa test-instance admin dns
    [*DeviceA-nqa-admin-dns] test-type dns
    [*DeviceA-nqa-admin-dns] dns-server ipv4 10.3.1.1
    [*DeviceA-nqa-admin-dns] destination-address url Server.com
    [*DeviceA-nqa-admin-dns] commit
    [~DeviceA-nqa-admin-dns] start now
    [*DeviceA-nqa-admin-dns] commit

  3. Verify the test result. Min/Max/Average Completion Time indicates the delay between the time when a DNS request packet is sent and the time when a DNS response packet is received. In this example, the delay is 208 ms.

    [~DeviceA-nqa-admin-dns] display nqa results test-instance admin dns
    NQA entry(admin, dns) :testflag is inactive ,testtype is dns
      1 . Test 1 result   The test is finished
       Send operation times: 1                Receive response times: 1             
       Completion:success                     RTD OverThresholds number:0           
       Attempts number:1                      Drop operation number:0               
       Disconnect operation number:0          Operation timeout number:0            
       System busy operation number:0         Connection fail number:0              
       Operation sequence errors number:0     RTT Status errors number:0            
       Destination ip address:10.3.1.1
       Min/Max/Average Completion Time: 208/208/208
       Sum/Square-Sum  Completion Time: 208/43264
       Last Good Probe Time: 2018-01-25 09:18:22.6
       Lost packet ratio: 0 %

Configuration Files

DeviceA configuration file

#
sysname DeviceA
#
dns resolve
dns server 10.3.1.1
dns server source-ip 10.1.1.1
#
nqa test-instance admin dns
 test-type dns
 destination-address url Server.com
 dns-server ipv4 10.3.1.1
#
return

Example for Configuring an NQA ICMP Test to Check IP Network Reachability

This section provides an example for configuring an ICMP test to check network reachability between an NQA client and an NQA server.

Networking Requirements

On the network shown in Figure 18-99, DeviceA serves as an NQA client to check whether DeviceB is reachable.

Figure 18-99 Configuring an NQA ICMP test to check IP network reachability

In this example, interface1 represents GE0/1/0.


Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure DeviceA as the NQA client and DeviceB as the NQA server. Configure an ICMP test instance on DeviceA.

  2. Start a test instance.

Data Preparation

To complete the configuration, you need the following data:

  • IP addresses of DeviceA and DeviceB

Procedure

  1. Configure DeviceA (NQA client). Create an ICMP test instance, and set the destination IP address to the IP address of DeviceB.

    <HUAWEI> system-view
    [~HUAWEI] sysname DeviceA
    [*HUAWEI] commit
    [~DeviceA] nqa test-instance admin icmp
    [*DeviceA-nqa-admin-icmp] test-type icmp
    [*DeviceA-nqa-admin-icmp] destination-address ipv4 10.1.1.2

  2. Start the test instance immediately.

    [*DeviceA-nqa-admin-icmp] start now
    [*DeviceA-nqa-admin-icmp] commit

  3. Verify the test result. The command output shows that there is a reachable route between the NQA client and server (DeviceB).

    [~DeviceA-nqa-admin-icmp] display nqa results test-instance admin icmp
    NQA entry(admin, icmp) :testflag is inactive ,testtype is icmp
      1 . Test 1 result   The test is finished
       Send operation times: 3                Receive response times: 3             
       Completion:success                     RTD OverThresholds number:0           
       Attempts number:1                      Drop operation number:0               
       Disconnect operation number:0          Operation timeout number:0            
       System busy operation number:0         Connection fail number:0              
       Operation sequence errors number:0     RTT Status errors number:0            
       Destination ip address:10.1.1.2
       Min/Max/Average Completion Time: 31/46/36
       Sum/Square-Sum  Completion Time: 108/4038
       Last Good Probe Time: 2024-4-8 10:7:11.4 
       Lost packet ratio: 0 %

Configuration Files

DeviceA configuration file

#
sysname DeviceA
#
interface GigabitEthernet0/1/0
 undo shutdown
 ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin icmp
 test-type icmp
 destination-address ipv4 10.1.1.2
#
return

Example for Configuring an NQA TCP Test to Measure the Response Time on an IP Network

This section provides an example for configuring an NQA TCP test to measure the response time on an IP network.

Networking Requirements

On the network shown in Figure 18-100, the headquarters and a subsidiary often need to use TCP to exchange files with each other. The time taken to respond to a TCP transmission request must be less than 800 ms. An NQA TCP test can be configured to measure the TCP response time between DeviceA and DeviceD on the edge of the enterprise network.

Figure 18-100 Configuring an NQA TCP test to measure the response time on an IP network
  • In this example, interface 1 represents GE0/1/0.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure DeviceD as the NQA client and DeviceA as the NQA server, and create a TCP test instance.

  2. Configure the client to perform the test at 10:00 every day and start the test instance.

Data Preparation

To complete the configuration, you need the following data:

  • IP addresses of DeviceA and DeviceD on the edge of the enterprise network

  • Number of the port used to monitor TCP services

Procedure

  1. Configure DeviceA (NQA server).

    <DeviceA> system-view
    [~DeviceA] nqa-server tcpconnect 10.1.1.1 4000
    [*DeviceA] commit

  2. Configure DeviceD (NQA client). Create a TCP test instance, and set the destination IP address to the IP address of DeviceA.

    <DeviceD> system-view
    [*DeviceD] nqa test-instance admin tcp
    [*DeviceD-nqa-admin-tcp] test-type tcp
    [*DeviceD-nqa-admin-tcp] destination-address ipv4 10.1.1.1
    [*DeviceD-nqa-admin-tcp] destination-port 4000
    [*DeviceD-nqa-admin-tcp] commit 

  3. Start the test instance immediately.

    [*DeviceD-nqa-admin-tcp] start now
    [*DeviceD-nqa-admin-tcp] commit

  4. Verify the test result. The command output shows that the TCP response time is less than 800 ms.

    [~DeviceD-nqa-admin-tcp] display nqa results test-instance admin tcp
    
    NQA entry(admin, tcp) :testflag is active ,testtype is tcp 
    1 . Test 1 result   The test is finished
       Send operation times: 3                Receive response times: 3
       Completion:success                     RTD OverThresholds number:0
       Attempts number:1                      Drop operation number:0
       Disconnect operation number:0          Operation timeout number:0
       System busy operation number:0         Connection fail number:0
       Operation sequence errors number:0     RTT Stats errors number:0
       Destination ip address:10.1.1.1
       Min/Max/Average Completion Time: 600/610/603
       Sum/Square-Sum  Completion Time: 1810/1092100
       Last Good Probe Time: 2011-01-16 02:59:41.6
       Lost packet ratio: 0 %                         

  5. Configure the client to perform the test at 10:00 every day.

    [~DeviceD-nqa-admin-tcp] stop
    [*DeviceD-nqa-admin-tcp] start daily 10:00:00 to 10:30:00  
    [*DeviceD-nqa-admin-tcp] commit

Configuration Files

  • DeviceA configuration file

    #
     sysname DeviceA
    #
     nqa-server tcpconnect 10.1.1.1 4000
    #
    isis 1
     network-entity 00.0000.0000.0001.00
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1 
    #
    return
  • DeviceD configuration file

    #
     sysname DeviceD
    #
    isis 1
     network-entity 00.0000.0000.0002.00  
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.2.2.1 255.255.255.0
     isis enable 1 
    #
    nqa test-instance admin tcp
     test-type tcp
     destination-address ipv4 10.1.1.1
     destination-port 4000
     start daily 10:00:00 to 10:30:00     
    #
    return

Example for Configuring an NQA UDP Test to Measure the Communication Speed on an IP Network

This section provides an example for configuring a UDP test to measure the speed of communication through UDP between an NQA client and an NQA server.

Networking Requirements

On the network shown in Figure 18-101, DeviceA functions as an NQA client and sends constructed UDP packets to DeviceB. This example measures the speed of communication through UDP between the source and destination devices.

Figure 18-101 Configuring an NQA UDP test to measure the speed of communication through UDP on an IP network

In this example, interface1 represents GE0/1/0.


Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure DeviceA as the NQA client and DeviceB as the NQA server. Configure a UDP test instance on DeviceA.

  2. Start a test instance.

Data Preparation

To complete the configuration, you need the following data:

  • IP addresses of DeviceA and DeviceB.

Procedure

  1. Configure DeviceB as the NQA server.

    <HUAWEI> system-view
    [~HUAWEI] sysname DeviceB
    [*HUAWEI] commit
    [~DeviceB] nqa server udpecho 10.1.1.2 4000
    [*DeviceB] commit

  2. Configure DeviceA as the NQA client, create a UDP test instance, and set the destination IP address to the IP address of DeviceB.

    <HUAWEI> system-view
    [~HUAWEI] sysname DeviceA
    [*HUAWEI] commit
    [~DeviceA] nqa test-instance admin udp
    [*DeviceA-nqa-admin-udp] test-type udp
    [*DeviceA-nqa-admin-udp] destination-address ipv4 10.1.1.2
    [*DeviceA-nqa-admin-udp] destination-port 4000

  3. Start the test instance immediately.

    [*DeviceA-nqa-admin-udp] start now
    [*DeviceA-nqa-admin-udp] commit

  4. Verify the test result. The test result shows the speed of communication through UDP between devices on the network based on fields such as Min/Max/Average Completion Time.

    [~DeviceA-nqa-admin-udp] display nqa results test-instance admin udp
    NQA entry(admin, udp) :testflag is inactive ,testtype is udp
      1 . Test 1 result   The test is finished
       Send operation times: 3                Receive response times: 3             
       Completion:success                     RTD OverThresholds number:0           
       Attempts number:1                      Drop operation number:0               
       Disconnect operation number:0          Operation timeout number:0            
       System busy operation number:0         Connection fail number:0              
       Operation sequence errors number:0     RTT Status errors number:0            
       Destination ip address:10.1.1.2
       Min/Max/Average Completion Time: 2/38/14
       Sum/Square-Sum  Completion Time: 42/1452
       Last Good Probe Time: 2024-02-19 08:44:41.3
       Lost packet ratio: 0 %

Configuration Files

  • DeviceA configuration file
    #
    sysname DeviceA
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    nqa test-instance admin udp
     test-type udp
     destination-address ipv4 10.1.1.2
     destination-port 4000
    #
    return
  • DeviceB configuration file
    #
    sysname DeviceB
    #
    nqa server udpecho 10.1.1.2 4000
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
    #
    return

Example for Configuring an NQA Trace Test to Check the Forwarding Path on an IP Network

This section provides an example for configuring a trace test to check the forwarding path between an NQA client and an NQA server.

Networking Requirements

On the network shown in Figure 18-102, DeviceA serves as an NQA client to check the forwarding path between DeviceA and DeviceC.

Figure 18-102 Configuring an NQA trace test to check the forwarding path on an IP network

In this example, interfaces 1 and 2 represent GE0/1/0 and GE0/2/0, respectively.


Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure DeviceA as the NQA client and DeviceC as the NQA server. Configure a trace test instance on DeviceA.

  2. Start a test instance.

Data Preparation

To complete the configuration, you need the following data:

  • IP addresses of DeviceA and DeviceC.

Procedure

  1. Configure DeviceA (NQA client). Create a trace test instance, and set the destination IP address to the IP address of DeviceC.

    <HUAWEI> system-view
    [~HUAWEI] sysname DeviceA
    [*HUAWEI] commit
    [~DeviceA] nqa test-instance admin trace
    [*DeviceA-nqa-admin-trace] test-type trace
    [*DeviceA-nqa-admin-trace] destination-address ipv4 10.2.1.2

  2. Start the test instance immediately.

    [*DeviceA-nqa-admin-trace] start now
    [*DeviceA-nqa-admin-trace] commit

  3. Verify the test result. You can view information about each hop on the forwarding path from DeviceA to DeviceC.

    [~DeviceA-nqa-admin-trace] display nqa results test-instance admin trace
    NQA entry(admin, trace) :testflag is inactive ,testtype is trace
      1 . Test 1 result   The test is finished
       Completion:success                     Attempts number:1
       Disconnect operation number:0          Operation timeout number:0
       System busy operation number:0         Connection fail number:0
       Operation sequence errors number:0     RTT Status errors number:0
       Drop operation number:0
       Last good path Time: 2024-05-06 06:58:15.9
       1 . Hop 1
        Send operation times: 3              Receive response times: 3
        Min/Max/Average Completion Time: 1/4/2
        Sum/Square-Sum  Completion Time: 6/18
        RTD OverThresholds number:0
        Last Good Probe Time: 2024-05-06 06:58:15.9
        Destination ip address:10.1.1.2
        Lost packet ratio: 0 %
       2 . Hop 2
        Send operation times: 3              Receive response times: 3
        Min/Max/Average Completion Time: 1/3/1
        Sum/Square-Sum  Completion Time: 5/11
        RTD OverThresholds number:0
        Last Good Probe Time: 2024-05-06 06:58:15.9
        Destination ip address:10.2.1.2
        Lost packet ratio: 0 %

Configuration Files

  • DeviceA configuration file
    #
    sysname DeviceA
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    ip route-static 10.2.1.0 255.255.255.0 10.1.1.2
    #
    nqa test-instance admin trace
     test-type trace
     destination-address ipv4 10.2.1.2
    #
    return
  • DeviceB configuration file
    #
    sysname DeviceB
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
    #
    interface GigabitEthernet0/2/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
    #
    return
  • DeviceC configuration file
    #
    sysname DeviceC
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
    #
    ip route-static 10.1.1.0 255.255.255.0 10.2.1.1
    #
    return

Example for Configuring NQA to Measure the VoIP Service Jitter

This section provides an example for configuring an NQA UDP jitter test to measure the Voice over Internet Protocol (VoIP) service jitter.

Networking Requirements

On the network shown in Figure 18-103, the headquarters often holds teleconferences with its subsidiary through VoIP. It is required that the round-trip delay be less than 250 ms and the jitter be less than 20 ms. An NQA UDP jitter test can be configured to simulate a VoIP service and measure the service jitter.

Figure 18-103 Configuring NQA to measure the VoIP service jitter
  • In this example, interface1 represents GE0/1/0.


Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure DeviceD as the NQA client and DeviceA as the NQA server. Configure a UDP jitter test instance on DeviceD.

  2. Start the test instance.

Data Preparation

To complete the configuration, you need the following data:

  • IP addresses of DeviceA and DeviceD

  • Code type of the simulated VoIP service

Procedure

  1. Configure DeviceA (NQA server).

    <DeviceA> system-view
    [~DeviceA] nqa-server udpecho 10.1.1.1 4000
    [*DeviceA] commit
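
    Optionally, you can check that the UDP echo server is listening on the configured address and port before configuring the client. A minimal sketch, assuming the display nqa-server command is available in your version (output omitted here):

    [~DeviceA] display nqa-server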

  2. Configure DeviceD (NQA client).
    1. Configure a packet version for the UDP jitter test instance to be created.

      <DeviceD> system-view
      [~DeviceD] nqa-jitter tag-version 2

    2. Create a UDP jitter test instance and set the destination IP address to the IP address of DeviceA.

      [*DeviceD] nqa test-instance admin udpjitter
      [*DeviceD-nqa-admin-udpjitter] test-type jitter
      [*DeviceD-nqa-admin-udpjitter] destination-address ipv4 10.1.1.1
      [*DeviceD-nqa-admin-udpjitter] destination-port 4000

    3. Specify a code type for simulated VoIP services.

      [*DeviceD-nqa-admin-udpjitter] jitter-codec g711a
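
    Optionally, you can also control how many packets are sent in each test. The sample output in step 4 shows 1000 probe packets (SendProbe:1000); the total per test is typically the product of the probe count and the per-probe packet number. The following is a minimal sketch; the values shown (10 probes of 100 packets each) are illustrative assumptions, not values mandated by this example:

      [*DeviceD-nqa-admin-udpjitter] jitter-packetnum 100
      [*DeviceD-nqa-admin-udpjitter] probe-count 10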

  3. Start the test instance immediately.

    [*DeviceD-nqa-admin-udpjitter] start now
    [*DeviceD-nqa-admin-udpjitter] commit

  4. Verify the test result. The command output shows that the round-trip delay is less than 250 ms, and the jitter is less than 20 ms.

    [~DeviceD-nqa-admin-udpjitter] display nqa results test-instance admin udpjitter
    
    NQA entry(admin, udpjitter) :testflag is active ,testtype is jitter             
      1 . Test 1 result   The test is finished                                      
       SendProbe:1000                         ResponseProbe:919                     
       Completion:success                     RTD OverThresholds number:0           
       OWD OverThresholds SD number:0         OWD OverThresholds DS number:0        
       Min/Max/Avg/Sum RTT:1/408/5/4601       RTT  Square Sum:1032361               
       NumOfRTT:919                           Drop operation number:0               
       Operation sequence errors number:0     RTT Stats errors number:0             
       System busy operation number:0         Operation timeout number:81           
       Min Positive SD:1                      Min Positive DS:1                     
       Max Positive SD:2                      Max Positive DS:9                     
       Positive SD Number:67                  Positive DS Number:70                 
       Positive SD Sum:70                     Positive DS Sum:80                    
       Positive SD Square Sum:76              Positive DS Square Sum:156            
       Min Negative SD:1                      Min Negative DS:1                     
       Max Negative SD:24                     Max Negative DS:25                    
       Negative SD Number:72                  Negative DS Number:82                 
       Negative SD Sum:271                    Negative DS Sum:287                   
       Negative SD Square Sum:4849            Negative DS Square Sum:4937           
       Min Delay SD:0                         Min Delay DS:0                        
       Max Delay SD:203                       Max Delay DS:204                      
       Delay SD Square Sum:254974             Delay DS Square Sum:257072            
       Packet Loss SD:0                       Packet Loss DS:0                      
       Packet Loss Unknown:81                 Average of Jitter:2                   
       Average of Jitter SD:2                 Average of Jitter DS:2                
       jitter out value:0.0000000             jitter in value:0.0000000             
       NumberOfOWD:919                        Packet Loss Ratio:8 %                 
       OWD SD Sum:1834                        OWD DS Sum:1848                       
       ICPIF value:23                         MOS-CQ value:354                      
       TimeStamp unit:ms       
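
    In this sample output, the displayed packet loss ratio follows directly from the probe counters (81 of the 1000 probes timed out):

     Packet Loss Ratio = (SendProbe - ResponseProbe)/SendProbe = (1000 - 919)/1000 = 8.1%, displayed as 8 %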
    

Configuration Files

  • DeviceA configuration file

    #
    sysname DeviceA
    #
     nqa-server udpecho 10.1.1.1 4000
    #
    isis 1
     network-entity 00.0000.0000.0001.00  
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1 
    #
    return
  • DeviceD configuration file

    #
    sysname DeviceD
    #
    nqa-jitter tag-version 2
    #
    isis 1
     network-entity 00.0000.0000.0002.00  
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.2.2.1 255.255.255.0
     isis enable 1 
    #
    nqa test-instance admin udpjitter
     test-type jitter
     destination-address ipv4 10.1.1.1
     destination-port 4000
     jitter-codec g711a      
    #
    return

Example for Configuring NQA to Measure the VoIPv6 Service Jitter

This section provides an example for configuring an NQA UDP jitter test instance to measure the Voice over IPv6 (VoIPv6) service jitter.

Networking Requirements

On the network shown in Figure 18-103, the headquarters often holds teleconferences with its subsidiary through VoIPv6. It is required that the round-trip delay be less than 250 ms and the jitter be less than 20 ms. An NQA UDP jitter test can be configured to simulate a VoIPv6 service and measure the service jitter.

Figure 18-104 Configuring NQA to measure the VoIPv6 service jitter
  • In this example, interface1 represents GE0/1/0.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure DeviceD as the NQA client and DeviceA as the NQA server. Configure a UDP jitter test instance on DeviceD.

  2. Start the test instance.

Data Preparation

To complete the configuration, you need the following data:

  • IPv6 addresses of DeviceA and DeviceD

  • Code type of the simulated VoIPv6 service

Procedure

  1. Configure DeviceA (NQA server).

    <DeviceA> system-view
    [~DeviceA] nqa-server udpecho ipv6 2001:db8:1::1 4000
    [*DeviceA] commit

  2. Configure DeviceD (NQA client).
    1. Configure a packet version for the UDP jitter test instance to be created.

      <DeviceD> system-view
      [~DeviceD] nqa-jitter tag-version 2

    2. Create a UDP jitter test instance and set the destination address of the test instance to the IPv6 address of DeviceA.

      [*DeviceD] nqa test-instance admin udpjitter
      [*DeviceD-nqa-admin-udpjitter] test-type jitter
      [*DeviceD-nqa-admin-udpjitter] destination-address ipv6 2001:db8:1::1
      [*DeviceD-nqa-admin-udpjitter] destination-port 4000

    3. Specify a code type for the simulated VoIPv6 service.

      [*DeviceD-nqa-admin-udpjitter] jitter-codec g711a

  3. Start the test instance.

    [*DeviceD-nqa-admin-udpjitter] start now
    [*DeviceD-nqa-admin-udpjitter] commit

  4. Verify the test result. The command output shows that the round-trip delay is less than 250 ms, and the jitter is less than 20 ms.

    [~DeviceD-nqa-admin-udpjitter] display nqa results test-instance admin udpjitter
    NQA entry(admin, udpjitter) :testflag is active ,testtype is jitter             
      1 . Test 1 result   The test is finished                                      
       SendProbe:1000                         ResponseProbe:919                     
       Completion:success                     RTD OverThresholds number:0           
       OWD OverThresholds SD number:0         OWD OverThresholds DS number:0        
       Min/Max/Avg/Sum RTT:1/408/5/4601       RTT  Square Sum:1032361               
       NumOfRTT:919                           Drop operation number:0               
       Operation sequence errors number:0     RTT Stats errors number:0             
       System busy operation number:0         Operation timeout number:81           
       Min Positive SD:1                      Min Positive DS:1                     
       Max Positive SD:2                      Max Positive DS:9                     
       Positive SD Number:67                  Positive DS Number:70                 
       Positive SD Sum:70                     Positive DS Sum:80                    
       Positive SD Square Sum:76              Positive DS Square Sum:156            
       Min Negative SD:1                      Min Negative DS:1                     
       Max Negative SD:24                     Max Negative DS:25                    
       Negative SD Number:72                  Negative DS Number:82                 
       Negative SD Sum:271                    Negative DS Sum:287                   
       Negative SD Square Sum:4849            Negative DS Square Sum:4937           
       Min Delay SD:0                         Min Delay DS:0                        
       Max Delay SD:203                       Max Delay DS:204                      
       Delay SD Square Sum:254974             Delay DS Square Sum:257072            
       Packet Loss SD:0                       Packet Loss DS:0                      
       Packet Loss Unknown:81                 Average of Jitter:2                   
       Average of Jitter SD:2                 Average of Jitter DS:2                
       jitter out value:0.0000000             jitter in value:0.0000000             
       NumberOfOWD:919                        Packet Loss Ratio:8 %                 
       OWD SD Sum:1834                        OWD DS Sum:1848                       
       ICPIF value:23                         MOS-CQ value:354                      
       TimeStamp unit:ms       

Configuration Files

  • DeviceA configuration file

    #
     sysname DeviceA
    #
     nqa-server udpecho ipv6 2001:db8:1::1 4000
    #
    isis 1
     network-entity 00.0000.0000.0001.00  
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     ipv6 address 2001:db8:1::1/128
     isis enable 1 
    #
    return
  • DeviceD configuration file

    #
     sysname DeviceD
    #
    nqa-jitter tag-version 2
    #
    isis 1
     network-entity 00.0000.0000.0002.00  
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     ipv6 address 2001:db8:2::1/128
     isis enable 1 
    #
    nqa test-instance admin udpjitter
     test-type jitter
     destination-address ipv6 2001:db8:1::1
     destination-port 4000
     jitter-codec g711a
    #
    return

Example for Configuring an NQA LSP Ping Test to Check MPLS Network Connectivity

This section provides an example for configuring an NQA LSP ping test to check MPLS network connectivity.

Networking Requirements

On the MPLS network shown in Figure 18-105, DeviceA and DeviceC are PEs. It is required that the connectivity between DeviceA and DeviceC be checked periodically.

Figure 18-105 Configuring an NQA LSP ping test to check MPLS network connectivity
  • In this example, interfaces 1 and 2 represent GE0/1/0 and GE0/2/0, respectively.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Create an LSP ping test instance on DeviceA.
  2. Start the test instance.

Data Preparation

To complete the configuration, you need the IP addresses of DeviceA and DeviceC.

Procedure

  1. Create an LSP ping test instance.

    <DeviceA> system-view
    [~DeviceA] nqa test-instance admin lspping
    [*DeviceA-nqa-admin-lspping] test-type lspping
    [*DeviceA-nqa-admin-lspping] lsp-type ipv4
    [*DeviceA-nqa-admin-lspping] destination-address ipv4 3.3.3.9 lsp-masklen 32

  2. Start the test instance.

    [*DeviceA-nqa-admin-lspping] start now
    [*DeviceA-nqa-admin-lspping] commit

  3. Verify the test result.

    [~DeviceA-nqa-admin-lspping] display nqa results test-instance admin lspping
     NQA entry(admin, lspping) :testFlag is inactive ,testtype is  lspping
      1 . Test 1 result   The test is finished
       Send operation times: 3              Receive response times: 3
       Completion:success                   RTD OverThresholds number: 0
       Attempts number:1                    Drop operation number:0
       Disconnect operation number:0        Operation timeout number:0
       System busy operation number:0       Connection fail number:0
       Operation sequence errors number:0   RTT Stats errors number:0
       Destination ip address:3.3.3.9
       Min/Max/Average Completion Time: 1/1/1
       Sum/Square-Sum  Completion Time: 3/3
       Last Good Probe Time: 2007-1-30 15:32:56.1
       Lost packet ratio:0% 

  4. Configure the device to perform the test at 10:00 every day.

    [*DeviceA-nqa-admin-lspping] stop
    [*DeviceA-nqa-admin-lspping] start daily 10:00:00 to 10:30:00  
    [*DeviceA-nqa-admin-lspping] commit
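
    Within the scheduled window, you can also control how often the test instance is automatically executed by setting a frequency value. A minimal sketch, assuming that one execution per minute is sufficient (the value is illustrative):

    [~DeviceA-nqa-admin-lspping] frequency 60
    [*DeviceA-nqa-admin-lspping] commit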

Configuration Files

  • DeviceA configuration file

    #
     sysname DeviceA
    #
    mpls lsr-id 1.1.1.9
    #
    mpls
    #
    mpls ldp
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     mpls
     mpls ldp
    #
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
    #
    ospf 1
     area 0.0.0.0
      network 10.1.1.0 0.0.0.255
      network 1.1.1.9 0.0.0.0
    #
    nqa test-instance admin lspping
     test-type lspping
     destination-address ipv4 3.3.3.9 lsp-masklen 32
     start daily 10:00:00 to 10:30:00  
    #
    return
  • DeviceB configuration file

    #
     sysname DeviceB
    #
    mpls lsr-id 2.2.2.9
    #
    mpls
    #
    mpls ldp
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     mpls
     mpls ldp
    #
    interface GigabitEthernet0/2/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
     mpls
     mpls ldp
    #
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
    #
    ospf 1
     area 0.0.0.0
      network 2.2.2.9 0.0.0.0
      network 10.1.1.0 0.0.0.255
      network 10.2.1.0 0.0.0.255
    #
    return
  • DeviceC configuration file

    #
     sysname DeviceC
    #
    mpls lsr-id 3.3.3.9
    #
    mpls
    #
    mpls ldp
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
     mpls
     mpls ldp
    #
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
    #
    ospf 1
     area 0.0.0.0
      network 3.3.3.9 0.0.0.0
      network 10.2.1.0 0.0.0.255
    #
    return

Example for Configuring an NQA PWE3 Ping Test to Check PW Connectivity on a VPWS Network

This section provides an example for configuring an NQA PWE3 ping test to check PW connectivity on a virtual private wire service (VPWS) network.

Networking Requirements

On the network shown in Figure 18-106, CE-A and CE-B run PPP to access U-PE1 and U-PE2, respectively. U-PE1 and U-PE2 are connected over an MPLS backbone network. A dynamic multi-segment PW between U-PE1 and U-PE2 is established over a label switched path (LSP), with an S-PE functioning as a switching node.

The PWE3 ping function can be configured to check the connectivity of the multi-segment PW between U-PE1 and U-PE2.

Figure 18-106 VPWS networking
  • In this example, interfaces 1 and 2 represent GE0/1/0 and GE0/2/0, respectively.

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure an IGP on the backbone network for communication between routers on the network.

  2. Enable basic MPLS functions over the backbone network and set up LSPs. Establish a remote MPLS Label Distribution Protocol (LDP) peer relationship between U-PE1 and S-PE, and between U-PE2 and S-PE.

  3. Set up an MPLS Layer 2 virtual circuit (L2VC) connection between U-PEs.

  4. Set up a switched PW on the switching node S-PE.

  5. Configure a PWE3 ping test instance for the multi-segment PW on U-PE1.

Data Preparation

To complete the configuration, you need the following data:

  • Different L2VC IDs of U-PE1 and U-PE2

  • MPLS LSR IDs of U-PE1, S-PE, and U-PE2

  • IP address of the remote peer

  • Encapsulation type of the switched PW

  • Name of the PW template configured on each U-PE and template parameters

Procedure

  1. Configure a dynamic multi-segment PW.

    Configure a dynamic multi-segment PW on the MPLS backbone network.

    For details, see "VPWS Configuration" in the HUAWEI NetEngine 8100 M14/M8, NetEngine 8000 M14K/M14/M8K/M8/M4 & NetEngine 8000E M14/M8 series Configuration Guide > VPN.

  2. Configure a PWE3 ping test instance for the multi-segment PW.

    # Configure U-PE1.

    [*U-PE1] nqa test-instance test pwe3ping
    [*U-PE1-nqa-test-pwe3ping] test-type pwe3ping
    [*U-PE1-nqa-test-pwe3ping] local-pw-id 100
    [*U-PE1-nqa-test-pwe3ping] local-pw-type ppp
    [*U-PE1-nqa-test-pwe3ping] label-type control-word
    [*U-PE1-nqa-test-pwe3ping] remote-pw-id 200
    [*U-PE1-nqa-test-pwe3ping] commit
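
    The local and remote PW IDs specified here correspond to the L2VC IDs configured for the PW on the U-PEs. For reference, the related lines from the configuration files at the end of this example are:

     U-PE1:  mpls l2vc 3.3.3.9 pw-template wwt 100    (local PW ID 100)
     U-PE2:  mpls l2vc 3.3.3.9 pw-template wwt 200    (remote PW ID 200)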

  3. Start the test instance.

    [*U-PE1-nqa-test-pwe3ping] start now
    [*U-PE1-nqa-test-pwe3ping] commit

  4. Verify the test result.

    Run the display nqa results command on the PE. The command output shows that the test is successful.

    [~U-PE1-nqa-test-pwe3ping] display nqa results
    NQA entry(test, pwe3ping) :testflag is inactive ,testtype is pwe3ping
      1 . Test 1 result   The test is finished 
       SendProbe:3                            ResponseProbe:3                       
       Completion:success                     RTD OverThresholds number:0           
       OWD OverThresholds SD number:0         OWD OverThresholds DS number:0        
       Min/Max/Avg/Sum RTT:3/5/4/11           RTT Square Sum:43                     
       NumOfRTT:3                             Drop operation number:0               
       Operation sequence errors number:0     RTT Status errors number:0            
       System busy operation number:0         Operation timeout number:0            
       Min Positive SD:0                      Min Positive DS:1                     
       Max Positive SD:0                      Max Positive DS:1                     
       Positive SD Number:0                   Positive DS Number:1                  
       Positive SD Sum:0                      Positive DS Sum:1                     
       Positive SD Square Sum:0               Positive DS Square Sum:1              
       Min Negative SD:1                      Min Negative DS:1                     
       Max Negative SD:2                      Max Negative DS:1                     
       Negative SD Number:2                   Negative DS Number:1                  
       Negative SD Sum:3                      Negative DS Sum:1                     
       Negative SD Square Sum:5               Negative DS Square Sum:1              
       Min Delay SD:0                         Min Delay DS:0                        
       Max Delay SD:0                         Max Delay DS:0                        
       Delay SD Square Sum:0                  Delay DS Square Sum:0                 
       Packet Loss SD:0                       Packet Loss DS:0                      
       Packet Loss Unknown:0                  Average of Jitter:1                   
       Average of Jitter SD:1                 Average of Jitter DS:1                
       Jitter out value:0.1015625             Jitter in value:0.0611979             
       NumberOfOWD:0                          Packet Loss Ratio:0 %                 
       OWD SD Sum:0                           OWD DS Sum:0                          
       ICPIF value:0                          MOS-CQ value:0                        
       Attempts number:1                      Disconnect operation number:0         
       Connection fail number:0               Destination ip address:10.4.1.2        
       Last Good Probe Time: 2016-11-15 20:33:43.8

Configuration Files

  • CE-A configuration file

    #
     sysname CE-A
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.10.1.1 255.255.255.0
    #
     return
  • U-PE1 configuration file

    #
     sysname U-PE1
    #
     mpls lsr-id 1.1.1.9
     mpls
    #
     mpls l2vpn
    #
     mpls ldp
    #
     mpls ldp remote-peer 3.3.3.9
     remote-ip 3.3.3.9  
    #
     pw-template wwt
     peer-address 3.3.3.9
     control-word
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     mpls l2vc 3.3.3.9 pw-template wwt 100   
    #
     interface GigabitEthernet0/2/0
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     mpls
     mpls ldp
    #
     interface LoopBack0
     ip address 1.1.1.9 255.255.255.255
    #
     nqa test-instance test pwe3ping
     test-type pwe3ping
     local-pw-id 100
     local-pw-type ppp
     remote-pw-id 200
    #
     ospf 1
     area 0.0.0.0
     network 10.1.1.0 0.0.0.255
     network 1.1.1.9 0.0.0.0
    #
     return
  • P1 configuration file

    #
     sysname P1
    #
     mpls lsr-id 2.2.2.9
     mpls
    #
     mpls ldp
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     mpls
     mpls ldp
    #
     interface GigabitEthernet0/2/0
     undo shutdown
     ip address 10.2.1.1 255.255.255.0
     mpls
     mpls ldp
    #
     interface LoopBack0
     undo shutdown
     ip address 2.2.2.9 255.255.255.255
    #
     ospf 1
     area 0.0.0.0
     network 2.2.2.9 0.0.0.0
     network 10.1.1.0 0.0.0.255
     network 10.2.1.0 0.0.0.255
    #
     return
  • S-PE configuration file

    #
     sysname S-PE
    #
     mpls lsr-id 3.3.3.9
     mpls
    #
     mpls l2vpn
    #
     mpls switch-l2vc 5.5.5.9 200 between 1.1.1.9 100 encapsulation ethernet
    #
     mpls ldp
    #
     mpls ldp remote-peer 1.1.1.9
     remote-ip 1.1.1.9
    #
     mpls ldp remote-peer 5.5.5.9
     remote-ip 5.5.5.9
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.2.1.2 255.255.255.0
     mpls
     mpls ldp
    #
     interface GigabitEthernet0/2/0
     undo shutdown
     ip address 10.3.1.1 255.255.255.0
     mpls
     mpls ldp
    #
     interface LoopBack0
     undo shutdown
     ip address 3.3.3.9 255.255.255.255
    #
     ospf 1
     area 0.0.0.0
     network 3.3.3.9 0.0.0.0
     network 10.2.1.0 0.0.0.255
     network 10.3.1.0 0.0.0.255
    #
     return
  • P2 configuration file

    #
     sysname P2
    #
     mpls lsr-id 4.4.4.9
     mpls
    #
     mpls ldp
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.3.1.2 255.255.255.0
     mpls
     mpls ldp
    #
     interface GigabitEthernet0/2/0
     undo shutdown
     ip address 10.4.1.1 255.255.255.0
     mpls
     mpls ldp
    #
     interface LoopBack0
     undo shutdown
     ip address 4.4.4.9 255.255.255.255
    #
     ospf 1
     area 0.0.0.0
     network 4.4.4.9 0.0.0.0
     network 10.3.1.0 0.0.0.255
     network 10.4.1.0 0.0.0.255
     #
      return
  • U-PE2 configuration file

    #
     sysname U-PE2
    #
     mpls lsr-id 5.5.5.9
     mpls
    #
     mpls l2vpn
    #
     mpls ldp
    #
     mpls ldp remote-peer 3.3.3.9
     remote-ip 3.3.3.9
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.4.1.2 255.255.255.0
     mpls
     mpls ldp
    #
     pw-template wwt
     peer-address 3.3.3.9
     control-word
    #
     interface GigabitEthernet0/2/0
     undo shutdown
     mpls l2vc 3.3.3.9 pw-template wwt 200   
    #
     interface LoopBack0
     ip address 5.5.5.9 255.255.255.255
    #
     ospf 1
     area 0.0.0.0
     network 5.5.5.9 0.0.0.0
     network 10.4.1.0 0.0.0.255
    #
     return
  • CE-B configuration file

    #
     sysname CE-B
    #
     interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.10.1.2 255.255.255.0
    #
     return

Example for Configuring an NQA Test Group

You can monitor the status of multiple links by binding an NQA test group to multiple NQA test instances.

Networking Requirements

As shown in Figure 18-107, default routes are configured on DeviceA to forward traffic received from DeviceC to DeviceB1 and DeviceB2. The default routes are associated with an NQA test group that is bound to ICMP test instances test1 and test2 on DeviceA.

Figure 18-107 NQA test group detection

In this example, interface1 and interface2 represent GE 0/1/1 and GE 0/1/2, respectively.


Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure NQA test instances and start the tests.
  2. Configure an NQA test group.

  3. Bind the NQA test group to test instances.

Data Preparation

To complete the configuration, you need the following data:

  • IP addresses of DeviceB1 and DeviceB2.

Procedure

  1. Configure NQA test instances and start the tests.

    <DeviceA> system-view
    [~DeviceA] nqa test-instance admin1 test1
    [~DeviceA-nqa-admin1-test1] test-type icmp
    [*DeviceA-nqa-admin1-test1] destination-address ipv4 10.1.1.1
    [*DeviceA-nqa-admin1-test1] frequency 15
    [*DeviceA-nqa-admin1-test1] start now
    [*DeviceA-nqa-admin1-test1] commit
    [~DeviceA-nqa-admin1-test1] quit
    [~DeviceA] nqa test-instance admin2 test2
    [~DeviceA-nqa-admin2-test2] test-type icmp
    [*DeviceA-nqa-admin2-test2] destination-address ipv4 10.2.2.2
    [*DeviceA-nqa-admin2-test2] frequency 15
    [*DeviceA-nqa-admin2-test2] start now
    [*DeviceA-nqa-admin2-test2] commit
    [~DeviceA-nqa-admin2-test2] quit

  2. Configure an NQA test group.

    [~DeviceA] nqa group group1
    [*DeviceA-nqa-group-group1] operator and
    [*DeviceA-nqa-group-group1] description this is an nqa group
    [*DeviceA-nqa-group-group1] commit

  3. Bind the NQA test group to test instances.

    [~DeviceA-nqa-group-group1] nqa test-instance admin1 test1
    [*DeviceA-nqa-group-group1] nqa test-instance admin2 test2
    [*DeviceA-nqa-group-group1] commit
    [~DeviceA-nqa-group-group1] quit
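
    The networking requirements associate DeviceA's default routes with this test group, and that association is not shown in the configuration files of this example. The following is a minimal sketch, assuming your software version supports referencing an NQA test group from a static route through a track nqa-group keyword (check the static route command reference for the exact form); the next hops are the addresses of DeviceB1 and DeviceB2:

    [~DeviceA] ip route-static 0.0.0.0 0.0.0.0 10.1.1.1 track nqa-group group1
    [*DeviceA] ip route-static 0.0.0.0 0.0.0.0 10.2.2.2 track nqa-group group1
    [*DeviceA] commit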

  4. Verify the configuration.

    [~DeviceA] display nqa group
    NQA-group information:
    ------------------------------------------------------------------------
    NQA-group group1
    Status: DOWN       Operator: AND     
    ------------------------------------------------------------------------
    Admin-name                       Test-name                        Status
    ------------------------------------------------------------------------
    admin1                           test1                            DOWN    
    admin2                           test2                            DOWN    

Configuration Files

  • DeviceA configuration file

    #
    sysname DeviceA
    #
    interface GigabitEthernet0/1/1
     ip address 10.1.1.2 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     ip address 10.2.2.1 255.255.255.0
    #
    nqa test-instance admin1 test1
     test-type icmp
     destination-address ipv4 10.1.1.1
     frequency 15
     start now
    #
    nqa test-instance admin2 test2
     test-type icmp
     destination-address ipv4 10.2.2.2
     frequency 15
     start now
    #
    nqa group group1
     description this is an nqa group
     operator and
     nqa test-instance admin1 test1
     nqa test-instance admin2 test2
    #
    return
  • DeviceB1 configuration file
    #
    sysname DeviceB1
    #
    interface GigabitEthernet0/1/1
     ip address 10.1.1.1 255.255.255.0
    #
    return
  • DeviceB2 configuration file
    #
    sysname DeviceB2
    #
    interface GigabitEthernet0/1/1
     ip address 10.2.2.2 255.255.255.0
    #
    return

Example for Configuring a Generalflow Test in a Native Ethernet Scenario (RFC 2544)

This section describes how to configure a generalflow test to measure the performance of a native Ethernet network.

Networking Requirements

On the network shown in Figure 18-108, the performance of the Ethernet virtual connection (EVC) between DeviceA and DeviceB is measured.

Figure 18-108 Generalflow test in a native Ethernet scenario

In this example, interfaces 1 and 2 represent GE0/1/1 and GE0/1/2, respectively.


Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure DeviceB as the reflector and configure it to reflect traffic with a specified destination MAC address through GE0/1/1 (reflector interface) to the initiator.
  2. Configure DeviceA as the initiator and configure it to perform throughput, delay, and packet loss rate tests.
  3. Add the interfaces of DeviceC (intermediate device) to a specified VLAN.

Data Preparation

To complete the configuration, you need the following data:

  • Configurations on DeviceB (reflector): MAC address of UNI-B (00e0-fc12-3456)
  • Configurations on DeviceA (initiator):

    • Destination MAC address: 00e0-fc12-3456 (MAC address of UNI-B)
    • Throughput test parameters: upper and lower thresholds of the packet sending rate (100000 kbit/s and 10000 kbit/s, respectively), throughput precision (1000 kbit/s), packet loss precision (81/10000), timeout period of each packet sending rate (5s), packet sending size (70 bytes), and test instance execution duration (100s)
    • Delay test parameters: packet rate (99000 kbit/s), test duration (100s), and interval (5s) at which the initiator sends test packets
    • Packet loss rate test parameters: packet rate (99000 kbit/s) and test duration (100s)

Procedure

  1. Configure reachable Layer 2 links between the initiator and reflector and add Layer 2 interfaces to VLAN 10.
  2. Configure the reflector.

    <DeviceB> system-view
    [~DeviceB] nqa reflector 1 interface gigabitethernet 0/1/1 mac 00e0-fc12-3456 vlan 10

  3. Configure the initiator to perform a throughput test and check the test results.

    <DeviceA> system-view
    [~DeviceA] nqa test-instance admin throughput
    [*DeviceA-nqa-admin-throughput] test-type generalflow
    [*DeviceA-nqa-admin-throughput] measure throughput
    [*DeviceA-nqa-admin-throughput] destination-address mac 00e0-fc12-3456
    [*DeviceA-nqa-admin-throughput] forwarding-simulation inbound-interface gigabitethernet 0/1/1
    [*DeviceA-nqa-admin-throughput] rate 10000 100000
    [*DeviceA-nqa-admin-throughput] interval seconds 5
    [*DeviceA-nqa-admin-throughput] precision 1000
    [*DeviceA-nqa-admin-throughput] fail-ratio 81
    [*DeviceA-nqa-admin-throughput] datasize 70
    [*DeviceA-nqa-admin-throughput] duration 100
    [*DeviceA-nqa-admin-throughput] vlan 10
    [*DeviceA-nqa-admin-throughput] start now
    [*DeviceA-nqa-admin-throughput] commit
    [~DeviceA-nqa-admin-throughput] display nqa results test-instance admin throughput
    
    NQA entry(admin, throughput) :testflag is inactive ,testtype is generalflow
      1 . Test 1 result: The test is finished, test mode is throughput
       Total time:17s, path-learning time:1s, test time:13s
       Ethernet frame rate: utilized line rate(L1 rate)
       ID Size Throughput(Kbps) Precision(Kbps) LossRatio    Completion
       1  70   100000           1000            0.0000000%   success
       Start time: 2023-09-04 18:25:02.6
       End   time: 2023-09-04 18:25:19.9
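
    In this sample output, the reported throughput equals the configured upper rate bound (100000 kbit/s) with a measured loss ratio of 0.0000000%, which is within the tolerated loss ratio derived from the fail-ratio setting (expressed in units of 1/10000, per the packet loss precision listed in the data preparation):

     Tolerated loss ratio = fail-ratio/10000 = 81/10000 = 0.81%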

  4. Configure the initiator to perform a delay test and check the test results.

    [*DeviceA] nqa test-instance admin delay
    [*DeviceA-nqa-admin-delay] test-type generalflow
    [*DeviceA-nqa-admin-delay] measure delay
    [*DeviceA-nqa-admin-delay] destination-address mac 00e0-fc12-3456
    [*DeviceA-nqa-admin-delay] forwarding-simulation inbound-interface gigabitethernet 0/1/1
    [*DeviceA-nqa-admin-delay] datasize 64
    [*DeviceA-nqa-admin-delay] rate 99000
    [*DeviceA-nqa-admin-delay] interval seconds 5
    [*DeviceA-nqa-admin-delay] duration 100
    [*DeviceA-nqa-admin-delay] vlan 10
    [*DeviceA-nqa-admin-delay] start now
    [*DeviceA-nqa-admin-delay] commit
    [~DeviceA-nqa-admin-delay] display nqa results test-instance admin delay
    NQA entry(admin, delay) :testflag is inactive ,testtype is generalflow
      1 . Test 1 result: The test is finished, test mode is delay
       Total time:110s, path-learning time:1s, test time:106s
       ID Size Min/Max/Avg RTT(us)     Min/Max/Avg Jitter(us)  Completion
       1  64   149/162/154             0/11/3                  finished
       Start time: 2023-09-04 19:55:30.5
       End   time: 2023-09-04 19:57:20.6

  5. Configure the initiator to perform a packet loss rate test and check the test results.

    [*DeviceA] nqa test-instance admin loss
    [*DeviceA-nqa-admin-loss] test-type generalflow
    [*DeviceA-nqa-admin-loss] measure loss
    [*DeviceA-nqa-admin-loss] destination-address mac 00e0-fc12-3456
    [*DeviceA-nqa-admin-loss] forwarding-simulation inbound-interface gigabitethernet 0/1/1
    [*DeviceA-nqa-admin-loss] datasize 64
    [*DeviceA-nqa-admin-loss] rate 99000
    [*DeviceA-nqa-admin-loss] duration 100
    [*DeviceA-nqa-admin-loss] vlan 10
    [*DeviceA-nqa-admin-loss] start now
    [*DeviceA-nqa-admin-loss] commit
    [~DeviceA-nqa-admin-loss] display nqa results test-instance admin loss
    
    NQA entry(admin, loss) :testflag is inactive ,testtype is generalflow
      1 . Test 1 result: The test is finished, test mode is loss
       Total time:112s, path-learning time:1s, test time:108s
       Ethernet frame rate: utilized line rate(L1 rate)
       ID Size TxRate/RxRate(Kbps) TxCount/RxCount             LossRatio    Completion
       1  64   99000/99000         29464286/29464286           0.0000000%   finished
       Start time: 2023-09-04 19:29:37.0
       End   time: 2023-09-04 19:31:28.6

Configuration Files

  • DeviceA configuration file

    #
    sysname DeviceA
    # 
    vlan 10
    #
    interface GigabitEthernet 0/1/1
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    interface GigabitEthernet 0/1/2
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    nqa test-instance admin throughput
     test-type generalflow
     duration 100
     measure throughput
     fail-ratio 81
     destination-address mac 00e0-fc12-3456
     datasize 70
     rate 10000 100000
     precision 1000
     forwarding-simulation inbound-interface GigabitEthernet0/1/1
     vlan 10
    nqa test-instance admin loss
     test-type generalflow
     duration 100
     measure loss
     destination-address mac 00e0-fc12-3456
     datasize 64
     rate 99000
     forwarding-simulation inbound-interface GigabitEthernet0/1/1
     vlan 10
    nqa test-instance admin delay
     test-type generalflow
     duration 100
     measure delay
     interval seconds 5
     destination-address mac 00e0-fc12-3456
     datasize 64
     rate 99000
     forwarding-simulation inbound-interface GigabitEthernet0/1/1
     vlan 10
    #
    return
  • DeviceB configuration file

    #
    sysname DeviceB
    # 
    vlan 10
    nqa reflector 1 interface gigabitethernet 0/1/1 mac 00e0-fc12-3456 vlan 10
    #
    interface GigabitEthernet 0/1/1
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    interface GigabitEthernet 0/1/2
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    return
  • DeviceC configuration file

    #
    sysname DeviceC
    # 
    vlan 10
    #
    interface GigabitEthernet 0/1/1
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    interface GigabitEthernet 0/1/2
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    return

Example for Configuring a Generalflow Test in an IP Gateway Scenario (RFC 2544)

This section describes how to configure a generalflow test to measure the performance of an Ethernet network in an IP gateway scenario.

Usage Scenario

A generalflow test needs to be configured to measure the performance of the Ethernet network shown in Figure 18-109 between DeviceA and DeviceB (IP gateway).

Figure 18-109 Configuring a generalflow test in a Layer 2 accessing Layer 3 scenario

In this example, interfaces 1 and 2 represent GE0/1/1 and GE0/1/2, respectively.



Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure DeviceA as the reflector.
  2. Configure DeviceB as the initiator and configure it to perform a delay test.

Data Preparation

To complete the configuration, you need the following data:

  • Configurations on DeviceA (reflector): simulated IP address 10.1.1.1 (CE's IP address) and reflector interface (GE0/1/1).
  • Configurations on DeviceB (initiator):

    • Destination IP address: 10.1.1.1 (IP address of the CE interface connected to the reflector's GE0/1/1)

    • Source IP address: an address that resides on the same network segment as the IP address of the initiator
    • Delay test parameters: packet rate (99000 kbit/s), test duration (100s), and interval (5s) for sending test packets

Procedure

  1. Configure the CE and DeviceB to communicate through the Layer 2 device between them and ensure that the Layer 3 link is reachable.
  2. Configure the reflector.

    [*DeviceA] vlan 10
    [*DeviceA-vlan10] commit
    [~DeviceA-vlan10] quit
    [~DeviceA] nqa reflector 1 interface GigabitEthernet 0/1/1 ipv4 10.1.1.1 vlan 10 
    [*DeviceA] commit

  3. Configure the initiator to perform a delay test and check the test results.

    [*DeviceB] vlan 10
    [*DeviceB-vlan10] commit
    [~DeviceB-vlan10] quit
    [~DeviceB] interface gigabitethernet 0/1/2.1
    [*DeviceB-GigabitEthernet0/1/2.1] vlan-type dot1q 10
    [*DeviceB-GigabitEthernet0/1/2.1] ip address 10.1.1.2 24
    [*DeviceB-GigabitEthernet0/1/2.1] quit
    [*DeviceB] arp static 10.1.1.1 00e0-fc12-3456 vid 10 interface GigabitEthernet 0/1/2.1
    [*DeviceB] nqa test-instance admin delay
    [*DeviceB-nqa-admin-delay] test-type generalflow
    [*DeviceB-nqa-admin-delay] measure delay
    [*DeviceB-nqa-admin-delay] destination-address ipv4 10.1.1.1
    [*DeviceB-nqa-admin-delay] source-address ipv4 10.1.1.2
    [*DeviceB-nqa-admin-delay] source-interface gigabitethernet 0/1/2.1
    [*DeviceB-nqa-admin-delay] rate 99000
    [*DeviceB-nqa-admin-delay] interval seconds 5
    [*DeviceB-nqa-admin-delay] duration 100
    [*DeviceB-nqa-admin-delay] start now
    [*DeviceB-nqa-admin-delay] commit
    [~DeviceB-nqa-admin-delay] display nqa results test-instance admin delay
    
    NQA entry(admin, delay) :testflag is inactive ,testtype is generalflow
      1 . Test 1 result: The test is finished, test mode is delay
       Total time:751s, path-learning time:1s, test time:747s
       ID Size Min/Max/Avg RTT(us)     Min/Max/Avg Jitter(us)  Completion
       1  64   97/107/102              0/6/3                   finished
       2  128  97/106/100              0/6/2                   finished
       3  256  96/103/100              0/6/2                   finished
       4  512  98/109/102              0/8/2                   finished
       5  1024 100/106/103             0/4/1                   finished
       6  1280 103/109/105             0/5/2                   finished
       7  1518 105/110/107             0/3/1                   finished
       Start time: 2024-02-21 15:06:54.8
       End   time: 2024-02-21 15:19:26.2

Configuration Files

  • DeviceA configuration file

    #
    sysname DeviceA
    # 
    vlan 10
    #
    nqa reflector 1 interface GigabitEthernet 0/1/1 ipv4 10.1.1.1 vlan 10
    #
    interface GigabitEthernet 0/1/1
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    interface GigabitEthernet 0/1/2
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    return
  • DeviceB configuration file

    #
    sysname DeviceB
    #
    vlan 10
    #
    interface GigabitEthernet 0/1/1
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    interface GigabitEthernet 0/1/2.1
     vlan-type dot1q 10
     ip address 10.1.1.2 255.255.255.0
    #
    arp static 10.1.1.1 00e0-fc12-3456 vid 10 interface GigabitEthernet 0/1/2.1
    nqa test-instance admin delay
     test-type generalflow
     destination-address ipv4 10.1.1.1
     source-address ipv4 10.1.1.2
     duration 100
     measure delay
     interval seconds 5
     rate 99000
     source-interface GigabitEthernet 0/1/2.1
    #
    return
  • DeviceC configuration file

    #
    sysname DeviceC
    # 
    vlan 10
    #
    interface GigabitEthernet 0/1/1
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    interface GigabitEthernet 0/1/2
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    return

Example for Configuring an Ethernet Service Activation Test in a Layer 2 Scenario (Y.1564)

This section provides an example for configuring an Ethernet service activation test in a Layer 2 scenario to check whether network performance meets SLAs.

Networking Requirements

On the network shown in Figure 18-110, it is required that Ethernet frame transmission between DeviceB and DeviceC be checked to determine whether the performance parameters meet SLAs.

Figure 18-110 Configuring an Ethernet service activation test in a Layer 2 scenario
  • In this example, interfaces 1 and 2 represent GE0/1/1 and GE0/1/2, respectively.


Configuration Roadmap

  1. Configure DeviceC as the reflector and configure it to perform flow-based traffic filtering, with the reflector interface being GE0/1/1.
  2. Configure DeviceB as the initiator and configure it to perform configuration and performance tests.

Data Preparation

To complete the configuration, you need the following data:

  • Configurations of the reflector (DeviceC): MAC address (00e0-fc12-3457) of GE0/1/1 on DeviceD connected to UNI B; the function to reflect packets based on flows
  • Configurations of the initiator (DeviceB)

    • Service flow configurations and characteristics:

      • Destination MAC address: 00e0-fc12-3457 (MAC address of GE0/1/1 on DeviceD connected to UNI B)
      • Source MAC address: 00e0-fc12-3456 (MAC address of GE0/1/1 on DeviceA connected to UNI A)
      • VLAN ID in single-tagged Ethernet packets: 10
      • UDP destination port number: 1234
      • UDP source port number: 5678
    • Bandwidth profile: 10000 kbit/s for both the CIR and EIR

    • Service acceptance criteria: 1000/100000 for the FLR and 1000 microseconds for both the FTD and FDV (see the note after this list)

    • Enabling of the simple CIR test, traffic policing test, and color mode

    • Ethernet service activation test instance
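
  For reference, these acceptance criteria map to the sac command used in the procedure as follows (the FLR value is expressed in units of 1/100000, and the FTD and FDV values in microseconds):

     sac flr 1000 ftd 1000 fdv 1000
     FLR threshold = 1000/100000 = 1%, FTD threshold = 1000 us, FDV threshold = 1000 us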

Procedure

  1. Configure a reachable link between the initiator and reflector and add Layer 2 interfaces to VLAN 10.
  2. Configure the reflector.

    [*DeviceC] nqa test-flow 1
    [*DeviceC-nqa-testflow-1] vlan 10
    [*DeviceC-nqa-testflow-1] udp destination-port 1234
    [*DeviceC-nqa-testflow-1] udp source-port 5678
    [*DeviceC-nqa-testflow-1] traffic-type mac destination 00e0-fc12-3457
    [*DeviceC-nqa-testflow-1] traffic-type mac source 00e0-fc12-3456
    [*DeviceC-nqa-testflow-1] quit
    [*DeviceC] nqa reflector 1 interface GigabitEthernet 0/1/1 test-flow 1 exchange-port agetime 0
    [*DeviceC] commit

  3. Configure the initiator to perform configuration and performance tests and check the test results.

    [*DeviceB] nqa test-flow 1
    [*DeviceB-nqa-testflow-1] vlan 10
    [*DeviceB-nqa-testflow-1] udp destination-port 1234
    [*DeviceB-nqa-testflow-1] udp source-port 5678
    [*DeviceB-nqa-testflow-1] cir simple-test enable
    [*DeviceB-nqa-testflow-1] bandwidth cir 10000 eir 10000
    [*DeviceB-nqa-testflow-1] sac flr 1000 ftd 1000 fdv 1000
    [*DeviceB-nqa-testflow-1] traffic-type mac destination 00e0-fc12-3457
    [*DeviceB-nqa-testflow-1] traffic-type mac source 00e0-fc12-3456
    [*DeviceB-nqa-testflow-1] traffic-policing test enable
    [*DeviceB-nqa-testflow-1] color-mode 8021p green 0 7 yellow 0 7
    [*DeviceB-nqa-testflow-1] quit
    [*DeviceB] nqa test-instance admin ethernet
    [*DeviceB-nqa-admin-ethernet] test-type ethernet-service
    [*DeviceB-nqa-admin-ethernet] forwarding-simulation inbound-interface GigabitEthernet 0/1/1
    [*DeviceB-nqa-admin-ethernet] test-flow 1
    [*DeviceB-nqa-admin-ethernet] start now
    [*DeviceB-nqa-admin-ethernet] commit
    [~DeviceB-nqa-admin-ethernet] display nqa results test-instance admin ethernet
    NQA entry(admin, ethernet) :testflag is inactive ,testtype is ethernet-service
      1 . Test 1 result   The test is finished                                      
       Status           : Pass                                                      
       Test-flow number : 1                                                         
       Mode             : Round-trip                                                
       Last step        : Performance-test                                          
       Estimated total time  :6
       Real test time        :6
       1 . Configuration-test                                                       
        Test-flow 1, CIR simple test                                                
         Begin                    : 2014-06-25 16:22:45.8                           
         End                      : 2014-06-25 16:22:48.8                           
         Status                   : Pass                                            
         Min/Max/Mean IR(kbit/s)    : 9961/10075/10012                                
         Min/Max/Mean FTD(us)     : 99/111/104                                      
         Min/Max/Mean FDV(us)     : 0/7/3                                           
         FL Count/FLR             : 0/0.000%
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%                                        
        Test-flow 1, CIR/EIR test, Green                                            
         Begin                    : 2014-06-25 16:23:15.8                           
         End                      : 2014-06-25 16:23:18.8                           
         Status                   : Pass                                            
         Min/Max/Mean IR(kbit/s)    : 9979/10054/10012                                
         Min/Max/Mean FTD(us)     : 101/111/105                                     
         Min/Max/Mean FDV(us)     : 0/10/3                                          
         FL Count/FLR             : 0/0.000%
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%                                        
        Test-flow 1, CIR/EIR test, Yellow                                           
         Begin                    : 2014-06-25 16:23:15.8                           
         End                      : 2014-06-25 16:23:18.8                           
         Status                   : --                                              
         Min/Max/Mean IR(kbit/s)    : 9979/10057/10013                                
         Min/Max/Mean FTD(us)     : 98/111/104                                      
         Min/Max/Mean FDV(us)     : 1/11/5                                          
         FL Count/FLR             : 0/0.000%                                        
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%
        Test-flow 1, Traffic policing test, Green                                   
         Begin                    : 2014-06-25 16:23:45.8                           
         End                      : 2014-06-25 16:23:48.8                           
         Status                   : Pass                                            
         Min/Max/Mean IR(kbit/s)    : 10039/10054/10045                               
         Min/Max/Mean FTD(us)     : 96/110/104                                      
         Min/Max/Mean FDV(us)     : 1/9/4                                           
         FL Count/FLR             : 0/0.000% 
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%                                       
        Test-flow 1, Traffic policing test, Yellow                                  
         Begin                    : 2014-06-25 16:23:45.8                           
         End                      : 2014-06-25 16:23:48.8                           
         Status                   : --                                              
         Min/Max/Mean IR(kbit/s)    : 12544/12566/12554                               
         Min/Max/Mean FTD(us)     : 101/111/105                                     
         Min/Max/Mean FDV(us)     : 1/8/3                                           
         FL Count/FLR             : 0/0.000%
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%                                        
       2 . Performance-test                                                         
        Test-flow 1, Performance-test                                               
         Begin                    : 2014-06-25 16:24:15.8                           
         End                      : 2014-06-25 16:39:15.8                           
         Status                   : Pass                                            
         Min/Max/Mean IR(kbit/s)    : 9888/10132/10004                                
         Min/Max/Mean FTD(us)     : 101/111/105                                     
         Min/Max/Mean FDV(us)     : 0/8/2                                           
         FL Count/FLR             : 0/0.000%
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%

Configuration Files

  • DeviceA configuration file

    #
    sysname DeviceA
    # 
    vlan batch 10
    #
    interface GigabitEthernet 0/1/1
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    return
  • DeviceB configuration file

    #
    sysname DeviceB
    # 
    vlan batch 10
    #
    interface GigabitEthernet 0/1/1
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    interface GigabitEthernet 0/1/2
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    nqa test-flow 1
     vlan 10
     udp destination-port 1234
     udp source-port 5678
     cir simple-test enable
     bandwidth cir 10000 eir 10000
     sac flr 1000 ftd 1000 fdv 1000
     traffic-type mac destination 00e0-fc12-3457
     traffic-type mac source 00e0-fc12-3456
     traffic-policing test enable
     color-mode 8021p green 0 7 yellow 0 7
    #
    nqa test-instance admin ethernet
     test-type ethernet-service
     forwarding-simulation inbound-interface GigabitEthernet 0/1/1
     test-flow 1
    #
    return
  • DeviceC configuration file

    #
    sysname DeviceC
    #
    vlan batch 10
    #
    interface GigabitEthernet 0/1/1
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    interface GigabitEthernet 0/1/2
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    nqa test-flow 1
     vlan 10
     udp destination-port 1234
     udp source-port 5678
     traffic-type mac destination 00e0-fc12-3457
     traffic-type mac source 00e0-fc12-3456
    #
    nqa reflector 1 interface GigabitEthernet 0/1/1 test-flow 1 exchange-port agetime 0
    #
    return
  • DeviceD configuration file

    #
    sysname DeviceD
    # 
    vlan batch 10
    #
    interface GigabitEthernet 0/1/1
     portswitch
     undo shutdown
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    return

Example for Configuring an Ethernet Service Activation Test in a Layer 3 Scenario (Y.1564)

This section provides an example for configuring an Ethernet service activation test in a Layer 3 scenario to check whether network performance meets SLAs.

Networking Requirements

On the network shown in Figure 18-111, it is required that Ethernet frame transmission between DeviceB and DeviceC be checked to determine whether the performance parameters meet SLAs.

Figure 18-111 Configuring an Ethernet service activation test in a Layer 3 scenario
  • In this example, interfaces 1 and 2 represent GE0/1/1 and GE0/1/2, respectively.

Configuration Roadmap

  1. Configure DeviceC as the reflector and configure it to perform flow-based traffic filtering.
  2. Configure DeviceB as the initiator and configure it to perform configuration and performance tests.

Data Preparation

To complete the configuration, you need the following data:

  • Configurations on the reflector (DeviceC)
    • Service flow configurations and characteristics:
      • Destination MAC address: 00e0-fc12-3459 (MAC address of GE0/1/1 on DeviceD connected to UNI B)

      • Source MAC address: 00e0-fc12-3458 (MAC address of UNI B)

      • Destination IP address: IP address (10.1.3.2 used as an example) of the downstream device or an IP address on the network segment where UNI B resides

      • Source IP address: IP address (10.1.1.1 used as an example) of the downstream device or an IP address on the network segment where UNI A resides

  • Configurations of the initiator (DeviceB)

    • Service flow configurations and characteristics:

      • Destination MAC address: 00e0-fc12-3457 (MAC address of UNI A)

      • Source MAC address: 00e0-fc12-3456 (MAC address of GE0/1/1 on DeviceA connected to UNI A)
      • Destination IP address: IP address (10.1.3.2 used as an example) of the downstream device or an IP address on the network segment where UNI B resides

      • Source IP address: IP address (10.1.1.1 used as an example) of the downstream device or an IP address on the network segment where UNI A resides

    • Bandwidth profile: 500000 kbit/s for the CIR and 20000 kbit/s for the EIR

    • Service acceptance criteria: 1000/100000 for the FLR, 10000 microseconds for the FTD, and 10000000 microseconds for the FDV

The link between the two user networks must be reachable. Otherwise, static ARP entries must be configured.

Procedure

  1. Configure Layer 3 link reachability for the initiator and reflector.
  2. Configure the reflector.

    [*DeviceC] nqa test-flow 1
    [*DeviceC-nqa-testflow-1] vlan 10
    [*DeviceC-nqa-testflow-1] traffic-type mac destination 00e0-fc12-3459
    [*DeviceC-nqa-testflow-1] traffic-type mac source 00e0-fc12-3458
    [*DeviceC-nqa-testflow-1] traffic-type ipv4 destination 10.1.3.2
    [*DeviceC-nqa-testflow-1] traffic-type ipv4 source 10.1.1.1
    [*DeviceC-nqa-testflow-1] traffic-policing test enable
    [*DeviceC-nqa-testflow-1] quit
    [*DeviceC] nqa reflector 1 interface GigabitEthernet 0/1/1.1 test-flow 1 exchange-port agetime 0
    [*DeviceC] commit

  3. Configure the initiator to perform configuration and performance tests and check the test results.

    [*DeviceB] nqa test-flow 1
    [*DeviceB-nqa-testflow-1] vlan 10
    [*DeviceB-nqa-testflow-1] bandwidth cir 500000 eir 20000
    [*DeviceB-nqa-testflow-1] sac flr 1000 ftd 10000 fdv 10000000
    [*DeviceB-nqa-testflow-1] traffic-type mac destination 00e0-fc12-3457
    [*DeviceB-nqa-testflow-1] traffic-type mac source 00e0-fc12-3456
    [*DeviceB-nqa-testflow-1] traffic-type ipv4 destination 10.1.3.2
    [*DeviceB-nqa-testflow-1] traffic-type ipv4 source 10.1.1.1
    [*DeviceB-nqa-testflow-1] traffic-policing test enable
    [*DeviceB-nqa-testflow-1] color-mode 8021p green 0 7 yellow 0 7
    [*DeviceB-nqa-testflow-1] quit
    [*DeviceB] nqa test-instance admin ethernet
    [*DeviceB-nqa-admin-ethernet] test-type ethernet-service
    [*DeviceB-nqa-admin-ethernet] forwarding-simulation inbound-interface GigabitEthernet 0/1/1.1
    [*DeviceB-nqa-admin-ethernet] test-flow 1
    [*DeviceB-nqa-admin-ethernet] start now
    [*DeviceB-nqa-admin-ethernet] commit
    [~DeviceB-nqa-admin-ethernet] display nqa results test-instance admin ethernet
    
    NQA entry(admin, ethernet) :testflag is inactive ,testtype is ethernet-service
      1 . Test 1 result   The test is finished                                      
       Status           : Pass                                                      
       Test-flow number : 1                                                         
       Mode             : Round-trip                                                
       Last step        : Performance-test                                          
       Estimated total time  :6
       Real test time        :6
       1 . Configuration-test                                                       
        Test-flow 1, CIR simple test                                                
         Begin                    : 2014-06-25 16:22:45.8                           
         End                      : 2014-06-25 16:22:48.8                           
         Status                   : Pass                                            
         Min/Max/Mean IR(kbit/s)  : 9961/10075/10012
         Min/Max/Mean FTD(us)     : 99/111/104                                      
         Min/Max/Mean FDV(us)     : 0/7/3                                           
         FL Count/FLR             : 0/0.000%
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%                                        
        Test-flow 1, CIR/EIR test, Green                                            
         Begin                    : 2014-06-25 16:23:15.8                           
         End                      : 2014-06-25 16:23:18.8                           
         Status                   : Pass                                            
         Min/Max/Mean IR(kbit/s)  : 9979/10054/10012
         Min/Max/Mean FTD(us)     : 101/111/105                                     
         Min/Max/Mean FDV(us)     : 0/10/3                                          
         FL Count/FLR             : 0/0.000%
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%                                        
        Test-flow 1, CIR/EIR test, Yellow                                           
         Begin                    : 2014-06-25 16:23:15.8                           
         End                      : 2014-06-25 16:23:18.8                           
         Status                   : --                                              
         Min/Max/Mean IR(kbit/s)  : 9979/10057/10013
         Min/Max/Mean FTD(us)     : 98/111/104                                      
         Min/Max/Mean FDV(us)     : 1/11/5                                          
         FL Count/FLR             : 0/0.000%                                        
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%
        Test-flow 1, Traffic policing test, Green                                   
         Begin                    : 2014-06-25 16:23:45.8                           
         End                      : 2014-06-25 16:23:48.8                           
         Status                   : Pass                                            
         Min/Max/Mean IR(kbit/s)  : 10039/10054/10045
         Min/Max/Mean FTD(us)     : 96/110/104                                      
         Min/Max/Mean FDV(us)     : 1/9/4                                           
         FL Count/FLR             : 0/0.000% 
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%                                       
        Test-flow 1, Traffic policing test, Yellow                                  
         Begin                    : 2014-06-25 16:23:45.8                           
         End                      : 2014-06-25 16:23:48.8                           
         Status                   : --                                              
         Min/Max/Mean IR(kbit/s)  : 12544/12566/12554
         Min/Max/Mean FTD(us)     : 101/111/105                                     
         Min/Max/Mean FDV(us)     : 1/8/3                                           
         FL Count/FLR             : 0/0.000%
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%                                        
       2 . Performance-test                                                         
        Test-flow 1, Performance-test                                               
         Begin                    : 2014-06-25 16:24:15.8                           
         End                      : 2014-06-25 16:39:15.8                           
         Status                   : Pass                                            
         Min/Max/Mean IR(kbit/s)  : 9888/10132/10004
         Min/Max/Mean FTD(us)     : 101/111/105                                     
         Min/Max/Mean FDV(us)     : 0/8/2                                           
         FL Count/FLR             : 0/0.000%
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%

Configuration Files

  • DeviceB configuration file

    #
    sysname DeviceB
    # 
    interface GigabitEthernet 0/1/1
     undo shutdown
    #
    interface GigabitEthernet 0/1/1.1
     vlan-type dot1q 10
     ip address 10.1.1.2 255.255.255.0
    #
    interface GigabitEthernet 0/1/2
     undo shutdown
     ip address 10.1.2.1 255.255.255.0
    #
    nqa test-flow 1
     vlan 10
     bandwidth cir 500000 eir 20000
     sac flr 1000 ftd 10000 fdv 10000000
     traffic-type mac destination 00e0-fc12-3457
     traffic-type mac source 00e0-fc12-3456
     traffic-type ipv4 destination 10.1.3.2 
     traffic-type ipv4 source 10.1.1.1 
     traffic-policing test enable
     color-mode 8021p green 0 7 yellow 0 7
    #
    nqa test-instance admin ethernet
     test-type ethernet-service
     forwarding-simulation inbound-interface GigabitEthernet 0/1/1.1
     test-flow 1
    #
    return
  • DeviceC configuration file

    #
    sysname DeviceC
    # 
    interface GigabitEthernet 0/1/1
     undo shutdown
    #
    interface GigabitEthernet 0/1/1.1
     vlan-type dot1q 10
     ip address 10.1.3.1 255.255.255.0
    #
    interface GigabitEthernet 0/1/2
     undo shutdown
     ip address 10.1.2.2 255.255.255.0
    #
    nqa test-flow 1
     vlan 10
     traffic-type mac destination 00e0-fc12-3459
     traffic-type mac source 00e0-fc12-3458
     traffic-type ipv4 destination 10.1.3.2 
     traffic-type ipv4 source 10.1.1.1 
     traffic-policing test enable
    #
    nqa reflector 1 interface GigabitEthernet 0/1/1.1 test-flow 1 exchange-port agetime 0
    #
    return

Example for Configuring an Ethernet Service Activation Test on an EVPN VXLAN (Y.1564)

This section provides an example for configuring an Ethernet service activation test on an EVPN VXLAN to check whether network performance meets Service Level Agreement (SLA) requirements.

Networking Requirements

After network deployment is complete and before services are provisioned, you can configure an Ethernet service activation test to evaluate the network performance. This provides necessary data support for business planning and service promotion.

On the network shown in Figure 18-112, the EVPN VXLAN network to be tested is located between user-network interfaces (UNIs). DeviceB is configured as the initiator, and DeviceC is configured as the reflector to check whether Ethernet frame transmission performance between them meets SLA requirements.

Figure 18-112 Ethernet service activation test on an EVPN VXLAN

In this example, the destination MAC address is specified in a test instance to check the network performance between CEs on both ends of a Layer 2 EVPN. In a Layer 3 scenario, the destination IP address must be specified.
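
For reference, a Layer 3 test flow specifies IP addresses using the traffic-type ipv4 commands shown in the preceding example. The following is a minimal sketch that assumes 10.100.0.2 as the destination address and 10.100.0.1 as the source address:

    [*DeviceB-nqa-testflow-1] traffic-type ipv4 destination 10.100.0.2
    [*DeviceB-nqa-testflow-1] traffic-type ipv4 source 10.100.0.1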

In this example, interfaces 1 and 2 represent GE0/1/0 and GE0/2/0, respectively.


Configuration Roadmap

The roadmap is as follows:

  1. Configure a VXLAN network.

  2. Configure DeviceA to communicate with DeviceB, and DeviceC to communicate with DeviceD.

  3. Configure DeviceC as the reflector to reflect service traffic.

  4. Configure DeviceB as the initiator to send simulated service traffic.

Data Preparation

To complete the configuration, you need the following data:

  • IP addresses of interconnected interfaces of devices

  • Service flow characteristics:

    • Destination MAC address: 00e0-fc12-3467 (MAC address of GE0/2/0 on DeviceD)
    • Source MAC address: 00e0-fc12-3465 (MAC address of GE0/2/0 on DeviceA)
    • VLAN ID carried in Ethernet frames: 10
    • UDP destination port number: 1234
    • UDP source port number: 5678
  • Bandwidth profile: 10000 kbit/s for both the CIR and EIR

  • Service acceptance criteria: 1000/100000 for the FLR and 1000 microseconds for both the FTD and FDV

Procedure

  1. Assign IP addresses to node interfaces, including loopback interfaces.

    For configuration details, see Configuration Files.

  2. Configure an IGP on the backbone network. In this example, OSPF is used.

    For configuration details, see Configuration Files.

  3. Configure a VXLAN tunnel between DeviceB and DeviceC.

    For details about the configuration roadmap, see VXLAN Configuration. For configuration details, see Configuration Files.

    After a VXLAN tunnel is established, you can run the display vxlan tunnel command on DeviceB or DeviceC to view VXLAN tunnel information. The command output on DeviceB is used as an example.

    [~DeviceB] display vxlan tunnel
    Number of vxlan tunnel : 1
    Tunnel ID   Source                Destination           State  Type     Uptime
    -----------------------------------------------------------------------------------
    4026531841  1.1.1.1               2.2.2.2               up     dynamic  00:12:56  

  4. Configure communication between DeviceA and DeviceB, and between DeviceC and DeviceD.

    # Configure DeviceB.

    [~DeviceB] interface gigabitethernet 0/2/0.1 mode l2
    [*DeviceB-GigabitEthernet0/2/0.1] encapsulation dot1q vid 10
    [*DeviceB-GigabitEthernet0/2/0.1] bridge-domain 10
    [*DeviceB-GigabitEthernet0/2/0.1] commit
    [~DeviceB-GigabitEthernet0/2/0.1] quit

    The configuration of DeviceC is similar to the configuration of DeviceB. For configuration details, see Configuration Files.

    # Configure DeviceA.

    <HUAWEI> system-view
    [~HUAWEI] sysname DeviceA
    [*HUAWEI] commit
    [~DeviceA] interface gigabitethernet 0/2/0.1
    [*DeviceA-GigabitEthernet0/2/0.1] ip address 10.100.0.1 24
    [*DeviceA-GigabitEthernet0/2/0.1] vlan-type dot1q 10
    [*DeviceA-GigabitEthernet0/2/0.1] commit
    [~DeviceA-GigabitEthernet0/2/0.1] quit

    The configuration of DeviceD is similar to the configuration of DeviceA. For configuration details, see Configuration Files.
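
    Optionally, verify that DeviceA and DeviceD can reach each other across the VXLAN-bridged VLAN 10 segment before starting the test. A minimal check from DeviceA, assuming the addresses configured above:

    [~DeviceA] ping 10.100.0.2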

  5. Configure DeviceC as the reflector to reflect service traffic.

    [~DeviceC] nqa test-flow 1
    [*DeviceC-nqa-testflow-1] vlan 10
    [*DeviceC-nqa-testflow-1] udp destination-port 1234
    [*DeviceC-nqa-testflow-1] udp source-port 5678
    [*DeviceC-nqa-testflow-1] traffic-type mac destination 00e0-fc12-3467
    [*DeviceC-nqa-testflow-1] traffic-type mac source 00e0-fc12-3465
    [*DeviceC-nqa-testflow-1] quit
    [*DeviceC] nqa reflector 1 interface GigabitEthernet 0/2/0.1 test-flow 1 exchange-port agetime 0
    [*DeviceC] commit

  6. Configure DeviceB as the initiator to send simulated service traffic.

    [~DeviceB] nqa test-flow 1
    [*DeviceB-nqa-testflow-1] vlan 10
    [*DeviceB-nqa-testflow-1] udp destination-port 1234
    [*DeviceB-nqa-testflow-1] udp source-port 5678
    [*DeviceB-nqa-testflow-1] cir simple-test enable
    [*DeviceB-nqa-testflow-1] bandwidth cir 10000 eir 10000
    [*DeviceB-nqa-testflow-1] sac flr 1000 ftd 1000 fdv 1000
    [*DeviceB-nqa-testflow-1] traffic-type mac destination 00e0-fc12-3467
    [*DeviceB-nqa-testflow-1] traffic-type mac source 00e0-fc12-3465
    [*DeviceB-nqa-testflow-1] traffic-policing test enable
    [*DeviceB-nqa-testflow-1] color-mode 8021p green 0 7 yellow 0 7
    [*DeviceB-nqa-testflow-1] commit
    [~DeviceB-nqa-testflow-1] quit

  7. Start the Ethernet service activation test.

    [~DeviceB] nqa test-instance admin ethernet
    [*DeviceB-nqa-admin-ethernet] test-type ethernet-service
    [*DeviceB-nqa-admin-ethernet] forwarding-simulation inbound-interface GigabitEthernet 0/2/0.1
    [*DeviceB-nqa-admin-ethernet] test-flow 1
    [*DeviceB-nqa-admin-ethernet] start now
    [*DeviceB-nqa-admin-ethernet] commit

  8. Verify the configuration.

    Run the display nqa results test-instance admin ethernet command on DeviceB. The command output shows that the test status is Pass, indicating that the test is successful.

    [~DeviceB-nqa-admin-ethernet] display nqa results test-instance admin ethernet
    NQA entry(admin, ethernet) :testflag is inactive ,testtype is ethernet-service
      1 . Test 1 result   The test is finished                                      
       Status           : Pass                                                      
       Test-flow number : 1                                                         
       Mode             : Round-trip                                                
       Last step        : Performance-test                                          
       Estimated total time  :6
       Real test time        :6
       1 . Configuration-test                                                       
        Test-flow 1, CIR simple test                                                
         Begin                    : 2014-06-25 16:22:45.8                           
         End                      : 2014-06-25 16:22:48.8                           
         Status                   : Pass                                            
         Min/Max/Mean IR(kbit/s)  : 9961/10075/10012                                
         Min/Max/Mean FTD(us)     : 99/111/104                                      
         Min/Max/Mean FDV(us)     : 0/7/3                                           
         FL Count/FLR             : 0/0.000%
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%                                        
        Test-flow 1, CIR/EIR test, Green                                            
         Begin                    : 2014-06-25 16:23:15.8                           
         End                      : 2014-06-25 16:23:18.8                           
         Status                   : Pass                                            
         Min/Max/Mean IR(kbit/s)  : 9979/10054/10012                                
         Min/Max/Mean FTD(us)     : 101/111/105                                     
         Min/Max/Mean FDV(us)     : 0/10/3                                          
         FL Count/FLR             : 0/0.000%
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%                                        
        Test-flow 1, CIR/EIR test, Yellow                                           
         Begin                    : 2014-06-25 16:23:15.8                           
         End                      : 2014-06-25 16:23:18.8                           
         Status                   : --                                              
         Min/Max/Mean IR(kbit/s)  : 9979/10057/10013                                
         Min/Max/Mean FTD(us)     : 98/111/104                                      
         Min/Max/Mean FDV(us)     : 1/11/5                                          
         FL Count/FLR             : 0/0.000%                                        
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%
        Test-flow 1, Traffic policing test, Green                                   
         Begin                    : 2014-06-25 16:23:45.8                           
         End                      : 2014-06-25 16:23:48.8                           
         Status                   : Pass                                            
         Min/Max/Mean IR(kbit/s)  : 10039/10054/10045                               
         Min/Max/Mean FTD(us)     : 96/110/104                                      
         Min/Max/Mean FDV(us)     : 1/9/4                                           
         FL Count/FLR             : 0/0.000% 
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%                                       
        Test-flow 1, Traffic policing test, Yellow                                  
         Begin                    : 2014-06-25 16:23:45.8                           
         End                      : 2014-06-25 16:23:48.8                           
         Status                   : --                                              
         Min/Max/Mean IR(kbit/s)  : 12544/12566/12554                               
         Min/Max/Mean FTD(us)     : 101/111/105                                     
         Min/Max/Mean FDV(us)     : 1/8/3                                           
         FL Count/FLR             : 0/0.000%
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%                                        
       2 . Performance-test                                                         
        Test-flow 1, Performance-test                                               
         Begin                    : 2014-06-25 16:24:15.8                           
         End                      : 2014-06-25 16:39:15.8                           
         Status                   : Pass                                            
         Min/Max/Mean IR(kbit/s)  : 9888/10132/10004                                
         Min/Max/Mean FTD(us)     : 101/111/105                                     
         Min/Max/Mean FDV(us)     : 0/8/2                                           
         FL Count/FLR             : 0/0.000%
         Disorder packets         : 0
         Unavail Count/AVAIL      : 0/0.000%

Configuration Files

  • DeviceA configuration file

    #
    sysname DeviceA
    #
    interface GigabitEthernet0/2/0
     undo shutdown
    #
    interface GigabitEthernet0/2/0.1
     vlan-type dot1q 10
     ip address 10.100.0.1 255.255.255.0
    #
    ospf 1
     import-route direct 
     area 0.0.0.0
      network 10.100.0.0 0.0.0.255
    #
    return
  • DeviceB configuration file

    #
    sysname DeviceB
    #
    evpn vpn-instance evpna bd-mode
     route-distinguisher 1:1
     apply-label per-instance
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    ip vpn-instance evpna
     ipv4-family
      route-distinguisher 1:1
      apply-label per-instance
      vpn-target 1:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity evpn
    vxlan vni 100
    #
    bridge-domain 10
     vxlan vni 1 split-horizon-mode
     evpn binding vpn-instance evpna
    #
    interface Vbdif10
     ip binding vpn-instance evpna
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.0.0.1 255.255.255.0
    #
    interface GigabitEthernet0/2/0
     undo shutdown
    #
    interface GigabitEthernet0/2/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
    #
    interface Nve1
     source 1.1.1.1
     vni 1 head-end peer-list protocol bgp
    #
    bgp 100
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 2.2.2.2 enable
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise irb
      peer 2.2.2.2 advertise encap-type vxlan
    #
    ospf 1
     import-route direct 
     area 0.0.0.0
      network 1.1.1.1 0.0.0.0
      network 10.0.0.0 0.0.0.255
    #
    nqa test-flow 1
     vlan 10
     udp destination-port 1234
     udp source-port 5678
     cir simple-test enable
     bandwidth cir 10000 eir 10000
     sac flr 1000 ftd 1000 fdv 1000
     traffic-type mac destination 00e0-fc12-3467
     traffic-type mac source 00e0-fc12-3465
     traffic-policing test enable
     color-mode 8021p green 0 7 yellow 0 7
    #
    nqa test-instance admin ethernet
     test-type ethernet-service
     forwarding-simulation inbound-interface GigabitEthernet 0/2/0.1
     test-flow 1
    #
    return
  • DeviceC configuration file

    #
    sysname DeviceC
    #
    evpn vpn-instance evpna bd-mode
     route-distinguisher 1:1
     apply-label per-instance
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    ip vpn-instance evpna
     ipv4-family
      route-distinguisher 1:1
      apply-label per-instance
      vpn-target 1:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity evpn
    vxlan vni 100
    #
    bridge-domain 10
     vxlan vni 1 split-horizon-mode
     evpn binding vpn-instance evpna
    #
    interface Vbdif10
     ip binding vpn-instance evpna
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 10.0.0.2 255.255.255.0
    #
    interface GigabitEthernet0/2/0
     undo shutdown
    #
    interface GigabitEthernet0/2/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
    #
    interface Nve1
     source 2.2.2.2
     vni 1 head-end peer-list protocol bgp
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise irb
      peer 1.1.1.1 advertise encap-type vxlan
    #
    ospf 1
     import-route direct 
     area 0.0.0.0
      network 2.2.2.2 0.0.0.0
      network 10.0.0.0 0.0.0.255
    #
    nqa test-flow 1
     vlan 10
     udp destination-port 1234
     udp source-port 5678
     traffic-type mac destination 00e0-fc12-3467
     traffic-type mac source 00e0-fc12-3465
    #
    nqa reflector 1 interface GigabitEthernet 0/2/0.1 test-flow 1 exchange-port agetime 0
    #
    return
  • DeviceD configuration file

    #
    sysname DeviceD
    #
    interface GigabitEthernet0/2/0
     undo shutdown
    #
    interface GigabitEthernet0/2/0.1
     vlan-type dot1q 10
     ip address 10.100.0.2 255.255.255.0
    #
    ospf 1
     import-route direct 
     area 0.0.0.0
      network 10.100.0.0 0.0.0.255
    #
    return

Example for Configuring the Device to Send Test Results to the SFTP Server

Sending test results to an SFTP server allows the results to be retained to the maximum extent.

Networking Requirements

On the network shown in Figure 18-113, DeviceA serves as the client to perform an ICMP test and send test results to the SFTP server through SFTP.

  • In this example, interfaces 1 and 2 represent GE0/1/0 and GE0/2/0, respectively.
Figure 18-113 Configuring the device to send test results to the SFTP server

Configuration Roadmap

The configuration roadmap is as follows:

  1. Set parameters for configuring the device to send test results to the SFTP server.

  2. Start a test instance.

  3. Verify the configuration.

Data Preparation

To complete the configuration, you need the following data:

  • IP address of the SFTP server
  • Username and password used for logging in to the SFTP server

  • Name of a file in which test results are saved through SFTP

  • Interval at which test results are uploaded through SFTP

Procedure

  1. Set parameters for configuring the device to send test results to the SFTP server.

    <DeviceA> system-view
    [~DeviceA] nqa upload test-type icmp sftp ipv4 10.1.2.8 file-name test1 port 21 username sftp password YsHsjx_202206 interval 600 retry 3
    [*DeviceA] commit

  2. Start a test instance.

    [~DeviceA] nqa test-instance admin icmp
    [*DeviceA-nqa-admin-icmp] test-type icmp
    [*DeviceA-nqa-admin-icmp] destination-address ipv4 10.1.1.10
    [*DeviceA-nqa-admin-icmp] start now
    [*DeviceA-nqa-admin-icmp] commit
    [~DeviceA-nqa-admin-icmp] quit
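
    Optionally, view the ICMP test results locally before they are uploaded, using the same display command format as in the preceding examples:

    [~DeviceA] display nqa results test-instance admin icmp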

  3. Verify the configuration.

    # Display information about the files that are being uploaded and the files that have been uploaded.

    [~DeviceA] display nqa upload file-info
    The total number of upload file records is : 2                                  
    ---------------------------------------------------------------                 
    FileName   : NQA_38ba47987301_icmp_20230814112319701_test1.xml                  
    Status     : Upload success                                                     
    RetryTimes : 3
    Protocol   : sftp
    UploadTime : 2023-08-14 11:23:21.697                                            
                                                                                    
    ---------------------------------------------------------------
    FileName   : NQA_38ba47987301_icmp_20230814112421710_test1.xml                  
    Status     : Uploading                                                          
    RetryTimes : 3
    Protocol   : sftp
    UploadTime : --                                                                 

Configuration Files

  • DeviceA configuration file

    #
     sysname DeviceA
    #
    interface GigabitEthernet 0/1/0
     ip address 10.1.1.11 255.255.255.0
    #
    interface GigabitEthernet 0/2/0
     ip address 10.1.2.1 255.255.255.0
    #
    nqa upload test-type icmp sftp ipv4 10.1.2.8 file-name test1 port 21 username sftp password %^%#`P'|9L1x62lN*b+C~wMTT|$EA7+z0XOFC_,B$M+"%^%# interval 600 retry 3
    #
    nqa test-instance admin icmp
     test-type icmp
     destination-address ipv4 10.1.1.10
     start now
    #
    return
  • DeviceB configuration file

    #
     sysname DeviceB
    #
    interface GigabitEthernet0/1/0
     ip address 10.1.1.10 255.255.255.0
    #
    return