Configuration Guide - Network Management and Monitoring

CloudEngine 8800, 7800, 6800, and 5800 V200R005C10

This document describes the configurations of Network Management and Monitoring, including SNMP, RMON, NETCONF, OpenFlow, LLDP, NQA, Mirroring, Packet Capture, Packet Trace, Path and Connectivity Detection Configuration, NetStream, sFlow, and iPCA.
Example for Deploying VXLAN Layer 2 Gateway and BFD for VXLAN Through the VMware NSX Controller

Networking Requirements

As shown in Figure 6-8, the enterprise data center network contains devices from multiple vendors, most of them VMware devices. The customer wants to use the VMware NSX controller to uniformly manage and deploy the overlay network of the data center. A VMware NSX OVS can serve as a virtual tunnel end point (VTEP) and be managed by the NSX controller, but bare metal servers cannot. To solve this problem, the bare metal servers are connected through the CE switch to the overlay network deployed by the NSX controller. When many VMs exist, centralized replication reduces the load on the gateway. A bidirectional forwarding detection (BFD) mechanism is deployed on the virtual extensible LAN (VXLAN) tunnels between the Leaf4 stack system and the replicators to improve network reliability: if a replicator fails, the VXLAN gateway detects the fault rapidly and switches traffic to a normal replicator, avoiding the traffic loss that slow fault detection would cause.

Figure 6-8 Overlay network deployed by the VMware NSX controller

Configuration Roadmap

The following roadmap is used to deploy the VXLAN Layer 2 gateway and BFD for VXLAN through the VMware NSX controller:
  1. Use the Leaf4-1 device and the Leaf4-2 device to form a Leaf4 stack system.
  2. Configure the IP addresses of ports in the Leaf4 stack system and advertise routes through OSPF.
  3. Configure the Eth-Trunk interface in the Leaf4 stack system.
  4. Configure the SSL policy in the Leaf4 stack system.
  5. Create the NVE interface and configure the IP address of the source VTEP in the Leaf4 stack system.
  6. Configure the open vSwitch database (OVSDB) function in the Leaf4 stack system.
  7. Use the VMware NSX controller to deploy the Leaf4 stack system to support the VXLAN Layer 2 gateway and BFD for VXLAN.

Data Plan

The following data is needed to complete the configuration:
  • Stack port 1/1 on the Leaf4 stack system: service ports 10GE1/0/3 to 10GE1/0/6 of the Leaf4-1 and Leaf4-2 devices.
  • VLAN to which the VM (connected to the VXLAN network through the Leaf4 stack system) belongs: VLAN 10
  • IP addresses of interconnection ports on the device
  • Route type: OSPF
  • VXLAN network identifier (VNI) ID: 5000
  • Bridge domain (BD) ID: 5000. When the VXLAN is deployed on the CE switch through the VMware NSX controller, a BD is automatically created, and its ID is the same as the VNI ID.
  • IP address of the VMware NSX controller: 192.168.60.240; VTEP IP address of server 1: 192.168.40.12; VTEP IP address of replicator 1: 192.168.40.13; VTEP IP address of replicator 2: 192.168.40.14

Procedure

  1. Use the Leaf4-1 device and the Leaf4-2 device to form a Leaf4 stack system.

    # Set the stack priority of the Leaf4-1 device to 150 and the domain ID to 10. The stack member ID defaults to 1, which is the ID planned for Leaf4-1, so no member ID configuration is required.

    <HUAWEI> system-view
    [~HUAWEI] sysname Leaf4-1
    [*HUAWEI] commit
    [~Leaf4-1] stack
    [~Leaf4-1-stack] stack member 1 priority 150
    [*Leaf4-1-stack] stack member 1 domain 10
    [*Leaf4-1-stack] quit
    [*Leaf4-1] commit

    # Configure the stack member ID of the Leaf4-2 device to 2, the priority to 120, and the domain ID to 10.

    <HUAWEI> system-view
    [~HUAWEI] sysname Leaf4-2
    [*HUAWEI] commit
    [~Leaf4-2] stack
    [~Leaf4-2-stack] stack member 1 priority 120
    [*Leaf4-2-stack] stack member 1 domain 10
    [*Leaf4-2-stack] stack member 1 renumber 2 inherit-config
    Warning: The stack configuration of member ID 1 will be inherited to member ID 2 after the device resets. Continue? [Y/N]: y
    [*Leaf4-2-stack] quit
    [*Leaf4-2] commit

    # Add service ports 10GE1/0/3-10GE1/0/6 of the Leaf4-1 and Leaf4-2 devices to stack port 1/1. The configuration of the Leaf4-2 device is similar to that of the Leaf4-1 device.

    [~Leaf4-1] interface stack-port 1/1
    [*Leaf4-1-Stack-Port1/1] port member-group interface 10ge 1/0/3 to 1/0/6
    Warning: After the configuration is complete,
    1.The interface(s) (10GE1/0/3-1/0/6) will be converted to stack mode and be configured with the port crc-statistics trigger error-down command if the configuration does not exist. 
    2.The interface(s) may go Error-Down (crc-statistics) because there is no shutdown configuration on the interfaces.Continue? [Y/N]: y
    [*Leaf4-1-Stack-Port1/1] commit
    [~Leaf4-1-Stack-Port1/1] return

    # After the preceding configurations, run the display stack configuration command to check whether the configurations are consistent with the plan. If they are inconsistent, change the configurations. In this example, query the configurations of the Leaf4-1 device.

    <~Leaf4-1> display stack configuration
    Oper          : Operation
    Conf          : Configuration
    *             : Offline configuration
    Isolated Port : The port is in stack mode, but does not belong to any Stack-Port
    
    Attribute Configuration:
    ----------------------------------------------------------------------------
     MemberID      Domain         Priority       Switch Mode     Uplink Port
    Oper(Conf)   Oper(Conf)      Oper(Conf)      Oper(Conf)      Oper(Conf)
    ----------------------------------------------------------------------------
    1(1)         --(10)          100(150)        Auto(Auto)      6*40GE(6*40GE)
    ----------------------------------------------------------------------------
    
    Stack-Port Configuration:
    --------------------------------------------------------------------------------
    Stack-Port      Member Ports
    --------------------------------------------------------------------------------
    Stack-Port1/1   10GE1/0/3           10GE1/0/4           10GE1/0/5
                    10GE1/0/6
    --------------------------------------------------------------------------------

    # Save the configurations of the Leaf4-1 and Leaf4-2 devices and then power them off. The configuration of the Leaf4-2 device is similar to that of the Leaf4-1 device.

    <~Leaf4-1> save
    Warning: The current configuration will be written to the device. Continue? [Y/N]: y

    # Log in to the stack system through the console port or management port and run the display stack command to check whether the stack system has been established. To log in through the management port, use the IP address of the master switch.

    <~Leaf4-1> display stack
    --------------------------------------------------------------------------------
    MemberID Role     MAC              Priority   DeviceType         Description
    --------------------------------------------------------------------------------
    +1       Master   0004-9f31-d520   150        CE6850-48S6Q-HI 
     2       Standby  0004-9f62-1f40   120        CE6850-48S6Q-HI 
    --------------------------------------------------------------------------------
    + indicates the device where the activated management interface resides.
    

    As shown in the preceding output, information about both switches is displayed, indicating that the stack system has been established. The master switch is the device with member ID 1, that is, Leaf4-1.
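When this verification step is automated (for example, over an SSH session), the `display stack` output can be parsed programmatically. A minimal sketch against the sample output captured above; the function name and the expected member/role mapping are this example's plan, not a vendor API:

```python
import re

# Sample "display stack" output as captured in this example.
DISPLAY_STACK = """\
--------------------------------------------------------------------------------
MemberID Role     MAC              Priority   DeviceType         Description
--------------------------------------------------------------------------------
+1       Master   0004-9f31-d520   150        CE6850-48S6Q-HI
 2       Standby  0004-9f62-1f40   120        CE6850-48S6Q-HI
--------------------------------------------------------------------------------
"""

def parse_stack_members(output):
    """Return {member_id: role} parsed from 'display stack' output."""
    members = {}
    for line in output.splitlines():
        # A member row starts with an optional '+' (active management
        # interface) followed by the member ID and the role.
        m = re.match(r"\s*\+?(\d+)\s+(Master|Standby|Slave)\s", line)
        if m:
            members[int(m.group(1))] = m.group(2)
    return members

members = parse_stack_members(DISPLAY_STACK)
# The stack is established when both planned members are present and
# member 1 (Leaf4-1, priority 150) has become the master.
assert members == {1: "Master", 2: "Standby"}
```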

  2. Configure the IP addresses of ports in the Leaf4 stack system and advertise routes through OSPF.

    <~Leaf4-1> system-view
    [~Leaf4-1] interface loopback 1
    [*Leaf4-1-LoopBack1] ip address 1.1.1.1 32
    [*Leaf4-1-LoopBack1] quit
    [*Leaf4-1] interface loopback 2
    [*Leaf4-1-LoopBack2] ip address 2.2.2.2 32
    [*Leaf4-1-LoopBack2] quit
    [*Leaf4-1] ospf
    [*Leaf4-1-ospf-1] area 0
    [*Leaf4-1-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
    [*Leaf4-1-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
    [*Leaf4-1-ospf-1-area-0.0.0.0] quit
    [*Leaf4-1-ospf-1] quit
    [*Leaf4-1] commit

    After OSPF is configured, the Leaf4 stack system, the VMware NSX controller, and the servers can ping one another.

  3. Configure the Eth-Trunk interface in the Leaf4 stack system.

    [~Leaf4-1] interface eth-trunk 100
    [*Leaf4-1-Eth-Trunk100] trunkport 10ge 1/0/2
    [*Leaf4-1-Eth-Trunk100] trunkport 10ge 2/0/2
    [*Leaf4-1-Eth-Trunk100] quit
    [*Leaf4-1] commit

  4. Configure the SSL policy in the Leaf4 stack system.
    1. Upload the SSL certificate to the flash:/security directory of the Leaf4-1 device.
    2. Configure the SSL policy.

      [~Leaf4-1] ssl policy nsx
      [*Leaf4-1-ssl-policy-nsx] certificate load pem-cert vtep-cert.pem key-pair rsa key-file vtep-privkey.pem auth-code cipher 123456
      [*Leaf4-1-ssl-policy-nsx] quit
      [*Leaf4-1] commit

  5. Create the NVE interface and configure the IP address of the source VTEP in the Leaf4 stack system.

    NOTE:

    When the VXLAN network is deployed through the VMware NSX controller, only NVE interface 1 can be created to deploy VXLAN tunnel-related information.

    [~Leaf4-1] interface nve 1
    [*Leaf4-1-Nve1] source 1.1.1.1
    [*Leaf4-1-Nve1] quit
    [*Leaf4-1] commit

  6. Configure the OVSDB function in the Leaf4 stack system.

    # Configure the OVSDB function.

    [~Leaf4-1] ovsdb server
    [*Leaf4-1-ovsdb-server] source ip 2.2.2.2
    [*Leaf4-1-ovsdb-server] controller ip 192.168.60.240
    [*Leaf4-1-ovsdb-server] ssl ssl-policy nsx
    [*Leaf4-1-ovsdb-server] smooth enable
    [*Leaf4-1-ovsdb-server] ovsdb server enable
    [*Leaf4-1-ovsdb-server] quit
    [*Leaf4-1] commit
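Under the hood, the switch and the NSX controller exchange OVSDB management messages as JSON-RPC over the SSL channel configured above (RFC 7047; NSX-managed hardware gateways use the hardware_vtep schema). A minimal sketch of what those wire-format requests look like; the message contents are illustrative, not captured from a real session:

```python
import json

def ovsdb_request(method, params, msg_id):
    """Build an OVSDB JSON-RPC request (RFC 7047) as a wire-format string."""
    return json.dumps({"method": method, "params": params, "id": msg_id})

# Keepalive probe: per RFC 7047, the peer must reply with the same params.
echo = ovsdb_request("echo", ["probe"], 0)

# Ask the peer which databases it serves; an NSX-managed hardware
# VTEP exposes the "hardware_vtep" schema.
list_dbs = ovsdb_request("list_dbs", [], 1)

assert json.loads(echo) == {"method": "echo", "params": ["probe"], "id": 0}
```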

  7. Use the VMware NSX controller to deploy the Leaf4 stack system to support the VXLAN Layer 2 gateway and BFD for VXLAN.

    # Add a logical switch.

    1. Choose Networking & Security > Logical Switches and select NSX Manager with address 192.168.60.240.

    2. Click the add icon. The New Logical Switch dialog box is displayed.

      Set Name to Leaf4_vni5000, Transport Zone to UCAST-ZONE, and Replication mode to Unicast, and select Enable IP Discovery. Confirm the settings, and click OK.

    # Add a hardware device.

    1. Choose Networking & Security > Service Definitions > Hardware Devices and select NSX Manager with address 192.168.60.240.

    2. Click the add icon to add the hardware device, configure the device name, add the public key, and select Enable BFD.

      Method to obtain the public key:

      Open the vtep-cert.pem file in a text editor such as Notepad, and copy the content from the BEGIN CERTIFICATE line to the END CERTIFICATE line (both lines included) into the certificate field.

    3. Confirm the setting, and click OK. Then, the system shows that the hardware device is connected to the NSX controller, but the logical switch is not added yet.
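Instead of copying the certificate block by hand, it can be extracted with a short script. A sketch, assuming the PEM file layout described above; the sample content below is a placeholder, not a real certificate:

```python
# Standard PEM certificate delimiters (RFC 7468).
BEGIN = "-----BEGIN CERTIFICATE-----"
END = "-----END CERTIFICATE-----"

def extract_certificate(pem_text):
    """Return the certificate block, including the BEGIN/END marker lines."""
    start = pem_text.index(BEGIN)
    end = pem_text.index(END) + len(END)
    return pem_text[start:end]

# Placeholder PEM content standing in for vtep-cert.pem.
sample = "some header\n" + BEGIN + "\nMIIB...base64...\n" + END + "\ntrailer\n"
block = extract_certificate(sample)
assert block.startswith(BEGIN) and block.endswith(END)
```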

    # Bind the Leaf4 interface to the logical switch.

    1. Choose Networking & Security > Logical Switches and select NSX Manager with address 192.168.60.240. Select the logical switch with segment ID 5000, right-click, and choose Manage Hardware Bindings. Then, the Leaf4_vni5000–Manage Hardware Bindings dialog box is displayed.

    2. Click the add icon. The hardware device that was added is displayed.

    3. Click Select, select port Eth-Trunk100, and set VLAN to 10. Confirm the setting, and click OK.

      NOTE:

      If the server is connected to the network through a VLAN, you need to set the VLAN ID based on actual situations. If the server is directly connected to the network without using VLANs, you need to set the VLAN ID to 0.

      When the CE switch interworks with the VMware NSX controller, VLAN 4094 cannot be configured. Do not select VLAN 4094 during network planning.

      The VLAN configured must be different from the allowed VLAN of the corresponding primary Layer 2 interface.

      If the VLAN ID is not 0, the encapsulation type is Dot1q. If the VLAN ID is 0, the encapsulation type is Untag.
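The VLAN rules in the notes above can be summarized as a small validation function, useful when binding parameters are generated by a script. A sketch; the function name is illustrative, and the rules encoded are exactly those stated in the notes:

```python
def binding_encapsulation(vlan_id):
    """Encapsulation type implied by the VLAN ID of a hardware binding,
    per the notes above: 0 means untagged; VLAN 4094 is not allowed when
    interworking with the NSX controller."""
    if not 0 <= vlan_id <= 4093:
        raise ValueError(
            "VLAN 4094 (and out-of-range IDs) cannot be used when "
            "interworking with the VMware NSX controller")
    return "Untag" if vlan_id == 0 else "Dot1q"

assert binding_encapsulation(10) == "Dot1q"   # this example's binding
assert binding_encapsulation(0) == "Untag"    # directly connected server
```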

    # Check whether the Leaf4 stack system successfully establishes the VXLAN tunnel with server 1, replicator 1, and replicator 2.

    When State is up, the VXLAN tunnel is successfully established.

    [~Leaf4-1] display vxlan tunnel
    2018-04-07 19:48:13.264
    Number of vxlan tunnel : 3
    Tunnel ID   Source                Destination           State  Type     Uptime
    -----------------------------------------------------------------------------------
    4026531841  2.2.2.2               192.168.40.12         up     static   03:52:21
    4026531842  2.2.2.2               192.168.40.13         up     static   03:52:21
    4026531843  2.2.2.2               192.168.40.14         up     static   03:52:21
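When this check is automated, the `display vxlan tunnel` output can be parsed to confirm that every planned peer is up. A sketch against the sample output above; the function name and the expected peer set are this example's plan:

```python
import re

# Sample "display vxlan tunnel" output as captured in this example.
DISPLAY_VXLAN_TUNNEL = """\
Number of vxlan tunnel : 3
Tunnel ID   Source                Destination           State  Type     Uptime
-----------------------------------------------------------------------------------
4026531841  2.2.2.2               192.168.40.12         up     static   03:52:21
4026531842  2.2.2.2               192.168.40.13         up     static   03:52:21
4026531843  2.2.2.2               192.168.40.14         up     static   03:52:21
"""

def tunnel_states(output):
    """Return {destination_ip: state} from 'display vxlan tunnel' output."""
    states = {}
    for line in output.splitlines():
        # A tunnel row: tunnel ID, source, destination IP, then state.
        m = re.match(r"\d+\s+\S+\s+(\d+\.\d+\.\d+\.\d+)\s+(\S+)", line)
        if m:
            states[m.group(1)] = m.group(2)
    return states

states = tunnel_states(DISPLAY_VXLAN_TUNNEL)
expected = {"192.168.40.12", "192.168.40.13", "192.168.40.14"}
assert set(states) == expected and all(s == "up" for s in states.values())
```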

    # Configure the centralized replication list for the VXLAN.

    1. Choose Networking & Security > Service Definitions > Hardware Devices and select NSX Manager with address 192.168.60.240.

    2. Select the added hardware switch. Choose Replication Cluster > Edit. The Edit Replication Cluster Configuration dialog box is displayed. Confirm the setting, and click OK.

    # Check BFD connection status.

    [~Leaf4-1] display bfd session all
    S: Static session
    D: Dynamic session
    IP: IP session
    IF: Single-hop session
    PEER: Multi-hop session
    LDP: LDP session
    LSP: Label switched path
    TE: Traffic Engineering
    AUTO: Automatically negotiated session
    VXLAN: VXLAN session
    (w): State in WTR
    (*): State is invalid
    Total UP/DOWN Session Number : 2/0
    --------------------------------------------------------------------------------
    Local      Remote     PeerIpAddr      State     Type        InterfaceName 
    --------------------------------------------------------------------------------
    16385      16385      192.168.40.13   Up        S/AUTO-VXLAN       - 
    16386      16385      192.168.40.14   Up        S/AUTO-VXLAN       - 
    --------------------------------------------------------------------------------

    # Check the centralized replication information about the VXLAN.

    [~Leaf4-1] display vxlan flood-vtep vni 5000
    Number of peers : 2
    Vni ID    Source             Destination       Type       Status
    ----------------------------------------------------------------------
    5000      1.1.1.1            192.168.40.13     static     primary          
    5000      1.1.1.1            192.168.40.14     static     backup   

Configuration File

Configuration file of the Leaf4 stack system

#
sysname Leaf4-1
#
ssl policy nsx
 certificate load pem-cert vtep-cert.pem key-pair rsa key-file vtep-privkey.pem auth-code cipher %^%#<`c/:cbTs/'sK\S+ct)8ia_d!Ukn|&7pOM!5|dT6%^%#
#
ovsdb server
 ssl ssl-policy nsx
 controller ip 192.168.60.240 port 6640 max-backoff 8000 inactivity-probe 60000
 source ip 2.2.2.2
 ovsdb server enable
 smooth enable
#
bridge-domain 5000
 ovsdb enable
 vxlan vni 5000
#
stack
 #
 stack member 1 domain 10
 stack member 1 priority 150
 #
 stack member 2 domain 10
 stack member 2 priority 120
#
interface Eth-Trunk100.1 mode l2
 encapsulation dot1q vid 10
 bridge-domain 5000
#
interface Stack-Port1/1
#
interface Stack-Port2/1
#
interface 10GE1/0/2
 eth-trunk 100 
#
interface 10GE1/0/3
 port mode stack
 stack-port 1/1
 port crc-statistics trigger error-down
#
interface 10GE1/0/4
 port mode stack
 stack-port 1/1
 port crc-statistics trigger error-down
#
interface 10GE1/0/5
 port mode stack
 stack-port 1/1
 port crc-statistics trigger error-down
#
interface 10GE1/0/6
 port mode stack
 stack-port 1/1
 port crc-statistics trigger error-down
#
interface 10GE2/0/2
 eth-trunk 100 
#
interface 10GE2/0/3
 port mode stack
 stack-port 2/1
 port crc-statistics trigger error-down
#
interface 10GE2/0/4
 port mode stack
 stack-port 2/1
 port crc-statistics trigger error-down
#
interface 10GE2/0/5
 port mode stack
 stack-port 2/1
 port crc-statistics trigger error-down
#
interface 10GE2/0/6
 port mode stack
 stack-port 2/1
 port crc-statistics trigger error-down
#
interface LoopBack1
 ip address 1.1.1.1 255.255.255.255
#
interface LoopBack2
 ip address 2.2.2.2 255.255.255.255
#
interface Nve1
 source 1.1.1.1
 vni 5000 head-end peer-list 192.168.40.12
 vni 5000 head-end peer-list 192.168.40.13
 vni 5000 head-end peer-list 192.168.40.14
 vni 5000 flood-vtep 192.168.40.13
 vni 5000 flood-vtep 192.168.40.14
#
bfd vxlanc0a83c0a bind vxlan peer-ip 192.168.40.13 source-ip 1.1.1.1 peer-mac 0000-0000-0000 auto
#
bfd vxlanc0a83c0b bind vxlan peer-ip 192.168.40.14 source-ip 1.1.1.1 peer-mac 0000-0000-0000 auto
#
ospf 1
 area 0.0.0.0
  network 1.1.1.1 0.0.0.0
  network 2.2.2.2 0.0.0.0
#
return
Updated: 2019-04-20

Document ID: EDOC1100075365
