Key Processes for Multicast VPN Implementation

Multicast VPN implementation involves the following processes:

  • Share-MDT establishment on the public network

    After multicast VPN is configured, a Share-MDT is automatically established. VPN multicast packets are transmitted along this Share-MDT.

  • Encapsulation and decapsulation of VPN multicast packets on PE devices

    Upon receiving VPN multicast packets from a CE device, a PE device uses GRE to encapsulate the packets into public network data packets and forwards the encapsulated packet along the Share-MDT. Other PE devices decapsulate the packets and determine whether to forward these packets to connected CE devices.

  • Reverse path forwarding (RPF) checks on PE devices

    For CE and P devices, the RPF check mechanism does not change after multicast VPN is configured. That is, the RPF interface is the outbound interface of the unicast route to the multicast source address, and the RPF neighbor is the next hop of the unicast route. For PE devices, the RPF check mechanism used in the public network instance remains the same after multicast VPN is configured. In a VPN instance, the RPF interface and neighbor are defined based on the outbound interface type of the unicast route to the source address.

  • VPN multicast packet transmission on the public network

    The transmission procedure of VPN multicast packets varies depending on the PIM protocol used on the public network.

  • Switch-MDT switchover

    When multicast data packets sent from a multicast source are transmitted along a Share-MDT, all PE devices receive these packets regardless of whether there are receivers at the connected sites. When a Switch-MDT is used, only PE devices with receivers attached receive data packets from the multicast source. This reduces the burden on PE devices.

Share-MDT Establishment

A Share-MDT uses a Share-Group address, which uniquely identifies the Share-MDT of a VPN on the public network. The public network can run either PIM-SM or PIM-DM, and the Share-MDT establishment process differs between the two PIM modes.

NOTE:
  • The multicast source address of a Share-MDT (MTI IP address) must be the IP address of the interface used to establish IBGP connections to other PEs. In most cases, the multicast source address of a Share-MDT is the IP address of a loopback interface.
  • The multicast group address of a Share-MDT (Share-Group address) must be defined before multicast VPN deployment. An MD must have the same group address on all PE devices, and different MDs must have different group addresses.
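
The two addressing rules in this note can be expressed as a simple consistency check. The following Python sketch is illustrative only; the function name, MD names, and group addresses are hypothetical examples, not device configuration:

# Sketch of the Share-Group addressing rules: every PE in an MD must use the
# same Share-Group address, and no two MDs may use the same address.

def validate_share_groups(md_to_pe_groups):
    """md_to_pe_groups maps an MD name to {PE name: configured Share-Group}."""
    md_groups = {}
    for md, pe_groups in md_to_pe_groups.items():
        groups = set(pe_groups.values())
        if len(groups) != 1:
            raise ValueError(f"{md}: PEs disagree on the Share-Group address {groups}")
        md_groups[md] = groups.pop()
    if len(set(md_groups.values())) != len(md_groups):
        raise ValueError("two MDs are using the same Share-Group address")
    return md_groups

# Example: one MD per VPN, each with a consistent, unique Share-Group address.
print(validate_share_groups({
    "VPNA": {"PE1": "239.1.1.1", "PE2": "239.1.1.1", "PE3": "239.1.1.1"},
    "VPNB": {"PE1": "239.2.2.2", "PE2": "239.2.2.2"},
}))
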
Establishing a Share-MDT on a PIM-SM Network

As shown in Figure 7-6, PIM-SM runs on the public network. The P device functions as the rendezvous point (RP).

Figure 7-6  Establishing a Share-MDT on a PIM-SM network

Using Figure 7-6 as an example, the process for establishing a Share-MDT is as follows:

  1. PE1 sends a Join message to the RP on the public network through the public network instance. The Join message uses the Share-Group address as the multicast group address. Devices that receive the Join message along the path create the (*, 239.1.1.1) entry. PE2 and PE3 also send Join messages to the RP. An RPT is established in the MD, with the RP as the root and PE1, PE2, and PE3 as leaves.

  2. PE1 sends a Register message to the RP through the public network instance. In the Register message, the source address is the MTI address and the group address is the Share-Group address. The RP creates the (10.1.1.1, 239.1.1.1) entry after receiving the Register message. PE2 and PE3 also send Register messages to the RP. Three independent SPTs connecting PE devices to the RP are established in the MD.

On the PIM-SM network, an RPT (*, 239.1.1.1) and the three SPTs form a Share-MDT.
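
For reference, the entries created in these two steps can be summarized with the following Python sketch. It is a conceptual model based on Figure 7-6, not device output; PE3's MTI address (10.1.3.1) is assumed for illustration.

# Entries that make up the Share-MDT on the PIM-SM public network (Figure 7-6).
SHARE_GROUP = "239.1.1.1"

# Step 1: an RPT rooted at the RP, with PE1, PE2, and PE3 as leaves.
rpt_entry = {"source": "*", "group": SHARE_GROUP,
             "root": "RP", "leaves": ["PE1", "PE2", "PE3"]}

# Step 2: one SPT per PE, each rooted at that PE's MTI address.
spt_entries = [
    {"source": "10.1.1.1", "group": SHARE_GROUP, "root": "PE1"},
    {"source": "10.1.2.1", "group": SHARE_GROUP, "root": "PE2"},
    {"source": "10.1.3.1", "group": SHARE_GROUP, "root": "PE3"},  # assumed address
]

# The RPT plus the three SPTs together form the Share-MDT of this MD.
share_mdt = {"rpt": rpt_entry, "spts": spt_entries}
print(len(share_mdt["spts"]), "SPTs plus one RPT form the Share-MDT")
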

Establishing a Share-MDT on a PIM-DM Network

As shown in Figure 7-7, PIM-DM runs on the public network.

Figure 7-7  Establishing a Share-MDT on a PIM-DM network

The process for establishing a Share-MDT is as follows:

  1. PE1 initiates a flood-prune process in the public network instance, using the IP address of the MTI (the interface used for establishing IBGP peer relationships) as the multicast source address, the Share-Group address as the multicast group address, and PE2 and PE3 as group members.
  2. Devices along the path create the (10.1.1.1, 239.1.1.1) entry. An SPT is established, with PE1 as the root, and PE2 and PE3 as leaves. PE2 and PE3 also initiate flood-prune processes, through which two more SPTs are established.

On the PIM-DM network, these three independent SPTs form a Share-MDT.

Encapsulation and Decapsulation of VPN Multicast Packets

Multicast packets converted and transmitted during implementation of multicast VPN include the following:

  • Public network protocol packets: used to establish MDTs on the public network but not to encapsulate VPN packets.
  • Public network data packets: used to encapsulate VPN data and protocol packets, enabling VPN packets to be transparently transmitted in an MD.
  • VPN protocol packets: used to establish MDTs across the public network.
  • VPN data packets: used to carry VPN multicast data.
Encapsulation and Decapsulation Process

VPN packets, including protocol packets and data packets, are encapsulated into public network data packets using GRE. When a PE device receives VPN packets from a CE device, it encapsulates these packets into public network data packets, using the BGP interface address as the source address and the Share-Group address as the group address, and then delivers the encapsulated packets to the public network instance. All PE devices bound to the same VPN instance receive the public network data packets and decapsulate them. If a PE device is connected to a site with group members, it forwards the packets; if not, it discards them.

Figure 7-8 shows the process of encapsulating and decapsulating VPN data packets.

Figure 7-8  Encapsulation and decapsulation of VPN multicast packets

As shown in Figure 7-8, a VPN data packet encapsulation and decapsulation process follows these steps:

  1. CE1 sends a VPN data packet (192.168.1.1, 225.1.1.1).
  2. When PE1 receives the packet, it uses GRE to encapsulate the packet, with 10.1.1.1 as the source address and 239.1.1.1 as the group address. The packet then becomes a public network data packet (10.1.1.1, 239.1.1.1) and is forwarded on the public network.
  3. Upon receiving the packet, PE2 and PE3 decapsulate the packet into a VPN packet (192.168.1.1, 225.1.1.1).
  4. PE2 finds receivers at the VPN site connected to it and forwards the packet to CE2.
  5. The site connected to PE3 has no receivers, so PE3 drops the packet.

VPN protocol packets (such as Join messages) from CE2 are also encapsulated into public network data packets and transmitted on the public network.
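
The encapsulation on the ingress PE and the forward-or-drop decision on the egress PEs (steps 2 through 5) can be modeled with the following Python sketch. It is a simplified illustration, not an actual GRE implementation; the site_has_receivers flag stands in for the PE's knowledge of whether its connected site has group members.

# Conceptual sketch of steps 2-5: GRE encapsulation on the ingress PE and
# the forward/drop decision on egress PEs. Addresses follow Figure 7-8;
# the packet model is simplified.

def encapsulate(vpn_packet, mti_address, share_group):
    """Ingress PE: wrap a VPN multicast packet into a public network packet."""
    return {"src": mti_address,        # e.g. 10.1.1.1, the MTI/IBGP address
            "grp": share_group,        # e.g. 239.1.1.1, the Share-Group
            "payload": vpn_packet}     # original (192.168.1.1, 225.1.1.1) packet

def decapsulate_and_forward(public_packet, site_has_receivers):
    """Egress PE: restore the VPN packet, then forward or drop it."""
    vpn_packet = public_packet["payload"]
    if site_has_receivers:
        return vpn_packet              # PE2: receivers exist, forward to CE2
    return None                        # PE3: no receivers, drop the packet

vpn_pkt = {"src": "192.168.1.1", "grp": "225.1.1.1"}
pub_pkt = encapsulate(vpn_pkt, mti_address="10.1.1.1", share_group="239.1.1.1")
assert decapsulate_and_forward(pub_pkt, site_has_receivers=True) == vpn_pkt   # PE2
assert decapsulate_and_forward(pub_pkt, site_has_receivers=False) is None     # PE3
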

RPF Checks on PE Devices

If multicast VPN is not configured, network devices select an optimal route to a multicast source from their routing table as the RPF route. The RPF route contains the RPF interface (outbound interface of the unicast route) and RPF neighbor (next hop of the unicast route). The RPF neighbor information is used to construct PIM Join/Prune messages, and the RPF interface information is used for RPF checks. During an RPF check, a multicast device checks whether the packet is received from the RPF interface. If so, the packet passes the RPF check and can be forwarded through the outbound interface. If not, the packet fails the RPF check and is dropped. For details about RPF checks, see RPF Check.
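
The basic RPF check can be illustrated with the following Python sketch. It is a minimal model in which a pre-built table stands in for the unicast routing table; the interface names and addresses are hypothetical.

# Minimal sketch of an RPF check on a device without multicast VPN: the RPF
# interface is the outbound interface of the unicast route to the source, and
# a packet is accepted only if it arrives on that interface.

unicast_routes = {
    # source prefix -> (outbound interface, next hop); illustrative values
    "192.168.1.0/24": ("if1", "10.0.0.1"),
}

def rpf_check(source_prefix, arrival_interface):
    rpf_interface, rpf_neighbor = unicast_routes[source_prefix]
    # The RPF neighbor (next hop) is where Join/Prune messages are sent;
    # the RPF interface is what the arriving packet is checked against.
    return arrival_interface == rpf_interface

print(rpf_check("192.168.1.0/24", "if1"))  # True: passes the check, forwarded
print(rpf_check("192.168.1.0/24", "if2"))  # False: fails the check, dropped
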

After multicast VPN is configured, CE and P devices are not aware of VPNs, so the RPF check mechanism used on them remains unchanged. That is, the RPF interface is the outbound interface of a unicast route to the source address, and the RPF neighbor is the next hop of the unicast route.

For PE devices, the RPF check mechanism used in the public network instance remains unchanged after multicast VPN configuration. For VPN instances, RPF information needs to be redefined based on the outbound interface of a unicast route, so that VPN packets can pass RPF checks and multicast distribution trees (MDTs) can be established for VPN instances across the public network.

VPN Interface as the Outbound Interface of the Unicast Route

If the outbound interface of the unicast route to a multicast source is a VPN interface, RPF information remains the same after multicast VPN is configured.

As shown in Figure 7-9, PE1 has a unicast route to the multicast source 192.168.1.1/24, with VPN interface PE1-if1 as the outbound interface and the IP address of CE1-if1 as the next hop address. In this case, PE1-if1 is the RPF interface and CE1 is the RPF neighbor. When PE1 receives packets from 192.168.1.1 on PE1-if1, the packets pass the RPF check and are forwarded through the outbound interface.

Figure 7-9  RPF information when the outbound interface is a VPN interface

Public Network Interface as the Outbound Interface of the Unicast Route

When the outbound interface of the unicast route to a multicast source is a public network interface, the RPF interface is the MTI on the local PE. A remote PE meeting the following conditions is the RPF neighbor:

  • It is the next hop of the BGP route from the local PE to the multicast source.
  • It is the PIM neighbor of the local PE.
NOTE:

A neighbor relationship is established between PE devices through MTIs. The PIM neighbor address is the MTI IP address on the peer PE. Therefore, a PE becomes the RPF neighbor only when its MTI IP address is the same as that used for establishing IBGP connections.

As shown in Figure 7-10, PE2 has a route to the multicast source 192.168.1.1/24, with public network interface PE2-if2 as the outbound interface and 10.1.1.1/24 (PE1) as the next hop. PE1 is the next hop of the BGP route from PE2 to the multicast source and the PIM neighbor of PE2. Therefore, PE1 is the RPF neighbor of PE2, and the MTI on PE2 is the RPF interface. When PE2 receives packets from 192.168.1.1 on the MTI, the packets pass the RPF check and are forwarded through the outbound interface.

Figure 7-10  RPF information when the outbound interface is a public network interface
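
The two cases can be combined into a single selection rule, sketched below in Python. The interface and address values follow Figures 7-9 and 7-10; the is_vpn_interface and is_pim_neighbor flags are simplifications that stand in for the PE's route and neighbor state.

# Sketch of how a PE derives RPF information in a VPN instance, based on
# whether the outbound interface of the unicast route to the source is a
# VPN interface or a public network interface.

def vpn_rpf_info(outbound_interface, is_vpn_interface, next_hop,
                 bgp_next_hop_pe=None, is_pim_neighbor=False):
    if is_vpn_interface:
        # Figure 7-9: the RPF interface is the VPN interface itself (PE1-if1),
        # and the RPF neighbor is the unicast next hop (CE1).
        return {"rpf_interface": outbound_interface, "rpf_neighbor": next_hop}
    # Figure 7-10: the route points into the public network, so the RPF
    # interface is the MTI, and the RPF neighbor is the remote PE that is
    # both the BGP next hop toward the source and a PIM neighbor on the MTI.
    if bgp_next_hop_pe is not None and is_pim_neighbor:
        return {"rpf_interface": "MTI", "rpf_neighbor": bgp_next_hop_pe}
    return None  # no valid RPF neighbor; packets from this source fail RPF

print(vpn_rpf_info("PE1-if1", True, next_hop="CE1-if1"))
print(vpn_rpf_info("PE2-if2", False, next_hop="10.1.1.1",
                   bgp_next_hop_pe="PE1", is_pim_neighbor=True))
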

VPN Multicast Packet Transmission on the Public Network

Packet transmission on the public network is transparent to VPN instances. A VPN instance only needs to send VPN packets from a multicast tunnel interface (MTI) to a remote PE. The process of forwarding packets through a multicast distribution tree (MDT) on the public network is complex. MDT transmission starts after a Share-MDT is established.

Transmission on a Share-MDT

Transmission on a Share-MDT follows these steps:

  1. Multicast packets are sent from a VPN instance to the MTI of a PE.

  2. The PE cannot identify whether these VPN multicast packets are protocol or data packets. It encapsulates the VPN multicast packets into public network multicast data packets, using the MTI address (interface address used for establishing the IBGP connection) as the source address and the Share-Group address as the group address.

  3. The PE forwards the encapsulated data packets to the public network instance. These packets are then sent to the public network through the public network instance.

  4. The public network multicast data packets are forwarded along the Share-MDT until they reach the public network instance on the remote PE.

  5. The remote PE decapsulates the public network multicast data packets into VPN multicast packets and forwards them to the VPN instance.

For multicast data transmission along a Switch-MDT, see Switch-MDT Switchover.

VPN Multicast Protocol Packet Transmission

When PIM-DM runs on a VPN network:

  • Hello messages are exchanged between MTIs to establish PIM neighbor relationships.

  • PE devices trigger flood-prune processes across the public network to establish a shortest path tree (SPT).

When PIM-SM runs on a VPN network:

  • Hello messages are exchanged between MTIs to establish PIM neighbor relationships.

  • If receivers and the VPN RP belong to different sites, PE devices send Join messages across the public network to create a rendezvous point tree (RPT).

  • If the multicast source and the VPN RP belong to different sites, PE devices send Register messages across the public network to create an SPT.

Figure 7-11 shows an example of transmitting multicast protocol packets along a Share-MDT. In this example, the public and VPN networks run PIM-SM, and receivers on the VPN send Join messages across the public network.

In Figure 7-11, Receiver on VPNA belongs to Site2 and is directly connected to CE2. CE1 is the RP of group G (225.1.1.1) and belongs to Site1.

Figure 7-11  Multicast protocol packet transmission

The process of transmitting multicast protocol packets along the Share-MDT is as follows:

  1. The receiver instructs CE2 to receive and forward data of group 225.1.1.1 using the Internet Group Management Protocol (IGMP). CE2 creates the (*, 225.1.1.1) entry and sends a Join message to the VPN RP (CE1).

  2. The VPN instance on PE2 receives the Join message sent by CE2, creates the (*, 225.1.1.1) entry, and specifies the RPF interface (MTI) as the upstream interface. The VPN instance on PE2 considers that the Join message has been sent from the MTI.

  3. Before sending the Join message to the P device, PE2 uses Generic Routing Encapsulation (GRE) to encapsulate the message, with the MTI address as the source address and the Share-Group address as the group address. The packet then becomes a public network multicast data packet (10.1.2.1, 239.1.1.1). PE2 forwards the multicast data packet to the public network instance.

  4. The multicast data packet (10.1.2.1, 239.1.1.1) is forwarded to the public network instance on each PE along the Share-MDT. Each PE decapsulates the packet to restore the Join message destined for the VPN RP. PE devices check the RP information carried in the Join message. PE1 finds that the RP belongs to the directly connected site and sends the Join message to the VPN instance. The VPN instance on PE1 considers that the message is obtained on the MTI. Then PE1 creates the (*, 225.1.1.1) entry, specifies the MTI as the downstream interface and the interface facing CE1 as the upstream interface, and sends the message to CE1. PE3 drops the Join message because the VPN RP does not belong to a site connected to it. (The sketch after this list illustrates this per-PE decision.)

  5. When receiving the Join message, CE1 updates or creates the (*, 225.1.1.1) entry. A VPN multicast RPT is now established across the public network.
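
The per-PE decision in step 4, where only the PE whose connected site contains the VPN RP acts on the decapsulated Join message, can be sketched as follows. This is a conceptual Python model; the rp_is_local flag stands in for the PE's check of the RP information carried in the Join message.

# Sketch of step 4: every PE on the Share-MDT decapsulates the Join message,
# but only the PE whose connected site contains the VPN RP installs state
# and forwards the Join toward the RP; the others drop it.

def handle_decapsulated_join(pe_name, join_group, rp_is_local, vpn_mrib):
    if not rp_is_local:
        return f"{pe_name}: RP not at a connected site, Join dropped"   # PE3
    # PE1: create (*, G), with the MTI as the downstream interface and the
    # CE-facing interface as the upstream interface, then forward the Join
    # to the VPN RP (CE1).
    vpn_mrib[("*", join_group)] = {"upstream": "to-CE1", "downstream": ["MTI"]}
    return f"{pe_name}: (*, {join_group}) created, Join forwarded to CE1"

mrib = {}
print(handle_decapsulated_join("PE1", "225.1.1.1", rp_is_local=True,  vpn_mrib=mrib))
print(handle_decapsulated_join("PE3", "225.1.1.1", rp_is_local=False, vpn_mrib=mrib))
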

VPN Multicast Data Packet Transmission

When PIM-DM runs on a VPN, VPN multicast data packets are transmitted along a VPN SPT across the public network.

When PIM-SM runs on a VPN:

  • If receivers and the VPN RP belong to different sites, VPN multicast data packets are transmitted across the public network along the VPN RPT.

  • If the multicast source and receivers belong to different sites, the VPN multicast data packets are transmitted across the public network along the VPN SPT.

Figure 7-12 shows an example of transmitting VPN multicast data packets along a Share-MDT. In this example, the public and VPN networks run PIM-DM, and an SPT is used to transmit multicast data packets across the public network.

In Figure 7-12, the source on VPNA sends multicast data packets to group 225.1.1.1, and the receiver is directly connected to CE2 and belongs to Site2 of VPNA.

Figure 7-12  Multicast data packet transmission

The process of transmitting VPN multicast data packets along the Share-MDT is as follows:

  1. The source sends a VPN multicast data packet (192.168.1.1, 225.1.1.1) to CE1.

  2. CE1 forwards the packet to PE1 along the SPT. PE1 searches for the matching forwarding entry in the VPN instance. If the list of outbound interfaces in the forwarding entry contains the MTI, PE1 forwards the VPN multicast data to the P device for further processing. The VPN instance on PE1 considers that the VPN multicast data has been sent from the MTI.

  3. Before sending the VPN multicast data packet to the P device, PE1 uses GRE to encapsulate the packet, using the MTI address as the source address and the Share-Group address as the group address. The packet then becomes a public network multicast data packet (10.1.1.1, 239.1.1.1) and is forwarded to the public network instance.

  4. The multicast data packet is forwarded to the public network instance on each PE along the Share-MDT. Each PE decapsulates the packet to restore the VPN multicast data packet and sends the packet to the VPN instance. If a PE has a downstream interface of the SPT, the PE forwards the VPN multicast data packet. If a PE has no downstream interface of the SPT, the PE drops the packet.

  5. PE2 searches for the forwarding entry in the VPN instance and sends the VPN multicast data packet to the receiver. Transmission of this VPN multicast data packet is complete.

Switch-MDT Switchover

When multicast data packets of a VPN instance are transmitted along a Share-MDT, all PE devices bound to the VPN instance will receive these packets, regardless of whether receivers exist at the sites connected to the PE devices. When the rate of VPN multicast data packets is high, a flood of multicast data may occur on the public network. This consumes excessive network bandwidth and increases resource consumption on PE devices.

To solve this problem, an optimized solution, the Switch-MDT, was developed. This section describes the Switch-MDT implementation under the assumption that a Share-MDT has been established.

Switchover from Share-MDT to Switch-MDT
  1. You can configure a Switch-MDT switchover policy to specify that a Switch-MDT switchover is triggered when either or both of the following requirements are met:

    • VPN multicast data packets are permitted by advanced ACL rules. If you are aware that the rate of packets sent from a multicast source or to a multicast group is high, you can specify a group address or source address range for Switch-MDT forwarding in an advanced ACL rule.

    • The rate of VPN multicast data packets stays above the switchover threshold for a specified period of time.

      In some cases, the rate of VPN multicast data packets fluctuates above and below the switchover threshold. To prevent frequent switchovers between the Share-MDT and Switch-MDT, the system does not trigger a switchover immediately after detecting that the multicast data traffic rate is higher than the switchover threshold. Instead, the system starts the Switch-Delay timer and continues detecting the traffic rate before the timer expires. If the rate remains higher than the switchover threshold when the timer expires, the transmission path of multicast data packets is switched to the Switch-MDT. If the rate falls below the switchover threshold, packets are still forwarded along the Share-MDT. (The sketch after this list illustrates this decision.)

  2. The source PE is assigned an unused Switch-Group address from the Switch-Group-Pool. It periodically sends switchover notification packets to downstream PE devices along the Share-MDT. A switchover notification packet carries the VPN multicast source address, VPN multicast group address, and Switch-Group address.
  3. When receiving a switchover notification packet, downstream PE devices check whether there are receivers at the connected sites. If receivers exist, they send a PIM Join message to join the Switch-MDT with the Switch-Group address as the group address and the source PE as the root. If no receiver exists, the PE devices cache the switchover notification packet and join the Switch-MDT when receivers appear.
  4. The source PE sends an MDT switchover packet and waits for the timeout period of the Switch-Delay timer. If the switchover conditions are still met when the Switch-Delay timer expires, the source PE starts to encapsulate VPN multicast data using the Switch-Group address. Multicast data packets are then transmitted along the Switch-MDT.

    The Switch-Delay timer gives downstream PE devices time to join the Switch-MDT, minimizing data loss during a switchover. The Switch-Delay timer value is configurable.

  5. After the Switch-MDT switchover is complete, the source PE still periodically sends switchover notification packets, allowing new PE devices to join the Switch-MDT. When no receiver exists at the sites connected to the downstream PE devices, the PE devices can leave the Switch-MDT.
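
The switchover decision on the source PE can be sketched as follows. This Python model is conceptual only: ACL matching, rate measurement, and the Switch-Delay timer are reduced to plain values, and the pool addresses and 100 kbit/s threshold are illustrative examples, not defaults.

# Conceptual sketch of the Share-MDT to Switch-MDT switchover decision on the
# source PE.

unused_switch_groups = ["238.1.1.1", "238.1.1.2", "238.1.1.3"]  # Switch-Group-Pool

def switchover_decision(acl_permits, rate_samples_kbps, threshold_kbps=100):
    """Return the Switch-Group address to use, or None to stay on the Share-MDT.

    The rate samples represent measurements taken while the Switch-Delay timer
    is running: the switchover goes ahead only if the ACL permits the flow and
    the rate stays above the threshold for the whole period.
    """
    if not acl_permits:
        return None
    if not all(rate > threshold_kbps for rate in rate_samples_kbps):
        return None                                  # rate dipped: keep the Share-MDT
    switch_group = unused_switch_groups.pop(0)       # assign an unused address
    # The source PE then advertises (VPN source, VPN group, switch_group) in
    # periodic switchover notification packets sent along the Share-MDT.
    return switch_group

print(switchover_decision(True, [150, 180, 160]))   # '238.1.1.1': switch to Switch-MDT
print(switchover_decision(True, [150, 80, 160]))    # None: stay on the Share-MDT
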

Figure 7-13 shows a Share-MDT to Switch-MDT switchover process.

Figure 7-13  Share-MDT to Switch-MDT switchover

In Figure 7-13, before the Switch-MDT is configured, PE1 encapsulates a VPN multicast data packet (192.168.1.1, 225.1.1.1) into a public network data packet (10.1.1.1, 239.1.1.1) and sends the packet along the Share-MDT. PE2 and PE3 receive the packet and decapsulate it. PE3 drops the packet because there are no receivers at the site connected to it. The site connected to PE2 has receivers, so PE2 sends the decapsulated VPN multicast data packet to CE2.

After the Switch-MDT switchover condition and Switch-Group-Pool are configured on PE1, PE1 monitors packets sent from the multicast source. When the switchover conditions are met, PE1 selects group address 238.1.1.1 from the Switch-Group-Pool and periodically sends switchover notification packets to other PE devices along the Share-MDT.

The site connected to PE2 has receivers, so PE2 sends a PIM Join message to join group 238.1.1.1 and establish the Switch-MDT. The site connected to PE3 has no receivers, so PE3 does not join the Switch-MDT. As a result, only PE2 receives the public network data packets (10.1.1.1, 238.1.1.1) encapsulated from VPN multicast data packets (192.168.1.1, 225.1.1.1).

NOTE:

After a switchover from the Share-MDT to the Switch-MDT, only multicast data packets are transmitted along the Switch-MDT. Multicast protocol packets are still transmitted along the Share-MDT.

Switchback from the Switch-MDT to the Share-MDT

If the switchover conditions are no longer met when VPN multicast data packets are transmitted along the Switch-MDT, PE1 switches multicast data packets back to the Share-MDT. The switchback is performed in any of the following circumstances (summarized in the sketch after the list):

  • The rate of VPN multicast data packets stays below the switchover threshold during the timeout period of the Switch-Holddown timer.

    In some cases, the rate of VPN multicast data packets fluctuates above and below the switchover threshold. To prevent frequent switchover between the Switch-MDT and Share-MDT, the system does not perform a switchover immediately upon detecting that the forwarding rate is lower than the switchover threshold. Instead, the system starts the Switch-Holddown timer and continues detecting the data forwarding rate before the timer expires. If the rate remains lower than the switchover threshold when the Switch-Holddown timer expires, the transmission path of data packets is switched to the Share-MDT. If the rate becomes higher than the switchover threshold, the packets are still forwarded along the Switch-MDT. The Switch-Holddown timer value is configurable.

  • The Switch-Group-Pool is changed, and the Switch-Group address used for encapsulating VPN multicast data packets does not exist in the new Switch-Group-Pool.

  • The advanced ACL rules controlling the switchover to the Switch-MDT are changed, and the new ACL rules deny VPN multicast data packets.
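
The three switchback conditions can be summarized with the following Python sketch. As with the switchover sketch, it is conceptual only; the Switch-Holddown timer is modeled as a window of rate samples, and all values are illustrative.

# Conceptual sketch of the Switch-MDT to Share-MDT switchback decision.
# The three conditions mirror the bullets above.

def should_switch_back(rate_samples_kbps, threshold_kbps,
                       switch_group, switch_group_pool, acl_permits):
    # 1. Rate stayed below the threshold for the whole Switch-Holddown period.
    if all(rate < threshold_kbps for rate in rate_samples_kbps):
        return True
    # 2. The Switch-Group address in use is no longer in the Switch-Group-Pool.
    if switch_group not in switch_group_pool:
        return True
    # 3. The new ACL rules deny the VPN multicast data packets.
    if not acl_permits:
        return True
    return False

print(should_switch_back([40, 60, 50], 100, "238.1.1.1",
                         ["238.1.1.1", "238.1.1.2"], acl_permits=True))   # True
print(should_switch_back([150, 160, 170], 100, "238.1.1.1",
                         ["238.1.1.1", "238.1.1.2"], acl_permits=True))   # False
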
