Configuration Guide - IP Multicast

CloudEngine 12800 and 12800E V200R005C10

This document describes the configurations of IP multicast, including IP multicast basics, IGMP, MLD, PIM (IPv4), PIM (IPv6), MSDP, multicast VPN, multicast route management (IPv4), multicast route management (IPv6), IGMP snooping, MLD snooping, static multicast MAC address, multicast VLAN, and multicast network management.

Key Technologies for Multicast VPN Implementation

The following key technologies are used for implementing multicast VPN:

  • Share-MDT establishment on the public network: After multicast VPN is configured, a Share-MDT is automatically established, along which VPN multicast packets are transmitted.
  • Encapsulation and decapsulation of VPN multicast packets on PEs: Upon receiving a VPN multicast packet from a CE, a PE uses GRE to encapsulate the packet into a public network data packet and forwards the encapsulated packet along the Share-MDT. Upon receiving the packet, other PEs decapsulate the packet and determine whether to forward the packet to their connected CEs.
  • Reverse path forwarding (RPF) check on PEs: CE and P devices use the same RPF check mechanism before and after multicast VPN is configured. That is, the RPF interface is the outbound interface of the unicast route to the source address, and the RPF neighbor is the next hop of that route. On PEs, the RPF check mechanism used in the public network instance also remains the same before and after multicast VPN is configured. In a VPN instance, however, the RPF interface and neighbor are defined based on the type of outbound interface of the unicast route to the source address.
  • VPN multicast packet transmission on the public network: VPN multicast packets are transmitted in different procedures based on the PIM protocols used on the public network.
  • Switch-MDT switchover: When the Share-MDT is used, all PEs receive packets from the multicast source regardless of whether receivers exist at their connected sites. After a switchover to the Switch-MDT, only PEs connected to sites where receivers exist receive packets from the multicast source, reducing the burden on PEs.

Share-MDT Establishment

A Share-MDT uses the Share-Group address as the multicast group address. A Share-Group address uniquely identifies a Share-MDT on a VPN.

NOTE:
  • The multicast source address of a Share-MDT (MTI IP address) must be the IP address of the interface used to establish IBGP connections to other PEs. In most cases, the multicast source address of a Share-MDT is the IP address of a loopback interface.
  • The multicast group address of a Share-MDT (Share-Group address) must be planned before multicast VPN deployment. The Share-Group address of an MD must be the same on all PEs, but different MDs must use different group addresses.
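
These planning rules can be checked mechanically before deployment. The following minimal Python sketch is illustrative only (the md_configs structure and function name are hypothetical, not a device feature); it verifies that all PEs in an MD use the same Share-Group address and that no two MDs share one:

    # Hypothetical data model: MD name -> {PE name: configured Share-Group}.
    def validate_share_groups(md_configs):
        seen = {}  # Share-Group address -> MD that already uses it
        for md, pe_groups in md_configs.items():
            groups = set(pe_groups.values())
            if len(groups) != 1:  # all PEs in one MD must agree
                raise ValueError(f"MD {md}: inconsistent Share-Groups {groups}")
            group = groups.pop()
            if group in seen:  # different MDs must use different groups
                raise ValueError(f"{group} reused by {seen[group]} and {md}")
            seen[group] = md

    validate_share_groups({
        "VPNA": {"PE1": "239.1.1.1", "PE2": "239.1.1.1", "PE3": "239.1.1.1"},
    })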

As shown in Figure 7-6, PIM-SM runs on the public network and the P device functions as the rendezvous point (RP). The process for establishing a Share-MDT is as follows:

  1. PE1 sends a Join message to the RP on the public network through the public network instance. The Join message uses the Share-Group address as the multicast group address. PEs that receive the Join message create the (*, 239.1.1.1) entry. Meanwhile, PE2 and PE3 send Join messages to the RP. An RPT is established in the MD, which uses the RP as the root and uses PE1, PE2, and PE3 as leaves.

  2. PE1 sends a Register message to the RP through the public network instance. The Register message uses the MTI address as the multicast source address and the Share-Group address as the multicast group address. The RP creates the (10.1.1.1, 239.1.1.1) entry after receiving the Register message. Meanwhile, PE2 and PE3 send Register messages to the RP. Three independent SPTs that connect PEs to the RP are established in the MD.

On the PIM-SM network, an RPT (*, 239.1.1.1) and three independent SPTs form a Share-MDT.
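
The entry creation in these two steps can be modeled in a few lines of Python. This is a minimal sketch of the state built on the RP, not device behavior; the class and method names are hypothetical, and the PE3 MTI address 10.1.3.1 is assumed for illustration (only 10.1.1.1 and 10.1.2.1 appear in this document):

    SHARE_GROUP = "239.1.1.1"

    class PimRouter:
        """Models only the multicast routing entries of steps 1 and 2."""
        def __init__(self, name):
            self.name = name
            self.entries = set()  # (source, group); "*" means any source

        def receive_join(self, group):
            # Step 1: a (*, G) entry is created, building the RPT
            # rooted at the RP with the PEs as leaves.
            self.entries.add(("*", group))

        def receive_register(self, source, group):
            # Step 2: the RP creates an (S, G) entry per registering PE,
            # yielding three independent SPTs.
            self.entries.add((source, group))

    rp = PimRouter("P")
    for pe_mti in ("10.1.1.1", "10.1.2.1", "10.1.3.1"):  # PE1, PE2, PE3
        rp.receive_join(SHARE_GROUP)
        rp.receive_register(pe_mti, SHARE_GROUP)
    print(rp.entries)  # (*, 239.1.1.1) plus three (S, 239.1.1.1) entries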

Figure 7-6 Establishing a Share-MDT on a PIM-SM network

Encapsulation and Decapsulation of VPN Multicast Packets

Multicast packets of the following types are converted and transmitted in multicast VPN implementation:

  • Public network protocol packet: is used to establish MDTs on the public network; it is not used to encapsulate VPN packets.
  • Public network data packet: is used to encapsulate VPN data and protocol packets, implementing transparent transmission of VPN packets in an MD.
  • VPN protocol packet: is used to establish MDTs across the public network.
  • VPN data packet: is used to carry VPN multicast data.

Encapsulation and Decapsulation of Multicast Packets

VPN packets (both protocol and data packets) are encapsulated into public network data packets using GRE. When a VPN packet reaches a PE from a CE, the PE encapsulates it using GRE, with the BGP interface address as the multicast source address and the Share-Group address as the multicast group address, and then delivers the encapsulated public network data packet to the public network instance. All PEs bound to the VPN instance receive the public network data packets. Upon receiving a public network data packet, a PE decapsulates it. If receivers of the multicast group exist at the site connected to the PE, the PE forwards the packet; otherwise, the PE discards it.

Figure 7-7 shows the process of encapsulating and decapsulating VPN data packets. When receiving a VPN data packet (192.168.1.1, 225.1.1.1) from CE1, PE1 uses GRE to encapsulate the packet, with 10.1.1.1 as the multicast source address and 239.1.1.1 as the multicast group address. Then the packet becomes a public network data packet (10.1.1.1, 239.1.1.1). Upon receiving the packet, PE2 and PE3 decapsulate the packet into a VPN packet (192.168.1.1, 225.1.1.1). Finding that receivers exist at the connected site, PE2 forwards the packet to CE2. Finding that no receiver exists at the connected site, PE3 discards the packet.

Like VPN data packets, VPN protocol packets from CE2, such as Join messages, are encapsulated into public network data packets and transmitted on the public network.
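
The encapsulation rule can be sketched as follows, using the addresses from Figure 7-7. This is a minimal Python model, not a device API; real GRE encapsulation adds full IP and GRE headers rather than nesting a structure, and all names here are illustrative:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str            # multicast source address
        group: str          # multicast group address
        payload: object = None

    MTI_ADDR = "10.1.1.1"       # PE1's MTI / BGP peering address
    SHARE_GROUP = "239.1.1.1"

    def encapsulate(vpn_pkt):
        # The VPN packet becomes the payload of a public network data
        # packet sourced at the MTI address on the Share-Group.
        return Packet(MTI_ADDR, SHARE_GROUP, payload=vpn_pkt)

    def decapsulate(pub_pkt, has_receivers):
        # Forward the inner VPN packet only if receivers exist at the
        # local site (PE2 in Figure 7-7); otherwise discard it (PE3).
        return pub_pkt.payload if has_receivers else None

    pub = encapsulate(Packet("192.168.1.1", "225.1.1.1"))
    assert (pub.src, pub.group) == ("10.1.1.1", "239.1.1.1")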

Figure 7-7 Encapsulation and decapsulation of VPN multicast packets

RPF Check on PEs

On a network without multicast VPN configured, devices select an optimal route to a multicast source from the routing table as the RPF route. The RPF route carries RPF information including the RPF interface (outbound interface of the unicast route) and RPF neighbor (next hop of the unicast route). RPF neighbor information is used for constructing PIM Join/Prune messages. RPF interface information is used for performing RPF check. Packets reaching the RPF interface pass the RPF check and are forwarded through the outbound interface. Packets reaching non-RPF interfaces fail the RPF check and are discarded. For details about the RPF check, see RPF Check.

On a network with multicast VPN configured, CEs and P devices are unaware of VPNs and use the same RPF check mechanism as before multicast VPN was configured: the RPF interface is the outbound interface of the unicast route to the source address, and the RPF neighbor is the next hop of that route.

On PEs, the RPF check mechanism in the public network instance is the same as that used before multicast VPN is configured. In VPN instances, RPF information is defined based on the type of outbound interface of the unicast route to the source address. In this way, VPN packets can pass the RPF check, and VPN MDTs can be established across the public network.
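
The basic RPF check can be expressed compactly. The sketch below is a simplified Python illustration (the route-table model and names are hypothetical, longest-prefix matching is glossed over, and PE1-if2 is a hypothetical non-RPF interface): a packet passes only if it arrives on the outbound interface of the unicast route back to its source:

    # Hypothetical table: source prefix -> (RPF interface, RPF neighbor).
    UNICAST_ROUTES = {
        "192.168.1.0/24": ("PE1-if1", "CE1"),
    }

    def rpf_check(source_route, arriving_if):
        rpf_interface, _rpf_neighbor = UNICAST_ROUTES[source_route]
        # Pass only if the packet arrived on the RPF interface, i.e. the
        # outbound interface of the unicast route to its source.
        return arriving_if == rpf_interface

    assert rpf_check("192.168.1.0/24", "PE1-if1")      # passes, forwarded
    assert not rpf_check("192.168.1.0/24", "PE1-if2")  # fails, discarded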

VPN Interface as the Outbound Interface of the Unicast Route

When the outbound interface of a unicast route is a VPN interface, RPF information is the same as that before multicast VPN is configured.

As shown in Figure 7-8, VPN interface PE1-if1 is the outbound interface of the unicast route to the multicast source 192.168.1.1/24, and the next hop of the route is the IP address of CE1-if1. In this case, PE1-if1 is the RPF interface and CE1 is the RPF neighbor. When PE1 receives packets from 192.168.1.1 on PE1-if1, the packets pass the RPF check and are forwarded through the outbound interface.

Figure 7-8 RPF information when the outbound interface is a VPN interface

Public Network Interface as the Outbound Interface of the Unicast Route

When the outbound interface of the unicast route is a public network interface, the RPF interface is the MTI on the local PE. A remote PE meeting the following conditions is the RPF neighbor of the local PE:

  • The remote PE is the next hop of the BGP route from the local PE to the multicast source.
  • The remote PE is the PIM neighbor of the local PE.
NOTE:

The PIM neighbor relationship between PEs is established through MTIs, so the PIM neighbor address is the IP address of the MTI on the peer PE. Therefore, a remote PE becomes the RPF neighbor of the local PE only when the IP address of its MTI is the same as that of the interface used to establish IBGP connections.

As shown in Figure 7-9, public network interface PE2-if2 is the outbound interface of the unicast route to the multicast source 192.168.1.1/24, and the next hop is 10.1.1.1, the next hop of the BGP route from PE2 to the multicast source. The MTI on PE2 is therefore the RPF interface. PE1 (10.1.1.1/24) is both the next hop of the BGP route from PE2 to the multicast source and the PIM neighbor of PE2, so PE1 is the RPF neighbor of PE2. When PE2 receives packets from 192.168.1.1 on its MTI, the packets pass the RPF check and are forwarded through the outbound interface.
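
The two conditions can be combined into a single lookup, as in the following minimal sketch based on Figure 7-9. The data structures are hypothetical, and the second peer MTI address 10.1.3.1 is assumed for illustration:

    BGP_NEXT_HOP = {"192.168.1.0/24": "10.1.1.1"}    # on PE2, toward Source
    PIM_NEIGHBORS_ON_MTI = {"10.1.1.1", "10.1.3.1"}  # MTI addresses of peer PEs

    def vpn_rpf_info(source_route):
        next_hop = BGP_NEXT_HOP[source_route]
        # The RPF interface is always the local MTI. The remote PE is the
        # RPF neighbor only if its MTI address equals the BGP next hop,
        # i.e. a PIM neighbor relationship exists over the MTI.
        if next_hop in PIM_NEIGHBORS_ON_MTI:
            return ("MTI", next_hop)
        return ("MTI", None)  # no valid RPF neighbor; the RPF check fails

    assert vpn_rpf_info("192.168.1.0/24") == ("MTI", "10.1.1.1")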

Figure 7-9 RPF information when the outbound interface is a public network interface

VPN Multicast Packet Transmission on the Public Network

Packet transmission on the public network is transparent to VPN instances. A VPN instance only needs to know that VPN packets are sent from an MTI and arrive at the remote PE. The forwarding process on the public network, known as MDT transmission, is more complex. MDT transmission starts after a Share-MDT is established.

Transmission Process Based on the Share-MDT
  1. The VPN instance on a PE sends VPN multicast packets to the MTI.

  2. The PE cannot identify whether the VPN multicast packets are protocol or data packets. It uses the MTI address (the IP address of the interface used to establish IBGP connections) as the multicast source address and the Share-Group address as the multicast group address to encapsulate the VPN multicast packets into public network multicast data packets.

  3. The PE forwards the encapsulated data packets to the public network instance, which then sends them to the public network.

  4. The public network multicast data packets are forwarded along the Share-MDT until they reach the public network instance on the remote PE.

  5. The remote PE decapsulates the public network multicast data packets and forwards the VPN multicast packets to the VPN instance.
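
These five steps can be strung together in a short driver, reusing the hypothetical Packet, encapsulate(), and decapsulate() sketch from Encapsulation and Decapsulation of VPN Multicast Packets. Note that the source PE applies the same encapsulation whether the inner packet is a protocol or a data packet (step 2):

    def send_over_share_mdt(vpn_pkt, remote_pes):
        pub_pkt = encapsulate(vpn_pkt)    # steps 1-3: VPN instance -> MTI -> GRE
        for pe in remote_pes:             # step 4: flood along the Share-MDT
            vpn_copy = decapsulate(pub_pkt, pe["has_receivers"])
            if vpn_copy is not None:      # step 5: deliver to the VPN instance
                pe["vpn_instance"].append(vpn_copy)

    pe2 = {"has_receivers": True, "vpn_instance": []}
    pe3 = {"has_receivers": False, "vpn_instance": []}
    send_over_share_mdt(Packet("192.168.1.1", "225.1.1.1"), [pe2, pe3])
    assert pe2["vpn_instance"] and not pe3["vpn_instance"]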

For multicast data transmission along the Switch-MDT, see Switch-MDT Switchover.

Process of Transmitting VPN Multicast Protocol Packets

When PIM-SM runs on a VPN:

  • Hello messages are exchanged between MTIs to establish PIM neighbor relationships.

  • If receivers and the VPN RP belong to different sites, PEs send Join messages across the public network to create an RPT.

  • If the multicast source and the VPN RP belong to different sites, PEs initiate Register messages across the public network to create an SPT.

In the following example, PIM-SM runs on the public network and VPNs. Receivers on the VPNs send Join messages across the public network. The following describes the process of transmitting multicast protocol packets along the Share-MDT.

As shown in Figure 7-10, Receiver on VPNA belongs to Site2 and is directly connected to CE2. CE1 is the RP of group G (225.1.1.1) and belongs to Site1.

Figure 7-10 Process of transmitting multicast protocol packets

The process of transmitting multicast protocol packets is as follows:

  1. Receiver instructs CE2 to receive and forward data of G (225.1.1.1) using the Internet Group Management Protocol (IGMP). CE2 creates the (*, 225.1.1.1) entry and sends a Join message to the VPN RP (CE1).

  2. The VPN instance on PE2 receives the Join message sent by CE2, creates the (*, 225.1.1.1) entry, and specifies the RPF interface (MTI) as the upstream interface. The VPN instance on PE2 considers that the Join message has been sent from the MTI.

  3. Before sending the Join message to the P device, PE2 uses the Generic Routing Encapsulation (GRE) to encapsulate the message, with the MTI address as the multicast source address and the Share-Group address as the multicast group address. The encapsulated packet then becomes a public network multicast data packet (10.1.2.1, 239.1.1.1). PE2 forwards the multicast data packet to the public network instance.

  4. The multicast data packet (10.1.2.1, 239.1.1.1) is forwarded to the public network instance on each PE along the Share-MDT. Each PE decapsulates the packet to restore the Join message destined for the VPN RP. PEs check the RP information carried in the Join message. PE1 finds that the RP (CE1) belongs to the directly connected site and sends the Join message to the VPN instance. Upon receiving the message, the VPN instance on PE1 considers that the message is obtained on the MTI. Then PE1 creates the (*, 225.1.1.1) entry, specifies the MTI as the downstream interface and the interface facing CE1 as the upstream interface, and sends the message to the VPN RP. PE3 discards the Join message.

  5. When receiving the Join message, CE1 updates or creates the (*, 225.1.1.1) entry. A VPN multicast RPT is established across the public network.
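
Step 4 hinges on each PE checking whether the VPN RP sits at its own connected site. A minimal sketch of that decision follows; the names are illustrative, and CE3 (the CE assumed at PE3's site) does not appear in this document:

    VPN_RP = "CE1"   # RP of group 225.1.1.1, located at Site1

    def handle_decapsulated_join(pe_name, local_site_devices, join_rp):
        if join_rp in local_site_devices:
            # PE1's case: create (*, 225.1.1.1), with the MTI as the
            # downstream interface and the CE-facing interface as the
            # upstream interface, then forward the Join to the RP.
            return f"{pe_name}: forward Join toward {join_rp}"
        # PE3's case: the RP is not at the connected site; discard.
        return f"{pe_name}: discard Join"

    print(handle_decapsulated_join("PE1", {"CE1"}, VPN_RP))  # forwards
    print(handle_decapsulated_join("PE3", {"CE3"}, VPN_RP))  # discards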

Process of Transmitting VPN Multicast Data Packets

When PIM-ASM runs on a VPN:

  • If receivers and the VPN RP belong to different sites, VPN multicast data packets are transmitted across the public network along the VPN RPT.

  • If the multicast source and receivers belong to different sites, the VPN multicast data packets are transmitted across the public network along the VPN SPT.

When PIM-SSM runs on a VPN, VPN multicast data packets are transmitted along the VPN SPT across the public network.

In the following example, PIM-SM runs on the public network, and PIM-SSM runs on the VPNs. VPN multicast data packets are transmitted across the public network along the SPT. The following describes the process of transmitting multicast data packets along the Share-MDT.

As shown in Figure 7-11, Source on VPNA sends multicast data packets to G (232.1.1.1). Receiver on VPNA is directly connected to CE2 and belongs to Site2.

Figure 7-11 Process of transmitting multicast data packets

The process of transmitting VPN multicast data packets is as follows:

  1. Source sends a VPN multicast data packet (192.168.1.1, 232.1.1.1) to CE1.

  2. CE1 forwards the packet to PE1 along the SPT, and PE1 searches for the forwarding entry in the VPN instance. If the list of outbound interfaces in the forwarding entry contains the MTI, PE1 forwards the VPN multicast data to the P device for further processing. The VPN instance on PE1 considers that the VPN multicast data has been sent from the MTI.

  3. Before sending the VPN multicast data packet to the P device, PE1 uses GRE to encapsulate the packet, with the MTI address as the multicast source address and the Share-Group address as the multicast group address. The packet then becomes a public network multicast data packet (10.1.1.1, 239.1.1.1). PE1 forwards the encapsulated multicast data packet to the public network instance.

  4. The multicast data packet (10.1.1.1, 239.1.1.1) is forwarded to the public network instance on each PE along the Share-MDT. Each PE decapsulates the packet to restore the VPN multicast data packet and sends the packet to the VPN instance. If a PE has a downstream interface of the SPT, the PE forwards the VPN multicast data packet. If a PE has no downstream interface of the SPT, the PE discards the VPN multicast data packet.

  5. PE2 searches for the forwarding entry in the VPN instance and sends the VPN multicast data packet to Receiver. Transmission of this VPN multicast data packet is complete.
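
The forward-or-discard decision in step 4 reduces to checking whether the PE's VPN forwarding entry has downstream interfaces of the SPT. A minimal sketch follows, with a hypothetical forwarding-entry model:

    VPN_FORWARDING = {
        "PE2": {("192.168.1.1", "232.1.1.1"): ["toward-CE2"]},
        "PE3": {("192.168.1.1", "232.1.1.1"): []},  # no downstream interface
    }

    def forward_vpn_data(pe, src, group):
        downstream = VPN_FORWARDING[pe].get((src, group), [])
        return downstream or None   # None means the packet is discarded

    assert forward_vpn_data("PE2", "192.168.1.1", "232.1.1.1") == ["toward-CE2"]
    assert forward_vpn_data("PE3", "192.168.1.1", "232.1.1.1") is None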

Switch-MDT Switchover

When a Share-MDT is used to transmit VPN multicast data packets, the packets are received on all PEs bound to a VPN instance regardless of whether receivers exist at the sites connected to the PEs. If the rate of VPN multicast data packets is high, data may be flooded on the public network. This wastes network bandwidth and increases burdens on PEs.

The multicast VPN technology provides an optimized solution: the Switch-MDT. The following description assumes that a Share-MDT has been established and explains the Switch-MDT implementation.

Switchover from Share-MDT to Switch-MDT
  1. When a PE with a Switch-Group-Pool configured detects traffic on the Share-Group, it triggers a switchover from Share-MDT to Switch-MDT.
  2. The source PE selects an unused Switch-Group address from the Switch-Group-Pool and periodically sends a switchover notification packet to the downstream PEs along the Share-MDT. The switchover notification packet carries the VPN multicast source address, the VPN multicast group address, and the Switch-Group address.
  3. When receiving the switchover notification packet, each downstream PE checks whether receivers exist at its connected site. If receivers exist, the PE sends a PIM Join message to join the Switch-MDT, whose multicast group address is the Switch-Group address and whose root is the source PE. If no receivers exist, the PE caches the switchover notification packet and joins the Switch-MDT when receivers appear.
  4. After sending the switchover notification packet, the source PE waits for the Switch-Delay timer to expire. If the switchover condition is still met, the source PE stops encapsulating VPN multicast data using the Share-Group address and uses the Switch-Group address instead. Multicast data is then transmitted along the Switch-MDT.

    This delay gives the downstream PEs time to join the Switch-MDT, minimizing data loss. The Switch-Delay value can be configured based on network requirements.

  5. After the Switch-MDT switchover is complete, the source PE continues to periodically send switchover notification packets so that new PEs can join the Switch-MDT. When receivers no longer exist at the sites connected to a downstream PE, the PE can leave the Switch-MDT. After the switchover is complete, traffic is not switched back to the Share-Group.
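
The source PE side of this procedure can be sketched as a small state machine. This is a simplified Python illustration only: the class and method names are hypothetical, the Switch-Delay timer is modeled by an external caller, and the downstream PE side (joining or caching) is omitted:

    class SourcePE:
        def __init__(self, switch_group_pool, share_group="239.1.1.1"):
            self.pool = iter(switch_group_pool)  # unused Switch-Group addresses
            self.encap_group = share_group       # data starts on the Share-MDT
            self.switch_group = None

        def start_switchover(self, vpn_src, vpn_group):
            # Step 2: select an unused address from the pool and begin
            # periodically advertising (S, G, Switch-Group) along the
            # Share-MDT (the advertisement itself is omitted here).
            self.switch_group = next(self.pool)

        def on_switch_delay_expired(self, condition_still_met):
            # Step 4: only after the Switch-Delay timer expires, and only
            # if the switchover condition still holds, move data traffic
            # to the Switch-Group. Protocol packets stay on the Share-MDT.
            if condition_still_met:
                self.encap_group = self.switch_group

    pe1 = SourcePE(["238.1.1.1", "238.1.1.2"])
    pe1.start_switchover("192.168.1.1", "225.1.1.1")
    pe1.on_switch_delay_expired(condition_still_met=True)
    assert pe1.encap_group == "238.1.1.1"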

Figure 7-12 shows the switchover process from Share-MDT to Switch-MDT.

Figure 7-12 Switchover from Share-MDT to Switch-MDT

On the network shown in Figure 7-12, before the Switch-MDT is configured, PE1 encapsulates a VPN multicast data packet (192.168.1.1, 225.1.1.1) into a public network data packet (10.1.1.1, 239.1.1.1) and sends the packet along the Share-MDT. PE2 and PE3 receive the packet and decapsulate it. Finding no receiver at the connected site, PE3 discards the packet. Finding receivers at the connected site, PE2 sends the decapsulated VPN multicast data packet to CE2.

After the Switch-MDT (including the switchover condition and Switch-Group-Pool) is configured for PE1, PE1 monitors packets sent from the multicast source. When the packets meet the switchover condition, PE1 selects multicast group address 238.1.1.1 from the Switch-Group-Pool and periodically sends a switchover notification packet to other PEs through the Share-MDT.

Finding receivers at the connected site, PE2 sends a PIM Join message to join multicast group 238.1.1.1, establishing a Switch-MDT. Finding no receiver at the connected site, PE3 does not join the Switch-MDT. Then only PE2 receives the public network data packets (10.1.1.1, 238.1.1.1) that are encapsulated from VPN multicast data packets (192.168.1.1, 225.1.1.1).

NOTE:

After the Share-MDT is switched to the Switch-MDT, only multicast data packets are transmitted along the Switch-MDT, and multicast protocol packets are still transmitted along the Share-MDT.
