Overview
While Cisco VXLAN leverages BGP EVPN for the control plane, the fabric still needs a mechanism to forward Broadcast, Unknown Unicast, and Multicast (BUM) traffic. VXLAN fabrics typically rely on multicast replication in the underlay network to forward BUM traffic efficiently. Ingress replication is an alternative method for handling BUM traffic within a VXLAN fabric, but it is generally considered less efficient.
VXLAN EVPN with PIM Any Source Multicast (ASM) is a commonly chosen method for managing BUM (Broadcast, Unknown Unicast, and Multicast) traffic in VXLAN fabrics. However, it's crucial to consider its scaling limitations when deciding how multicast groups map to layer 2 segments. The options are a global approach (one multicast group for all layer 2 segments), one group per layer 2 segment, or one group shared by multiple layer 2 segments. Each approach has its benefits and trade-offs, and the choice depends on factors such as network size, traffic patterns, and scalability requirements.
Global approach: Single multicast group for all layer 2 segments
- Simple Configuration: Requires configuring only one multicast group on the underlay network for all VXLAN segments, which can be easier to set up initially (see the configuration sketch after this list).
- Efficient Control Plane Scaling: Reduces the number of multicast group entries needed on the underlay network devices, which can be beneficial for large deployments.
- Suboptimal Data Plane Traffic:
- All BUM traffic is sent to all VTEPs, regardless of the destination VXLAN segment. This can lead to unnecessary traffic on some VTEPs and can impact data plane performance.
- Load sharing of BUM (Broadcast, Unknown Unicast, Multicast) traffic across the multiple uplinks toward the spines can be suboptimal, since all BUM traffic maps to a single multicast group.
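As a minimal sketch, the global mapping on an NX-OS leaf could look like the following; the VNI range (30001-30512), the group address (239.1.1.1), and loopback1 are example values, not taken from a specific fabric:

! Example values only: all L2 VNIs share one underlay group
interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  member vni 30001-30512
    mcast-group 239.1.1.1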
One Multicast Group per Layer 2 Segment:
- Granular BUM Traffic Forwarding:
- Each VXLAN segment can have its own multicast group. This ensures packets are only sent to VTEPs participating in that specific segment, improving data plane efficiency.
- Load sharing of BUM (Broadcast, Unknown Unicast, Multicast) traffic across multiple uplinks to the spines in a VXLAN EVPN fabric improves, because the multicast groups can be distributed evenly across the available uplinks.
- Increased Configuration Complexity: Requires configuring a separate multicast group for each VXLAN segment (see the sketch after this list), which can be more complex to manage in large deployments.
- Potential Control Plane Scaling Issues: A larger number of multicast group entries on underlay devices might impact control plane scalability in very large deployments.
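A minimal sketch of the one-to-one mapping on an NX-OS leaf, again with example VNI and group values; each VNI references its own underlay group:

! Example values only: each L2 VNI gets a dedicated underlay group
interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  member vni 30001
    mcast-group 239.1.1.1
  member vni 30002
    mcast-group 239.1.1.2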
The selection among these approaches depends on factors like network size, performance requirements, and the capabilities of the underlay network. Another viable option is to use one multicast group for multiple layer 2 segments with similar deployment requirements; this optimizes resource utilization while minimizing the impact on data plane performance.
Scalability Numbers
Several factors significantly influence the overall scalability of PIM ASM for BUM traffic forwarding in a VXLAN EVPN fabric.
Underlay Network Multicast – PIM ASM
The number of multicast groups and multicast routes supported by a platform is a crucial factor in the overall scalability and deployment design for BUM traffic forwarding.
The following are the scalability numbers for underlay multicast groups and IPv4 multicast routes on Nexus 9k 10.4(x) releases:
| Feature | Platform | Scale limit |
| --- | --- | --- |
| Underlay multicast groups | Nexus 9300-FX/FX2/FX3/GX/GX2/H2R, Nexus 9408 switches, Nexus 9700-EX/FX and X9716D-GX line cards | 512 |
| IPv4 multicast routes | Nexus 9700-FX line cards | 8192 (Layer 2 + Layer 3); 32,768 (Layer 2 + Layer 3 with system routing template-multicast-heavy mode); 8192 (with system routing template-lpm-heavy mode) |
Based on the underlay multicast group scale limit, if the environment has fewer than 512 layer 2 segments, a one-to-one mapping between multicast groups and layer 2 segments (VNIs) is possible. But what about the other scale limits, such as IPv4 multicast routes?
Closely monitoring and analyzing the IPv4 multicast route requirements on the spines is critical, because the spines can quickly reach the scale limit. As the number of multicast routes increases, it can lead to resource exhaustion and impact the performance and scalability of the network.
It’s important to note that in VXLAN EVPN fabrics, each spine typically maintains one source tree per multicast group per leaf switch. A 9500 spine with a 9700-FX line card has a default scale limit of 8192 IPv4 multicast routes. This limitation can impact the scalability of the network, particularly as the number of multicast groups (layer 2 segments) and leaf switches increases. Therefore, it’s essential to carefully select the multicast group and layer 2 segment mapping.
Mapping each multicast group to a single layer 2 segment in a one-to-one fashion:
- IPv4 multicast routes on spine switches = number of VNIs (*, G) entries + number of VNIs * number of VTEPs (S, G) entries
- For 512 VNIs (max limit) -> IPv4 multicast routes = 512 + 512 * number of VTEPs
- If the number of VTEPs (VxLAN Tunnel Endpoints) exceeds 15, the default limit of IPv4 multicast routes may be exceeded, leading to scalability issues. In such cases, there are two potential solutions:
- Adjust the default IPv4 multicast routes limit (see the note after this list).
- Employ one-to-many deployment method.
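To make the 15-VTEP threshold concrete: with 512 VNIs mapped one-to-one, 15 VTEPs yields 512 + 512 * 15 = 8,192 routes (exactly the default limit), while 16 VTEPs yields 512 + 512 * 16 = 8,704 and exceeds it. If you adjust the limit rather than aggregate groups, the table above points to the multicast-heavy routing template; the sketch below assumes that template keyword applies to your platform and release, so verify it against the NX-OS configuration guide (a reload is typically required for a routing template change to take effect):

Spine-2(config)# system routing template-multicast-heavy
Spine-2# show system routing mode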
In larger networks with more VTEPs and potentially more multicast groups, scalability limitations can become a concern. It’s essential to carefully consider the scalability numbers and choose a deployment method that avoids these limits. One approach is to use a deployment method that reduces the total number of multicast groups required, such as mapping multiple layer 2 segments to a single multicast group. By aggregating layer 2 segments under fewer multicast groups, you can mitigate scalability constraints and ensure efficient resource utilization in the VXLAN EVPN fabric. This helps maintain optimal performance and scalability as the network grows.
Mapping multiple layer 2 segments to a single multicast group (fewer multicast groups in the VxLAN EVPN Fabric):
- IPv4 multicast routes on spine switches = number of multicast groups (*, G) entries + number of multicast groups * number of VTEPs (S, G) entries
- For 512 VNIs (max limit) -> IPv4 multicast routes = number of groups + number of groups * number of VTEPs, with the number of groups now much smaller than the number of VNIs
- If we aggregate into 4 multicast groups -> for 512 VNIs (max limit) -> IPv4 multicast routes = 4 + 4 * number of VTEPs (a configuration sketch follows this list)
- This deployment method can effectively ensure that the default limit of IPv4 multicast routes won’t be exceeded, even when utilizing the maximum VTEPs (scale limit). By carefully managing the allocation of multicast groups and layer 2 segments, scalability concerns can be mitigated, and the VXLAN EVPN fabric can operate within its intended capacity without encountering limitations related to multicast route scaling. This approach helps maintain optimal performance and scalability.
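A minimal sketch of this aggregation on an NX-OS leaf, assuming 512 example VNIs split into blocks of 128, with each block sharing one of four example groups (only the first two blocks are shown):

! Example values only: blocks of L2 VNIs share one of four underlay groups
interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  member vni 30001-30128
    mcast-group 239.1.1.1
  member vni 30129-30256
    mcast-group 239.1.1.2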
Mapping all layer 2 segments to a single multicast group:
While this deployment method effectively addresses the scalability concerns, it's essential to be aware of its drawbacks, chiefly inefficient BUM traffic distribution. Multicast packets sent to the single multicast group are forwarded to all VTEPs in the VXLAN EVPN fabric, regardless of their destination VXLAN segment. This can result in unnecessary traffic on certain VTEPs, potentially impacting BUM data plane performance and network efficiency. Therefore, it's important to weigh these factors and choose the right multicast group to layer 2 segment mapping.
PIM ASM Deployment Considerations
- Use spine switches as the RP (Rendezvous Point). Deploy the RP redundantly across multiple spine switches (for example, with PIM Anycast-RP) for redundancy and load sharing; a configuration sketch follows this list.
- Reserve a range of multicast groups.
- Select an efficient multicast group(s) to layer 2 segments mapping based on specific network requirements.
- All VTEPs that serve a VNI join a shared multicast tree.
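A minimal sketch of the RP recommendation on the spines, using PIM Anycast-RP; the anycast RP address (10.254.254.1), the spine router IDs (10.1.1.1 and 10.1.1.2), and the reserved group range (239.1.1.0/25) are assumed example values:

! Example values only: repeat on each spine acting as RP
feature pim
interface loopback254
  ip address 10.254.254.1/32
  ip pim sparse-mode
ip pim rp-address 10.254.254.1 group-list 239.1.1.0/25
ip pim anycast-rp 10.254.254.1 10.1.1.1
ip pim anycast-rp 10.254.254.1 10.1.1.2

All underlay point-to-point and loopback interfaces also need ip pim sparse-mode so that the leafs can build the shared trees toward the RP.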
Monitoring and Optimization
Tools like Cisco iCAM (intelligent CAM Analytics and Machine learning) can be used to monitor and optimize configurations:
https://deliabtech.com/short-blog-and-tip/cisco-icam-monitor/
- Multicast Routing Monitoring: Track multicast routing against the Cisco verified scalability limit.
- Optimizing Multicast Group Membership: Fine-tune which VTEPs participate in specific multicast groups to minimize unnecessary traffic forwarding.
- Monitor VxLAN limits: Track the number of VTEPs per site, VTEP peers, underlay multicast groups, and other VxLAN-related limits.
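Assuming iCAM is available on your NX-OS release, current and projected multicast scale can also be checked from the CLI; the command below is an assumed example, so verify the exact syntax against the iCAM documentation:

Spine-2# show icam scale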
There's no single "magic" VxLAN underlay multicast deployment method for BUM traffic; the right choice is a combination of factors specific to your network design, traffic patterns, and underlay network capabilities. By understanding these factors and implementing optimization tactics, you can ensure efficient BUM traffic forwarding in your VXLAN EVPN fabric using PIM ASM.
Outputs from Different Deployment Models
- IPv4 multicast routes output for 1 global multicast group for all layer 2 segments (350 VNI segments), 3 VTEPs
Spine-2# sh ip mroute summary | i Total
Total number of routes: 5
Total number of (*,G) routes: 1
Total number of (S,G) routes: 3
Total number of (*,G-prefix) routes: 1
- IPv4 multicast routes output based on 1-to-1 mapping (350 VNI segments), 3 VTEPs
Spine-2# sh ip mroute summary | i Total
Total number of routes: 1401
Total number of (*,G) routes: 350
Total number of (S,G) routes: 1050
Total number of (*,G-prefix) routes: 1
- IPv4 multicast routes output based on 1-to-1 mapping (350 VNI segments), adding one additional VTEP, 4 VTEPs in total
Spine-2# sh ip mroute summary | i Total
Total number of routes: 1751
Total number of (*,G) routes: 350
Total number of (S,G) routes: 1400
Total number of (*,G-prefix) routes: 1
The outputs show how quickly a one-to-one mapping strategy approaches the limit. With each multicast group corresponding to a single layer 2 segment, the number of IPv4 multicast routes grows rapidly as the network scales. As a result, it's essential to proactively address scalability concerns by considering alternative deployment methods.
- IPv4 multicast routes output based on 4 multicast groups and 350 VNI segments, 4 VTEPs
Spine-2# sh ip mroute summary | i Total
Total number of routes: 21
Total number of (*,G) routes: 4
Total number of (S,G) routes: 16
Total number of (*,G-prefix) routes: 1
This approach significantly reduces the number of IP multicast routes. By distributing the BUM traffic load across multiple uplinks to the spines and assigning multicast groups based on where the layer 2 segments are deployed (which leaf switches serve them), BUM traffic can be forwarded efficiently to its intended destinations while minimizing unnecessary replication. This optimization ensures efficient use of network resources and helps maintain optimal BUM data plane performance in VXLAN EVPN fabrics.