VXLAN EVPN Multi-Site – NDFC

Belete Ageze 2xCCIE | CCDE

In today’s fast-paced digital realm, businesses continually seek ways to swiftly deliver adaptable services that meet ever-changing customer expectations. They aim to enhance agility and productivity to maintain a competitive edge, while also optimizing costs and identifying opportunities for savings. VxLAN EVPN Multi-site architecture stands out as a robust solution, addressing these urgent needs by offering a secure and scalable network foundation conducive to seamless digital transformation.

For years, Virtual Local Area Networks (VLANs) have been the backbone of data center network segmentation. However, as the digital landscape evolves rapidly, VLANs struggle to keep up with the growing demands such as:

  • Scalability: Limited by a 12-bit identifier, VLANs can only support a maximum of around 4,000 distinct logical networks. This poses a significant challenge for large, modern data centers.
  • Multi-tenancy: Supporting multiple tenants requires strict isolation, which VLANs find increasingly difficult to provide at scale.
  • Resiliency: Spanning Tree Protocol, used to prevent loops in VLAN-based networks, blocks redundant links and often leads to inefficient use of network capacity. This can impact performance and overall network resiliency.

In contemporary software development, modern applications are constructed using a micro-services architecture, characterized by a distributed system in which functionalities are divided into smaller, independent services. This distributed approach to application design highlights the shortcomings of VLANs, impeding the creation of expansive, secure, and multi-tenant data center infrastructures.

Virtual Local Area Networks (VLANs) have played a crucial role in data centers over the years. Nevertheless, their shortcomings in scalability, multi-tenancy, and efficiency are becoming more evident in today’s digital era. This underscores the necessity for a more scalable, secure, and adaptable solution for data center networks. As digital transformation continues to reshape the IT landscape, pioneering solutions like VXLAN and EVPN are emerging to meet these evolving requirements.

Scaling Layer 2 Segmentation

VxLAN emerges as a potent remedy, tackling the deficiencies of VLANs head-on. This overlay technology harnesses a 24-bit identifier, facilitating the establishment of an extensive 16 million distinct logical segments—an impressive leap beyond the constraints of VLANs. Moreover, by employing an IP-based underlay, VxLAN eradicates the necessity for Spanning Tree Protocol (STP), which often results in inefficient utilization of network resources. This optimization enhances the flow of data across available links.
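As a quick illustration of this expanded segmentation space, the following minimal NX-OS sketch (the VLAN and VNI values are assumed for the example) maps a traditional VLAN to a 24-bit VXLAN Network Identifier (VNI):

feature vn-segment-vlan-based
!
vlan 20
  name Web-Tier
  vn-segment 30020

Because the VNI travels in the VXLAN header rather than the 802.1Q tag, the roughly 4,000-VLAN ceiling applies only locally on each switch, not to the fabric as a whole.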

Control Plane Learning – EVPN

While VxLAN offers impressive scalability, its flood-and-learn mechanism might not be ideal for extremely large, secure, and multi-tenant data center environments. This is where EVPN (Ethernet VPN) comes into play.

EVPN acts as the intelligent control plane for VxLAN, addressing the limitations of flood-and-learn. It utilizes Multi-Protocol BGP with the l2vpn evpn address family to efficiently exchange Layer 2 and Layer 3 information between network devices.
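On NX-OS, this control plane is enabled roughly as follows (the ASN, router ID, neighbor address, and loopback are assumed example values): a leaf peers with a spine route reflector under the l2vpn evpn address family.

nv overlay evpn
!
router bgp 65125
  router-id 10.10.0.3
  neighbor 10.10.0.101
    remote-as 65125
    update-source loopback0
    address-family l2vpn evpn
      send-community
      send-community extended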

VxLAN and EVPN

The combined power of VxLAN and EVPN delivers a robust and scalable solution for modern data center needs. VxLAN provides the foundation for massive Layer 2 segmentation, while EVPN’s control plane learning ensures efficient and secure communication across large, multi-tenant environments.

This powerful duo empowers the construction of:

  • Large-Scale Infrastructure: VxLAN’s massive address space allows for hundreds of thousands of independent systems, ideal for web-scale environments.
  • Enhanced Security: EVPN’s control plane learning mechanism plays a crucial role in facilitating secure communication and isolation between tenants within a multi-tenant fabric. Additionally, security can be further strengthened by leveraging the VxLAN header for deploying the Group Policy Option (GPO). Stay tuned for more insights on this topic in upcoming blogs.
  • Resilient Network: The underlying IP nature of VxLAN eliminates reliance on Spanning Tree Protocol, improving network resiliency.

Cisco’s VxLAN EVPN Multi-site fabric utilizes VxLAN encapsulation to tunnel Layer 2/3 traffic across an underlying IP network. MP-BGP, acting as the control plane, facilitates efficient learning and communication between endpoints at different sites.

Here are some of the key reasons why Cisco VxLAN EVPN Multi-site architecture is a compelling solution for data centers:

  • VxLAN EVPN Multi-site shines when it comes to connecting geographically dispersed data centers. It provides a seamless way to extend Layer 2 and Layer 3 connectivity across these locations, creating a single, unified data center fabric. This is ideal for organizations with geographically distributed workloads or disaster recovery needs.
  • VxLAN EVPN Multi-site offers flexibility in choosing the underlying transport network between sites (MPLS, point-to-point links, etc.). BGP EVPN within the architecture ensures secure communication and granular control over endpoint access.
  • Cisco NDFC streamlines the deployment and management of VXLAN EVPN Multi-site fabrics, presenting a simplified approach. Through automation, it diminishes configuration complexities and streamlines ongoing network management tasks. This enhances operational efficiency and facilitates smoother network administration processes.

VxLAN EVPN Multi-site Setup

Among the array of technologies available for extending VXLAN fabrics at Layers 2/3, this document focuses on VxLAN EVPN Multi-site architecture. This comprehensive interconnectivity strategy presents a unified solution for linking geographically distributed VXLAN BGP EVPN fabrics.

This document outlines the configuration of VxLAN EVPN Multi-site with two sites (SITE-65125 and SITE-65225), along with the setup of the inter-site network (ISN) to seamlessly extend Layer 2 and Layer 3 connectivity using anycast Border Gateway Routers (BGWs). The configuration process will be demonstrated using NDFC (Nexus Dashboard Fabric Controller), illustrating a straightforward approach to deploying and managing this network architecture.

What is anycast BGW?

Cisco Border Gateway nodes (BGWs) play a pivotal role in Cisco VxLAN EVPN Multi-site architecture. These nodes serve as crucial interconnection points, integrating BGP control plane functionality and enforcing security measures. They facilitate seamless communication and efficient data transport across geographically dispersed data centers. Cisco BGWs offer two primary deployment modes tailored to diverse requirements within your VxLAN EVPN Multi-site fabric: Anycast BGW and vPC BGW.

Key Functions of BGWs:

  • VxLAN Tunnel Termination: BGWs terminate VXLAN tunnels originating from other data center sites. This allows encapsulated Layer 2 traffic to be received and decapsulated for further processing.
  • BGP Control Plane: BGWs participate in BGP peering sessions with their counterparts in other locations. They exchange VXLAN EVPN information, including reachability details for Layer 2 segments and endpoints across the entire multi-site fabric.
  • Security and Policy Enforcement: BGWs can enforce BGP access control policies. These policies define which endpoints in one site can communicate with endpoints in another, ensuring security and isolation between tenants or departments across geographically dispersed locations.
  • Local Overlay Visibility: VXLAN Tunnel Endpoints (VTEPs) within each site’s VXLAN EVPN fabric only have visibility of their local overlay domain (their own site) and their internal neighbors. This includes the Anycast BGWs acting as the interconnection points.
  • Traffic Routing: All routes external to the local fabric, meaning routes for endpoints in remote sites, have the Anycast BGWs as their next hop for both Layer 2 and Layer 3 traffic, because VTEPs rely on BGWs to learn about and communicate with endpoints in other sites (see the configuration sketch below).
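For illustration, the core of an Anycast BGW configuration on NX-OS looks roughly like the following (the site ID, interfaces, and VIP loopback are assumed example values; NDFC generates the equivalent configuration automatically):

evpn multisite border-gateway 65125
!
interface loopback100
  description Multi-site VTEP VIP
  ip address 10.10.100.1/32
!
interface nve1
  multisite border-gateway interface loopback100
!
interface Ethernet1/1
  description Link toward the ISN
  evpn multisite dci-tracking
!
interface Ethernet1/2
  description Link toward the local spine
  evpn multisite fabric-tracking

All BGWs in a site share the same VIP on the multi-site loopback, which is why remote sites see a single anycast next hop for the whole site.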

The setup uses Nexus 9000v (10.4(3)) switches.

Assumptions

IP Addressing for the Lab Setup

The Logical Representation of the Lab Setup

Expected Result

  1. Full reachability between hosts on both sites.
  2. Test using ping between Host-10.10.20.10 (SITE1), Host-10.10.20.20 (SITE2), Host-10.10.40.10 (SITE1), and Host-10.10.40.20 (SITE2).

Step-by-Step Configuration

The following steps outline the process for fully configuring an operational VxLAN EVPN Multi-site data center infrastructure:

  1. Configure individual sites using ‘Data Center VxLAN EVPN’ fabric template
    • Data Center VxLAN EVPN sites – VxLAN-EVPN-65125 and VxLAN-EVPN-65225
  2. Configure ISN (Inter-Site Network) using ‘Multi-Site External Network’ fabric template
    • Multi-Site External Networks – External-65515 and External-65525
  3. Create MSD (Multi-Site Domain) Fabric
  4. Add VxLAN EVPN and Multi-Site External Network fabric as a child fabric to the MSD fabric
  5. Add overlay networks from the MSD fabric
  6. Verification

Configure Individual Sites

Configure the First Site – VxLAN-EVPN-65125

Nexus Dashboard Fabric Controller (NDFC) is a tool used to automate the configuration of VXLAN EVPN data center and other fabrics. NDFC offers a simplified and unique approach to deploying and managing VXLAN EVPN data center fabrics. Here’s a general outline of the configuration steps:

  1. Log in to NDFC and create a new fabric
  2. Configure fabric parameters
  3. Add switches
  4. Verify and deploy

Ref – Building VxLAN EVPN Fabric using NDFC – https://deliabtech.com/data-center/ndfc-brownfield/

Log in to NDFC and create a new fabric

LAN -> Fabrics and Actions -> Create Fabric

Enter the fabric name, then click ‘Choose Fabric’; the available fabric templates are displayed in a ‘Select Type of Fabric’ window.

Select ‘Data Center VXLAN EVPN’ template and click ‘Select’

Configure fabric parameters

General Parameters

Some key general parameters for configuring a VxLAN EVPN fabric:

BGP ASN (Autonomous System Number): A unique identifier used for BGP peering between devices in the fabric control plane. It facilitates the exchange of routing information for Layer 2 and Layer 3 endpoints across the fabric.

Fabric Interface Numbering:

  • P2P (Point-to-Point): This is a common configuration for the fabric interfaces between leaf and spine switches. A /30 subnet provides enough IP addresses for a point-to-point link while conserving address space.
  • Interface Unnumbered: Alternatively, the fabric interface can be configured as “unnumbered,” utilizing the IP address of another existing interface on the switch. IP unnumbered can be a convenient option in specific scenarios where saving IP addresses is a priority.

Underlay Routing Protocol:

  • OSPF (Open Shortest Path First): A popular choice due to its scalability, efficiency, and wide adoption in enterprise networks. OSPF dynamically calculates the shortest paths for routing traffic across the physical network that carries the VXLAN tunnels.
  • ISIS (Intermediate System to Intermediate System): Another routing protocol known for its reliability and ability to handle complex network topologies. Similar to OSPF, ISIS establishes optimal routing paths for the underlay network in a VxLAN EVPN fabric.

Route Reflectors: In a spine-leaf architecture, spine switches are often configured as route reflectors. This optimizes BGP routing by reducing the number of full mesh BGP peering, improving performance and scalability.

Anycast Gateway MAC: A single MAC address used by all leaf switches in the fabric to identify the distributed anycast gateway (DAG).
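On the leaf switches, NDFC renders this as something like the following (the MAC address, VLAN, VRF, and subnet are assumed example values):

fabric forwarding anycast-gateway-mac 2020.0000.00aa
!
interface Vlan20
  no shutdown
  vrf member Tenant-1
  ip address 10.10.20.1/24
  fabric forwarding mode anycast-gateway

Every leaf answers ARP for 10.10.20.1 with the same virtual MAC, so hosts keep the same gateway regardless of where they attach or migrate.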

Always refer to the official Cisco NDFC documentation for detailed instructions and advanced configuration options.

Replication

VxLAN fabrics typically utilize multicast replication in the underlay network to manage Broadcast, Unknown Unicast, and Multicast (BUM) traffic efficiently. Although ingress replication is an alternative method for handling BUM traffic within a VxLAN fabric, it is generally considered less efficient. With multicast replication, VxLAN relies on the underlay network’s multicast routing protocols to replicate a single copy of the BUM traffic and forward it to all interested receivers within the same Layer 2 segment (VXLAN overlay).

Multicast Routing Protocol Options:

PIM Any Source Multicast (ASM) is commonly employed to manage BUM traffic (Broadcast, Unknown Unicast, and Multicast) in VXLAN fabrics. It is particularly effective for traditional multicast applications characterized by well-defined multicast groups with a single source transmitting traffic to multiple receivers.

Bidirectional PIM (Bidir-PIM) is a protocol designed for scenarios where there can be multiple sources and receivers of multicast traffic within a single group. Unlike other multicast protocols, Bidir-PIM constructs bidirectional multicast trees between sources and receivers without maintaining source-specific state along each node of the tree. Instead, it utilizes designated rendezvous points (RPs) to optimize routing paths and facilitate efficient communication between multicast sources and receivers within the group.

As of this writing, PIM BiDir is not supported for fabric underlay multicast replication with VXLAN Multi-Site.
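A minimal sketch of multicast-based BUM replication on a leaf’s NVE interface (the loopback, VNI, and group address are assumed example values):

interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  member vni 30020
    mcast-group 239.1.1.1

Each Layer 2 VNI is tied to an underlay multicast group; only VTEPs hosting that VNI join the group, so the underlay replicates BUM traffic just toward interested VTEPs.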

Underlay Multicast Routing for VxLAN BUM Traffic – https://deliabtech.com/blogs/underlay-multicast-routing-for-vxlan-bum-traffic/

vPC

Cisco vPC (Virtual Port Channel) is a technology that allows you to create a single, logical Layer 2 link across two physical Cisco Nexus switches. Some key vPC parameters for configuring a VxLAN EVPN fabric:

  • vPC Domain ID: An identifier assigned to the vPC domain; both peer switches must be configured with the same ID.
  • vPC Peer Keepalive: A Layer 3 communication channel that carries periodic keepalive messages between the vPC peers. These messages allow each switch to confirm the health and operability of its counterpart. The link can be a dedicated point-to-point Layer 3 link or the management interface.
  • vPC Peer-Link VLAN: A VLAN with an SVI interconnecting the vPC members, providing an alternate path if all uplinks to the spine fail. It requires the same routing, BUM, and MTU configuration as the uplinks to the spine.
  • vPC peer-link is a port-channel with standard 802.1Q trunk between the vPC domain peers that can perform the following actions:
    • Carry vPC VLANs.
    • Carry Cisco Fabric Services messages.
    • Carry flooded traffic between the vPC peer devices.
    • Carry STP BPDUs, HSRP hello messages, and IGMP updates.
  • vPC Delay Restore plays a crucial role in maintaining network stability and preventing packet loss during vPC peer device restarts. It ensures a smooth and graceful recovery process by allowing Layer 3 routing protocols to converge before re-introducing the recovered vPC legs into the network traffic flow.
  • vPC auto-recovery feature is used:
    • To provide a backup mechanism in case of vPC peer-link failure followed by vPC primary peer device failure.
    • If both vPC peer devices reload, by default all vPC member ports are suspended until peer adjacency is reestablished between the vPC devices. If only one vPC peer device becomes operational, its local vPC ports will remain suspended. The vPC auto-recovery reload-delay feature allows the vPC peer device to assume the vPC primary role and bring up all local vPC ports after the expiration of the delay timer.
  • vPC advertise pip: Instructs BGP to use the primary IP (PIP) address of the advertising vPC peer switch as the next hop, instead of the virtual IP (VIP), for external prefix routes and leaf-generated IP subnet routes (type-5 routes). Type-2 routes, which carry endpoint reachability information (MAC and IP addresses) of the endpoints or VTEPs, are still advertised with the VIP (see the sketch below).
  • vPC Fabric Peering: Preserves all the characteristics of traditional vPC without using physical ports for the vPC peer-link. It provides enhanced dual-homing without consuming physical ports, and optimizes routing for single-homed endpoints by advertising from the VTEP’s primary IP (PIP).
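A condensed NX-OS sketch of these vPC parameters on one peer switch (the domain ID, keepalive addresses, timer, and ASN are assumed example values; NDFC generates the equivalent configuration from the fabric settings):

vpc domain 10
  peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
  delay restore 150
  auto-recovery
  ip arp synchronize
!
interface port-channel1
  switchport mode trunk
  vpc peer-link
!
router bgp 65125
  address-family l2vpn evpn
    advertise-pip
!
interface nve1
  advertise virtual-rmac

Note that advertise-pip is typically paired with advertise virtual-rmac on the NVE interface, so that routes advertised with the PIP carry the switch’s own router MAC rather than the shared virtual MAC.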

Protocols

In Cisco NDFC (Nexus Dashboard Fabric Controller), the “Protocol” section when creating a VXLAN EVPN fabric plays a crucial role in defining the underlay routing protocol for the fabric. The specific details you configure in the “Protocol” section vary depending on the routing protocol chosen under the General Parameters section.

  • OSPF: You might define the OSPF area ID, router ID, authentication, and other parameters for OSPF neighbor relationships between VTEPs.
  • ISIS: You might specify the ISIS level (L1 or L2), network type (broadcast or point-to-point), authentication, and other parameters for ISIS neighbor adjacencies.
  • NDFC also offers options for configuring BGP authentication, and BGP template configs to simplify and improve configuration scalability for larger fabrics.
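For example, with OSPF selected as the underlay protocol and unnumbered fabric interfaces, the generated configuration looks roughly like this (the process tag, router ID, and interface are assumed example values):

feature ospf
!
router ospf UNDERLAY
  router-id 10.10.0.3
!
interface Ethernet1/1
  no switchport
  medium p2p
  ip unnumbered loopback0
  ip router ospf UNDERLAY area 0.0.0.0
  no shutdown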

Advanced

The default configurations offered in the ‘Advanced’ tab of the VxLAN EVPN NDFC (Nexus Dashboard Fabric Controller) build template are generally effective for a wide range of deployments. This tab empowers users with more precise control, allowing them to customize the fabric behavior to align with their specific network requirements.

By utilizing the ‘Advanced’ tab, users can benefit from a template-based approach, which streamlines configuration tasks by automating many aspects according to Cisco best practices. This not only simplifies the deployment process but also ensures that the fabric is configured optimally for performance, scalability, and reliability.

The “Advanced” tab offers options for customizing and fine-tuning various aspects of the fabric. Here are some key features:

  1. Interface MTU: This allows you to specify the Maximum Transmission Unit (MTU) for interfaces. MTU defines the maximum packet size that can be transmitted on a network segment.
  2. The Cisco Nexus Dashboard Fabric Controller (NDFC) offers two primary methods for deploying VRF/network (overlay) configuration in VXLAN EVPN fabrics: CLI and config-profile.
    • CLI: Configuration is pushed in a “traditional” CLI format.
    • Config-profile: All overlay configuration dependencies (BGP, VLAN, VNI, interfaces, VRF, etc.) are managed in the same place.
  3. Anycast Border Gateway advertise-pip: In scenarios where only a Layer 3 extension is configured on the Border Gateway (BGW) leaf nodes between sites in a multi-site deployment, an additional loopback interface is necessary. This loopback interface should be present in the same Virtual Routing and Forwarding (VRF) instance on all BGWs, with each BGW assigned its own IP address on the loopback interface. It is crucial to redistribute the IP address of the loopback interface into BGP EVPN, specifically toward the Site-External network. Alternatively, the “advertise-pip” feature can be enabled on the BGW leaf nodes, which eliminates the need for configuring additional loopback addresses in VRFs, making the setup more straightforward and scalable.
  4. Enable Tenant DHCP: Enabling DHCP globally is a prerequisite for supporting DHCP for overlay networks that are part of the tenant VRFs. By enabling this feature, you allow DHCP functionality to be available throughout the fabric, facilitating IP address assignment for devices within the overlay networks associated with the tenant VRFs.
  5. Enable VXLAN OAM: Enables the VXLAN OAM functionality for devices in the VxLAN EVPN fabric. Enabling VXLAN OAM helps monitor and troubleshoot the VXLAN overlay network, ensuring optimal performance and reliability. It allows for the detection and resolution of issues related to connectivity, packet loss, latency, and other network-related problems within the VXLAN EVPN fabric.
  6. Site ID: In the context of moving a fabric within a Multi-Site Domain (MSD), it’s essential to assign a unique site ID to each member fabric. This site ID is mandatory for a member fabric to be part of an MSD. Each member fabric within an MSD should have a distinct site ID for identification purposes.
Advanced configuration tab – 1
Advanced configuration tab – 2
Advanced configuration tab – 3
Resources

In the Resources tab, you’ll find a summary of various fields related to the configuration of resources within the Cisco platform. While many of these fields are automatically generated based on Cisco recommended best practices, you have the flexibility to review and modify configurations as needed to align with specific requirements or changes in the network environment. Here’s a description of some of the fields typically found in the Resources tab:

  • Manual Underlay IP Address Allocation: By default, the Nexus Dashboard Fabric Controller dynamically allocates underlay IP address resources, such as loopbacks and fabric interfaces, from the defined pools. However, if you check the checkbox, the allocation switches to static. As a result, some of the dynamic IP address range fields become disabled.
  • Underlay loopback and subnet IP ranges: Specify IP subnet ranges as required.
  • VxLAN VNI and VLAN ranges: To configure the fabric according to the overall VXLAN EVPN design, you need to specify the required VNI (VXLAN Network Identifier) ranges and VLAN ranges.
  • VRF Lite Deployment: In the VXLAN EVPN fabric configuration using Cisco Nexus Dashboard Fabric Controller (NDFC), you can choose between manual and automatic creation of VRF-Lite connections: ‘Manual’ or ‘Back2Back&ToExternal’.
    • Manual: Selecting this option requires manual configuration of VRF-Lite connections. Administrators must individually configure each VRF-Lite connection, specifying the IP addresses and other parameters manually.
    • Back2Back&ToExternal: Choosing this option automates the creation of VRF-Lite connections for common Inter-Fabric Connection (IFC) scenarios. This includes establishing connections between border devices in a VxLAN EVPN fabric and edge routers in an External fabric. The IP addresses used for these VRF-Lite connections are automatically assigned from a predefined ‘VRF Lite Subnet IP Range’ pool configured within NDFC.
    • https://deliabtech.com/data-center/vxlan-evpn-fabrics-vrf-lite/ – VRF Lite configuration

The blog will not cover the VXLAN EVPN manageability, bootstrap, configuration backup, and flow monitor tabs within the NDFC “Data Center VXLAN EVPN” fabric template, leaving them with their default settings.

Add Switches

Once a VXLAN EVPN fabric is created and configured within NDFC, the next step is to add switches to the fabric. Before doing so, ensure that the switches are accessible from NDFC and have their management interface configured for communication with NDFC. Verify that you have the necessary credentials (username and password) to access each switch; these will be required when adding the switches to the fabric in NDFC.

Log in to NDFC -> LAN -> Fabrics -> Select the Fabric

Under the VxLAN EVPN Fabric Overview go to Switches -> Actions -> Add Switches

Discover Switches

On the ‘Switch Addition Mechanism’ page, provide the necessary information such as the seed IP, authentication protocol, and credentials (username and password) for accessing the switches. If it’s a greenfield deployment, uncheck the ‘preserve config’ attribute; if it’s a brownfield deployment, leave it checked. Finally, submit the information by clicking the ‘Discover Switches’ button. This action initiates discovery of the switches for the VXLAN EVPN fabric within NDFC.

Once the switches are discovered, they will be listed in the discover results section. This list will include details such as the switch name, serial number, IP address, model, version, and status. From this list, select the switches you wish to add to the VXLAN EVPN fabric. After selecting the desired switches, proceed by clicking the ‘Add Switches’ button located in the bottom right corner of the page. This action will initiate the process of adding the selected switches to the fabric within NDFC.

Set role and vPC Pair

After adding the switches, it’s important to verify that all switches are successfully added to the VXLAN EVPN fabric and that each switch is assigned the correct role. You can do this by navigating to the Fabric Overview page and accessing the Switches section. Here are the steps:

  1. Verify Switch Addition: Check the list of switches on the Fabric Overview page under switches to ensure that all added switches are listed.
  2. Confirm Roles: Review the role assigned to each switch. The roles should be aligned with the intended roles based on your network design. If any switch is assigned the wrong role, you can correct it by navigating to Switches -> Actions -> Set Role.
  3. vPC Pairing: If any switches need to be configured in a vPC pair relationship, navigate to Switches -> Actions -> vPC Pairing to configure the vPC relationship between the switches.

By following these steps, you can verify that all switches are added to the fabric, confirm their roles, and set up any necessary vPC pairings to ensure proper functioning of the VXLAN EVPN fabric.

Verify and deploy

Once you have confirmed the switch roles, vPC pairings, and any other configurations, navigate to the Fabric Overview page of the VXLAN EVPN fabric. From there, locate the ‘Actions’ dropdown menu at the top right corner of the page. Select the option to ‘Recalculate and Deploy’. Executing this action triggers the system to generate the necessary configurations for the VXLAN EVPN fabric based on the design, switch states, roles, and inputs from the fabric creation process.

Reviewing the configuration created is a crucial step before pushing the changes across the fabric. Once you’ve reviewed the configurations and ensured that they align with the desired design specifications, you can proceed to push the configuration updates by clicking the ‘Deploy All’ button.

This action initiates the deployment process, which ensures that all configurations are updated and synchronized across the fabric to reflect the current state of the network. During this process, the fabric undergoes configuration updates as required, and any changes made will be applied uniformly across all switches and devices within the fabric.

Reviewing the status indicators or logs within the NDFC interface is a key step to verify the successful deployment of configurations. Confirmation messages indicating the completion of the deployment process will assure you that everything has been deployed successfully. Conversely, if any error messages or warnings appear during the deployment process, they signify issues that need attention. These messages will be displayed in the interface, allowing you to address them promptly to ensure the smooth operation of the fabric.

By completing the deployment process, you ensure that the fabric is fully configured and ready to be added to MSD (Multi-Site Domain). If you encounter any issues during the process, you can troubleshoot and address them accordingly to ensure smooth operation of the fabric.
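Once the deployment completes, a few show commands on a leaf give a quick sanity check of the fabric state (the switch prompt is illustrative; annotations follow the style used later in this post):

Leaf1# show bgp l2vpn evpn summary   -> BGP EVPN peering established with the route reflectors
Leaf1# show nve peers                -> VTEP peers discovered in the fabric
Leaf1# show nve vni                  -> VNIs with their replication mode and state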

The first site VxLAN EVPN fabric – VxLAN-EVPN-65125

Configure Second Site – VxLAN-EVPN-65225

To build the second site of the VXLAN EVPN Multi-Site fabric, follow the same steps outlined during the first site build, ensuring alignment with the site’s specific CRD (Customer Requirements Document), HLD (High-Level Design), and LLD (Low-Level Design). By following these steps and adjusting configurations to match the specific requirements of the second site, you can effectively build out the VXLAN EVPN fabric while ensuring consistency with the overall design. Once both sites’ VxLAN EVPN fabrics and Multi-Site External Network fabrics are established, you can proceed with adding them to the Multi-Site Domain (MSD) as needed.

The second site VxLAN EVPN fabric – VxLAN-EVPN-65225

Configure the ISN (Inter-Site Network)

Configure the First Site Multi-Site External Network – External-65515

For a multi-site VXLAN EVPN network, it’s essential to establish an Inter-Site Network (ISN) to interconnect the various VXLAN EVPN fabrics (sites). To achieve this, we can utilize the Multi-Site External Network template within the Cisco Nexus Dashboard Fabric Controller (NDFC) to create the external network and establish peering between the ISN nodes and VXLAN EVPN fabrics’ Border Gateway (BGW) nodes. Here’s how we can proceed:

  1. Log in to NDFC and create a new Multi-Site External fabric
  2. Configure fabric parameters
  3. Add switches
  4. Verify and deploy

Log in to NDFC and create a new Multi-Site External Network Fabric

LAN -> Fabrics and Actions -> Create Fabric

When you start the process by clicking “Create a Fabric” in the Cisco Nexus Dashboard Fabric Controller (NDFC), you’ll be guided through the fabric creation wizard. Here’s how it works:

  1. Fabric Name: You’ll first be prompted to provide a name for the fabric. This should be a descriptive name that helps identify the purpose or function of the fabric.
  2. Fabric Type: Next, you’ll need to select the fabric type. For an ISN deployment aimed at interconnecting VXLAN EVPN fabrics across multiple sites, you’ll choose the fabric type “Multi-site External Network.” This fabric type is specifically designed to create a network infrastructure that attaches to Border Gateway (BGW) nodes to facilitate interconnection between VXLAN EVPN fabrics in a multi-site deployment.

Configure fabric parameters

General Parameters

In the general parameters tab of the Multi-Site External Network fabric, the BGP Autonomous System (AS) number and fabric monitor mode are key parameters to configure:

  1. BGP AS Number: This is the Autonomous System number used for the ISN (Inter-Site Network) nodes within the site. The BGP AS number uniquely identifies the AS in which the ISN nodes operate.
  2. Fabric Monitor Mode: This parameter determines how the fabric is managed by NDFC. If fabric monitor mode is selected, the fabric will only be monitored, and no configuration changes will be deployed from the Cisco NDFC. However, since we intend to deploy configurations from NDFC, it’s necessary to disable fabric monitor mode.

The blog will focus on the essential aspects of configuring the Multi-Site External Network fabric within the Cisco Nexus Dashboard Fabric Controller (NDFC), leaving out the advanced, resources, configuration backup, bootstrap, and flow monitor tabs. These tabs will retain their default settings, as they are not required for basic multi-site VXLAN EVPN functionality. However, if there is a need to edit parameters on those tabs for specific requirements or advanced configurations, you can do so.

Add switches

Once a Multi-Site External Network fabric is created and configured within NDFC, the next step is to add switches to the fabric. Before doing so, ensure that the switches are accessible from NDFC and have their management interface configured for communication with NDFC. Verify that you have the necessary credentials (username and password) to access each switch; these will be required when adding the switches to the fabric in NDFC.

Log in to NDFC -> LAN -> Fabrics -> Select the Fabric

Under the Multi-Site External Network Fabric Overview go to Switches -> Actions -> Add Switches

Discover Switches

On the ‘Switch Addition Mechanism’ page, provide the necessary information such as the seed IP, authentication protocol, and credentials (username and password) for accessing the switches. Finally, submit the information by clicking the ‘Discover Switches’ button. This action initiates discovery of the switches for the Multi-Site External Network fabric within NDFC.

Once the switches are discovered, they will be listed in the discover results section. This list will include details such as the switch name, serial number, IP address, model, version, and status. From this list, select the switches you wish to add to the Multi-Site External Network fabric, in this case the ISN nodes for the first site. After selecting the desired switches, proceed by clicking the ‘Add Switches’ button located in the bottom right corner of the page. This action will initiate the process of adding the selected switches to the fabric within NDFC.

Set role

After adding the switches to the Multi-Site External Network fabric, it’s crucial to ensure that each switch’s role is correctly assigned, particularly designating them as core routers as needed for the ISN deployment.

To adjust roles for switches within the Cisco Nexus Dashboard Fabric Controller (NDFC):

  1. Access the Fabric Overview page of the Multi-Site External Network fabric within the NDFC interface.
  2. Identify the switches that require role adjustment. You can do this by selecting the checkboxes next to the switches’ names.
  3. In the Switches section, locate the actions dropdown menu. Click on it to reveal a list of available actions for the selected switches.
  4. From the actions dropdown menu, opt for the “Set Role” option. This selection enables you to specify or modify the role assigned to each switch within the fabric.
  5. Once you’ve adjusted the roles accordingly, save the changes to apply the updated roles to the selected switches within the fabric.

Interface mode

Ensure that all interfaces connecting ISN nodes to BGW nodes are configured as routed interfaces to facilitate proper routing and connectivity between the Multi-Site External Network and the VXLAN EVPN fabric.

To ensure that interfaces connecting to Border Gateway (BGW) nodes and to other sites are configured as routed interfaces, you can assign them to the int_routed_host policy template.

  1. Navigate to the Fabric Overview page within the Cisco Nexus Dashboard Fabric Controller (NDFC) interface. Then, locate the interfaces section and select the interfaces connecting ISN nodes to BGW nodes.
  2. Under actions, choose the option to edit the interface configuration. This will allow you to modify the settings for the selected interfaces.
  3. Within the edit interface configuration settings, click the current policy choice or no policy selected to choose the int_routed_host policy template. This template is specifically designed for configuring routed interfaces.
  4. After selecting the int_routed_host policy template, save the changes and exit the interface configuration settings.
  5. Once you have applied the int_routed_host policy template to all interfaces connecting ISN nodes to BGW nodes, verify that the interfaces are configured as routed interfaces. You can review the interface configurations to confirm the changes.

Example for configuring interface as a routed port

Fabric overview -> Interfaces -> Select the interface -> Actions -> Edit

Click -> No policy selected -> select int_routed_host

Make sure ‘Enable Interface’ is selected. Avoid specifying IP addresses, VRFs, or other detailed configurations for the interfaces at this stage; instead, leave these fields blank or unconfigured. The Multi-Site Domain (MSD) configuration process will automatically assign IP addresses, VRFs, and other necessary configurations from a predefined pool.

After enabling the interfaces without specifying detailed configurations, save any modifications made to the interface settings.

Verify and deploy

Once you have confirmed the switch roles, interface (connecting to BGWs and the other sites) routed mode and any other configurations, navigate to the Fabric Overview page of the Multi-Site External Network fabric. From there, locate the ‘Actions’ dropdown menu at the top right corner of the page. Select the option to ‘Recalculate and Deploy’. Executing this action triggers the system to generate the necessary configurations for the Multi-Site External Network fabric based on the design, switch states, roles, and inputs from the fabric creation process.

Review the configurations and ensure that they align with the desired design specifications and push the configuration updates by clicking the ‘Deploy All’ button. This action initiates the deployment process, which ensures that all configurations are updated and synchronized across the fabric to reflect the current state of the network.

Reviewing the status indicators or logs within the NDFC interface is a key step to verify the successful deployment of configurations. Confirmation messages indicating the completion of the deployment process will assure you that everything has been deployed successfully. Conversely, if any error messages or warnings appear during the deployment process, they signify issues that need attention. These messages will be displayed in the interface, allowing you to address them promptly to ensure the smooth operation of the fabric.

By completing the deployment process, you ensure that the fabric is fully configured and ready to be integrated to MSD (Multi-Site Domain). If you encounter any issues during the process, you can troubleshoot and address them accordingly to ensure smooth operation of the fabric.

Configure the Second Site Multi-Site External Network – External-65525

To build the second Multi-Site External Network fabric for the VXLAN EVPN Multi-Site fabric, follow the same steps outlined during the first Multi-Site External Network fabric build, ensuring alignment with the site’s specific CRD (Customer Requirements Document), HLD (High-Level Design), and LLD (Low-Level Design). By following these steps and adjusting configurations to match the specific requirements of the second Multi-Site External Network fabric, you can effectively build out the ISN fabric while ensuring consistency with the overall design. Once both sites’ VxLAN EVPN fabrics and Multi-Site External Network fabrics are established, you can proceed with adding them to the Multi-Site Domain (MSD) as needed.

Create MSD (Multi-Site Domain) Fabric

A Multi-Site Domain (MSD) serves as a container for managing multiple member fabrics, which can include VxLAN EVPN fabrics and Multi-Site External Networks. This centralized entity acts as a single point of control for defining overlay networks and Virtual Routing and Forwarding (VRF) instances that are shared across all member VxLAN EVPN fabrics within the domain.

Key features of an MSD include:

  1. Topology View: The topology view of the MSD fabric provides a comprehensive overview of all member fabrics and their interconnections. This allows administrators to visualize the entire network architecture in one consolidated view.
  2. Single Point of Control and Deployment Efficiency: With an MSD, administrators can manage overlay networks and VRFs for all member VxLAN EVPN fabrics from a single interface. This eliminates the need to visit each member fabric’s deployment screen separately for configuration and deployment tasks.

Overall, an MSD provides a unified management framework for orchestrating complex multi-fabric environments, enabling efficient deployment and management of overlay networks and VRFs across member VxLAN EVPN fabrics.

The Steps to create MSD:

  1. Log in to NDFC and create a new MSD
  2. Configure MSD fabric parameters
  3. Add Member fabrics
  4. Verify and deploy

Log in to NDFC and create a new Multi-Site Domain

LAN -> Fabrics and Actions -> Create Fabric

When you start the process by clicking “Create a Fabric” in the Cisco Nexus Dashboard Fabric Controller (NDFC), you’ll be guided through the fabric creation wizard. Here’s how it works:

  1. Fabric Name: You’ll first be prompted to provide a name for the fabric. This should be a descriptive name that helps identify the purpose or function of the fabric.
  2. Fabric Type: When initiating the creation of a Multi-Site Domain (MSD) within the Cisco Nexus Dashboard Fabric Controller (NDFC), you’ll be prompted to select the fabric type. For an MSD deployment intended to serve as a container for multiple fabrics in a VxLAN EVPN multi-site scenario, choose the fabric type template labeled “VxLAN EVPN Multi-Site.” This fabric type template is specifically tailored to create a fabric container capable of hosting multiple fabrics within a VxLAN EVPN multi-site environment. By selecting this template, you establish the foundation for orchestrating and managing various fabrics, facilitating seamless communication and coordination across the multi-site network infrastructure.

Configure MSD Fabric Parameters

General Parameters

In the General Parameters tab of the Multi-Site Domain (MSD) fabric creation section within the Cisco Nexus Dashboard Fabric Controller (NDFC), several key parameters can be updated as needed:

  1. Layer 2 and Layer 3 VxLAN VNI Ranges: Define the ranges for both Layer 2 and Layer 3 VxLAN Virtual Network Identifier (VNI). These ranges determine the pool of VNIs available for allocation to overlay networks within the fabric.
  2. Anycast Gateway MAC: Specify the MAC address to be used for the Distributed Anycast Gateway (DAG).
  3. Loopback ID for the Multi-site VTEP VIP: Assign a loopback ID for the Multi-Site VTEP VIP. This loopback ID serves as the identifier for the Virtual IP address associated with the Multi-Site VTEP.
  4. Retain the default values for the VRF (Virtual Routing and Forwarding), Network, VRF Extension, and Network Extension templates.

Ensure to update fields as needed according to the specifications provided in the design documents to align with the overall network architecture and requirements.

DCI

For the Multi-Site Overlay Inter-Fabric Connection (IFC) deployment method, you have several options:

  1. Manual: This option involves manually creating the Multi-Site Overlay links between data centers through the Border Gateway (BGW) nodes to establish EVPN BGP sessions across the BGWs.
  2. Direct_To_BGWs: NDFC will generate the full-mesh EVPN session configuration for all the Border Gateway Routers (BGWs) of the children fabrics. This configuration ensures seamless communication and efficient data exchange between BGWs across different fabric instances within the VxLAN EVPN Multi-site architecture.
  3. Centralized_To_Route_Server:
    • If you opt for a route server-based design, specify the IP addresses of the route servers in the “Multi-Site Route Server List” field. If there are multiple route servers, separate their IP addresses with commas. Additionally, specify the BGP Autonomous System Number (ASN) of the route servers in the “Multi-Site Route Server BGP ASN List” field, separating multiple ASNs with commas.
    • NDFC will set up the Border Gateway Routers (BGWs) with EVPN sessions directed towards each Route-Server (RS). Additionally, if NDFC possesses information about the devices to which the RS IP addresses are connected, and these devices are supported and integrated into a managed fabric, it will configure them.

Ensure to configure the “Multi-Site Underlay IFC Auto Deployment Flag” according to your preference:

  • Check the checkbox to enable auto-configuration, which will automate the configuration process.
  • Uncheck the checkbox for manual configuration, allowing you to configure the connections manually.

For the “Delay Restore Time,” leave it at the default value unless there are specific design requirements. This parameter specifies the convergence time for the Multi-Site underlay and overlay control planes, with a range from 30 seconds to 1000 seconds.

If CloudSec is required for the topology, provide the necessary input in the “Multi-Site CloudSec” section. Since CloudSec is not used in this blog’s topology, this section is left at its defaults.

Resources

In the Resource section of the Multi-Site Domain (MSD) fabric creation template, you’ll configure the IP address pools for various purposes within the multi-site environment. Here are the IP address pools:

  1. Multi-site VTEP VIP Loopback IP Range: This pool is used to assign loopback IP addresses for the Virtual Tunnel Endpoint (VTEP) VIPs across different sites. These loopback IPs are typically used for cross site VXLAN overlay connectivity and routing.
  2. DCI IP Subnet IP Range and Subnet Mask: This pool is used to provide IP addresses for the Data Center Interconnect (DCI) links connecting ISN nodes with BGW nodes. Specify the range of IP address along with the subnet mask that will be allocated for the DCI links.

By defining these IP address pools in the Resource section, you ensure that there are sufficient addresses available for the VTEP VIPs and DCI links to facilitate seamless communication and connectivity between sites in the multi-site environment.

Add Member Fabrics


Once all the parameters for MSD have been configured according to the network requirements, architecture, and design, the next step is to add the VxLAN EVPN and Multi-Site External Network fabrics to the MSD fabric container. This integrates the fabrics into the overarching management framework provided by the MSD fabric, enabling centralized control and monitoring of the entire network infrastructure.

To add a member fabric to the Multi-Site Domain (MSD) fabric, follow these steps:

  1. Navigate to the Multi-Site Domain (MSD) overview page within the Cisco Nexus Dashboard Fabric Controller (NDFC) interface.
  2. Locate the section for child fabrics, which typically lists the individual member fabrics that are already part of the MSD.
  3. In the child fabrics section, find the actions dropdown menu. Click on it to reveal a list of available actions.
  4. Select the option “Move Fabric to MSD” from the dropdown menu. This action will provide a list of fabrics to choose from.
  5. Choose the fabric that you want to add to the MSD from the list provided.

By following these steps, you can efficiently add a member fabric to the Multi-Site Domain, facilitating centralized management and control of multiple fabrics within the MSD framework.

To add all VxLAN EVPN Network fabrics and Multi-Site External Network fabrics to the Multi-Site Domain (MSD), repeat the process outlined earlier for each fabric.

Create links between ISN nodes

Create a link between the ISN nodes in the first site and ISN nodes in the second site per the site connectivity requirement/design guide. Once the link is created, the configuration will be automatically pushed to the ISN nodes on both sites, ensuring that they are properly connected and able to communicate with each other.

To create a link between the Inter-Site Network (ISN) nodes in the first site and ISN nodes in the second site, follow these steps:

  • Navigate to the fabric overview page of the MSD fabric within the Cisco Nexus Dashboard Fabric Controller (NDFC) interface.
  • Locate the section for links, which lists the links already part of the MSD.
  • Under the Actions menu, select the create option for creating a link between ISN nodes.

Follow the prompts to specify the parameters of the link, including:

  • Link type: Choose the appropriate type of link – Inter-Fabric
  • Link sub-type: Specify sub-type as VRF_LITE.
  • Source and destination Multi-Site External Network fabric: Select the specific external network fabrics associated with the source and destination sites.
  • Specific ISN nodes: Choose the ISN nodes in each site that will be connected by the link.
  • L3 port connecting both ISN nodes: Specify the Layer 3 port or interface that will serve as the connection point between the ISN nodes.
  • Under general parameters:
    • IP address: Assign an IP address to the link interface for routing purposes.
    • Additional configuration details: Provide any other configuration parameters or settings required by your network design.
    • Enable ‘Auto Generate Configuration for Peer’
  • Under Default VRF:
    • Enable ‘Auto Generate Configuration on Default VRF’
    • Enable ‘Auto Generate Configuration for NX-OS / IOS XE Peer on Default VRF’
  • Once you’ve specified all the parameters, proceed to create the link by clicking ‘Save’.

By following these steps, you can establish a link between ISN nodes, with direct connectivity, in different sites within your VXLAN EVPN multi-site network.
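For reference, the configuration NDFC pushes for such a link amounts to a routed interface plus an eBGP IPv4 session between the two ISN nodes, roughly as follows (the interface and /30 addresses are assumed; the ASNs follow this lab’s External-65515 and External-65525 fabrics):

interface Ethernet1/5
  no switchport
  ip address 10.33.0.1/30
  no shutdown
!
router bgp 65515
  neighbor 10.33.0.2
    remote-as 65525
    address-family ipv4 unicast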

  • To ensure that the configuration changes take effect, navigate to the Actions menu on the fabric overview page and select the option to “Recalculate and Deploy.”
  • Click “Deploy All” to initiate the deployment process and push the updated configuration across the fabric.

By following these steps, you can establish connectivity between ISN nodes, as depicted on the topology, in different sites within your VXLAN EVPN multi-site network, facilitating seamless communication and data exchange across the entire network.

Verify and Deploy

Once you have finished adding all the child fabrics to the Multi-Site Domain (MSD), proceed to the Actions menu located on the top right corner of the MSD overview page within the Cisco Nexus Dashboard Fabric Controller (NDFC) interface.

From the Actions menu, select the option for “Recalculate and Deploy.” This action will initiate the process of recalculating and deploying configurations across all member fabrics within the MSD, ensuring consistency and synchronization of configurations across the entire multi-site network.

By performing this step, any changes or updates made to the MSD or its member fabrics will be applied as required, ensuring that the network operates efficiently and in accordance with the desired design specifications.

Topology

Once the configuration is successfully pushed from the Multi-Site Domain (MSD) fabric using the ‘Deploy All’ option in the Cisco Nexus Dashboard Fabric Controller (NDFC), the topology section of the NDFC interface will display the updated topology for the MSD fabric. This topology will reflect the interconnection between the member fabrics, including VxLAN EVPN network fabrics and Multi-Site External Network fabrics.

The topology view provides a visual representation of the network architecture, showing the connectivity between sites, the distribution of switches and devices, and the overall layout of the multi-site network infrastructure. With the configuration deployed, the MSD fabric is now ready for the deployment of overlay networks, VRFs, VLANs, and other network services as needed.

The topology view serves as a centralized management interface for monitoring and managing the MSD fabric, allowing administrators to visualize the network layout, troubleshoot connectivity issues, and make configuration changes as necessary.

Add Overlay Networks from the MSD Fabric

Great news! With the Multi-Site Domain (MSD) configured and ready, you can now proceed to add the overlay constructs, VRFs and networks, directly from the MSD overview page. This streamlined approach simplifies network management and configuration, allowing you to deploy and manage network resources across multiple sites from a single centralized fabric view.

From the MSD overview page, you’ll have access to a range of options for configuring overlay networks, VRFs, and networks. You can define the desired parameters for each network segment, specify routing and connectivity requirements, and assign resources as needed to support your network architecture.

Create and Add VRF

Before deploying an overlay network, you typically need to create a Virtual Routing and Forwarding (VRF) instance. A VRF provides logical separation of network resources, allowing multiple independent routing instances to coexist within the same physical infrastructure.

Here’s a general outline of the steps to deploy an overlay network by creating a VRF:

1. Create the VRF: Navigate to VRFs on the Cisco Nexus Dashboard Fabric Controller (NDFC) Fabric Overview page of the MSD fabric. Under the Actions menu, select the create option for creating a VRF and provide the necessary parameters, such as the VRF name and any specific configuration options required for your network design.

Key parameters such as the VRF name, VRF ID, and VLAN ID are essential for creating a VRF. Other details like the VRF VLAN name, VRF interface descriptions, and VRF description can be provided based on specific design documents and requirements.

Advanced options like route targets, MTU settings, routing tags, route maps, and maximum BGP paths offer additional customization possibilities. In this blog scenario, focusing on the fundamental parameters such as the VRF name, VRF ID, and VLAN ID while keeping other settings as default ensures a straightforward configuration process. This approach lays a solid foundation for the VRF, enabling efficient segmentation and routing within the overlay network.

As the network evolves or specific needs arise, revisiting the VRF configuration to adjust or add advanced settings can be done to accommodate new requirements.
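For context, the overlay VRF that NDFC ultimately renders on the switches looks roughly like this (the VRF name, L3 VNI, and VLAN are assumed example values):

vlan 2000
  vn-segment 50001
!
vrf context Tenant-1
  vni 50001
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn
!
interface Vlan2000
  no shutdown
  vrf member Tenant-1
  ip forward
!
interface nve1
  member vni 50001 associate-vrf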

https://deliabtech.com/data-center/cisco-vxlan-evpn-route-leaking-2-ndfc/ – blog post on route leaking leveraging the ‘Route Target’ tab on ‘Create VRF’ page.

2. Attach the VRF to Devices: Once the VRF is created, you’ll need to attach it to the devices within your network topology where the overlay network will be deployed. This step ensures that traffic belonging to the overlay network is routed correctly within the VRF context.

Navigate to VRFs on the Cisco Nexus Dashboard Fabric Controller (NDFC) Fabric Overview page of the MSD fabric. Select the VRF and, under the Actions menu, select Multi-Attach.

Choose the devices that require the VRF configuration and proceed to the next step by clicking “Next.”

On the following window, save your configuration changes and then proceed by clicking “Deploy.”


On the subsequent page, review the pending configuration changes that are set to be pushed to the respective network devices. Once you’ve verified that everything is correct, proceed to deploy the configuration by clicking “Deploy All.”

Verify: Once the VRF and associated configurations are in place, it’s important to verify them. This may involve checking proper deployment of the VRF on the devices, checking routing tables, and troubleshooting any issues that arise during the testing process.
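A few quick checks on the attached leaf switches help confirm the VRF deployment (the VRF name Tenant-1 and switch prompt are assumed examples):

Leaf1# show vrf Tenant-1               -> VRF exists and is in the up state
Leaf1# show nve vni                    -> the VRF’s L3 VNI is up on the NVE interface
Leaf1# show ip route vrf Tenant-1      -> tenant routing table is populated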

Create and Add Networks

Once the VRF has been verified and confirmed, the subsequent step involves deploying networks according to the specifications outlined in the provided topology or requirement document. Below is a summary of the network requirements for this blog:


Create networks: Create the networks under the Fabric Overview page of the MSD fabric. This involves defining the specific network segments, addressing schemes, and any other relevant configurations required for your overlay network deployment within the fabric. These networks are typically designed to facilitate communication between various endpoints or services.

To navigate to Networks on the Cisco Nexus Dashboard Fabric Controller (NDFC) Fabric Overview page of the MSD fabric and create a network, follow these steps:

1. Access the Fabric Overview Page: To access Fabric Overview page of the MSD fabric in the Cisco Nexus Dashboard Fabric Controller (NDFC), log in and navigate from the LAN section to the fabric view. Then, double-click on the MSD fabric.

2. Navigate to Networks: Look for the Networks section on the Fabric Overview page. This section typically provides an overview of the existing networks within your fabric.

3. Select Create Option: Under the Actions menu within the Networks section, select the “Create” option. This action will open a form for creating a new network.

4. Provide Necessary Parameters:

  • Network Name: Enter a descriptive name for the network.
  • VLAN ID: Specify the VLAN ID associated with the network segment.
  • VRF Name: Choose the Virtual Routing and Forwarding (VRF) instance to associate with this network. This determines the routing domain and isolation for the network.
  • Network ID: Define a unique identifier for the network.
  • IP Address: Provide the IP address range or subnet for the network. This defines the address space available to devices within the network segment and DAG (Distributed Anycast Gateway).
  • Specific Configuration Options: Depending on your network design and requirements, you may need to configure additional options as necessary.

5. Review and Confirm: Double-check all the provided parameters to ensure accuracy and consistency with your network design requirements.

6. Create the Network: Once you have provided all the necessary parameters and configurations, proceed to create the network by clicking “Create.”

7. Verify: After creating the network, verify its creation by checking the Networks section or relevant network configuration pages. Ensure that the network appears as expected and that its configurations align with your design.

Attach the Networks to Devices: Once the networks are created, you’ll need to attach them to the devices within your network topology where the overlay networks will be deployed. This step ensures that traffic belonging to the overlay networks is routed correctly within the VRF context of the Multi-Site network.

Navigate to Networks on the Cisco Nexus Dashboard Fabric Controller (NDFC) Fabric Overview page of the MSD fabric and select the networks you want to attach to devices within the MSD fabric. Under the Actions menu, select “Multi-Attach.”

Choose the devices that require the Networks configuration and proceed to the next step by clicking “Next.”

In the subsequent window, choose the interfaces corresponding to each network on each device per the requirement document. Afterward, click “Next” to advance to the next step.

Make sure the interface policy (int_access_host, int_trunk_host, int_trunk_classic, …) is in line with your requirements before you attach networks in the window below. For this blog’s lab, int_access_host is used on the Eth1/3 ports of the leaf switches; a sketch of what that policy renders to follows.
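As a rough illustration (the exact template output depends on the NDFC release and policy defaults), int_access_host renders to a standard host-facing access port along these lines:

interface Ethernet1/3
  switchport
  switchport mode access
  switchport access vlan 20
  spanning-tree port type edge
  no shutdown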

Review the summary that outlines the selected networks, devices, network attachments, and device-interface associations. Then choose the “Proceed to Full Switch Deploy” option and click “Save” to prompt NDFC to generate the configurations each device needs to deploy the networks.


Inspect the configurations generated by NDFC by navigating to the pending configuration for each device. After reviewing and ensuring the configurations align with the intended design, click “Deploy All” to push the configurations to the devices within the MSD.
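For reference, the pending configuration for one network typically resembles the sketch below, shown for the VLAN 20 network. The L2 VNI value 30020 is illustrative (it does not appear in the outputs), while the subnet, route tag 12345, and anycast-gateway MAC 2020.0000.00aa match the verification outputs that follow; the replication configuration under the NVE interface depends on the fabric’s replication mode:

fabric forwarding anycast-gateway-mac 2020.0000.00aa   -> fabric-wide DAG MAC
vlan 20
  vn-segment 30020                                     -> L2 VNI (illustrative value)
interface Vlan20
  vrf member msd-vrf
  no ip redirects
  ip address 10.10.20.1/24 tag 12345
  fabric forwarding mode anycast-gateway
  no shutdown
interface nve1
  member vni 30020
    ingress-replication protocol bgp                   -> or an mcast-group, per fabric settings
evpn
  vni 30020 l2
    rd auto
    route-target import auto
    route-target export auto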

Verify and Test: Once the networks and associated configurations are in place, it’s important to verify and test the overlay networks to ensure proper connectivity and functionality. This may involve validating end-to-end connectivity, checking routing tables, and troubleshooting any issues that arise during testing. A few useful commands are sketched below, and the remainder of this post walks through the results.
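Suggested checks on the switches:

show vlan id 20                        -> VLAN exists and ports are assigned
show nve vni                           -> L2 VNIs up on nve1
show ip interface brief vrf msd-vrf    -> anycast-gateway SVIs up
show mac address-table vlan 20         -> local and remote MACs learned
show nve peers                         -> NVE peering established (next section)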

Verification and Testing

NVE Peering

BG1-Site1# show nve peers
Interface Peer-IP                       State LearnType Uptime   Router-Mac       
--------- ----------------------------  ----- --------- -------- -----------------
nve1      10.10.100.2                   Up    CP        00:10:42 0200.0a0a.6402  -> remote site BGW (VIP - L100)
nve1      10.10.103.1                   Up    CP        00:11:26 521c.a7be.1b08  -> local site L2   (vPC PIP)
nve1      10.10.103.2                   Up    CP        00:11:26 0200.0a0a.6702  -> local site L1-2 (vPC VIP)
nve1      10.10.103.3                   Up    CP        00:11:26 520b.bf92.1b08  -> local site L1   (vPC PIP)
nve1      10.10.103.5                   Up    CP        00:10:46 5217.6b9d.1b08  -> local site BGW  (PIP)
nve1      10.10.104.1                   Up    CP        11:44:42 5202.192c.1b08  -> remote site BGW (PIP)
nve1      10.10.104.2                   Up    CP        11:44:42 5201.87be.1b08  -> remote site BGW (PIP) 

BG1-Site2# show nve peers
Interface Peer-IP                       State LearnType Uptime   Router-Mac       
--------- ----------------------------  ----- --------- -------- -----------------
nve1      10.10.100.1                   Up    CP        00:09:59 0200.0a0a.6401   -> remote site BGW (VIP - L100)
nve1      10.10.103.4                   Up    CP        11:43:14 521f.ba68.1b08   -> remote site BGW (PIP) 
nve1      10.10.103.5                   Up    CP        11:43:14 5217.6b9d.1b08   -> remote site BGW (PIP)
nve1      10.10.104.2                   Up    CP        00:09:02 5201.87be.1b08   -> local site BGW  (PIP)
nve1      10.10.104.3                   Up    CP        00:09:02 520a.765e.1b08   -> local site L1   (vPC PIP)
nve1      10.10.104.4                   Up    CP        00:09:02 521f.a254.1b08   -> local site L2   (vPC PIP)   
nve1      10.10.104.5                   Up    CP        00:09:02 0200.0a0a.6805   -> local site L1-2 (vPC VIP)
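
The 10.10.100.x peers above are the Multi-Site virtual IPs (VIPs) of the border gateways, while the PIP entries are each BGW’s individual primary IP. Judging from the “(VIP - L100)” annotations, the VIP sits on loopback100; a minimal sketch of the BGW configuration behind this behavior, assuming that interface numbering:

evpn multisite border-gateway 65125              -> site-id (matches the site number here)
interface loopback100
  ip address 10.10.100.1/32
interface nve1
  multisite border-gateway interface loopback100
  member vni 30020
    multisite ingress-replication                -> per-VNI Multi-Site replication on BGWs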

Routing

### Site - 65125 route output on one of the leafs 
L1-Site1# sh ip route vrf msd-vrf 
IP Route Table for VRF "msd-vrf"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

10.10.20.0/24, ubest/mbest: 1/0, attached
    *via 10.10.20.1, Vlan20, [0/0], 01:13:30, direct, tag 12345
10.10.20.1/32, ubest/mbest: 1/0, attached
    *via 10.10.20.1, Vlan20, [0/0], 01:13:30, local, tag 12345
10.10.20.10/32, ubest/mbest: 1/0, attached
    *via 10.10.20.10, Vlan20, [190/0], 00:48:17, hmm
10.10.20.20/32, ubest/mbest: 1/0
    *via 10.10.100.1%default, [200/2000], 00:36:51, bgp-65125, internal, tag 65225, segid: 50000 tunnelid: 0xa0a6401 encap: VXLAN  -> endpoint from remote site
10.10.40.0/24, ubest/mbest: 1/0, attached
    *via 10.10.40.1, Vlan40, [0/0], 01:13:28, direct, tag 12345
10.10.40.1/32, ubest/mbest: 1/0, attached
    *via 10.10.40.1, Vlan40, [0/0], 01:13:28, local, tag 12345
10.10.40.10/32, ubest/mbest: 1/0, attached
    *via 10.10.40.10, Vlan40, [190/0], 00:03:25, hmm
10.10.40.20/32, ubest/mbest: 1/0
    *via 10.10.100.1%default, [200/2000], 00:05:40, bgp-65125, internal, tag 65225, segid: 50000 tunnelid: 0xa0a6401 encap: VXLAN  -> endpoint from remote site
 
!truncated!

### Site - 65225 route output on one of the leafs
L2-Site2# sh ip route vrf msd-vrf 
IP Route Table for VRF "msd-vrf"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

10.10.20.0/24, ubest/mbest: 1/0, attached
    *via 10.10.20.1, Vlan20, [0/0], 01:10:33, direct, tag 12345
10.10.20.1/32, ubest/mbest: 1/0, attached
    *via 10.10.20.1, Vlan20, [0/0], 01:10:33, local, tag 12345
10.10.20.10/32, ubest/mbest: 1/0
    *via 10.10.100.2%default, [200/2000], 00:33:53, bgp-65225, internal, tag 65125, segid: 50000 tunnelid: 0xa0a6402 encap: VXLAN  -> endpoint from remote site
10.10.20.20/32, ubest/mbest: 1/0, attached
    *via 10.10.20.20, Vlan20, [190/0], 00:36:15, hmm
10.10.40.0/24, ubest/mbest: 1/0, attached
    *via 10.10.40.1, Vlan40, [0/0], 01:10:31, direct, tag 12345
10.10.40.1/32, ubest/mbest: 1/0, attached
    *via 10.10.40.1, Vlan40, [0/0], 01:10:31, local, tag 12345
10.10.40.10/32, ubest/mbest: 1/0
    *via 10.10.100.2%default, [200/2000], 00:00:27, bgp-65225, internal, tag 65125, segid: 50000 tunnelid: 0xa0a6402 encap: VXLAN  -> endpoint from remote site
10.10.40.20/32, ubest/mbest: 1/0, attached
    *via 10.10.40.20, Vlan40, [190/0], 00:02:43, hmm
!truncated!

 

MAC Address Tables

### Test systems
### Host in VLAN 20 Site-65125 -  52:54:00:01:3F:44 
### Host in VLAN 40 Site-65125 -  52:54:00:1D:9A:51   
### Host in VLAN 20 Site-65225 -  52:54:00:05:8E:25  
### Host in VLAN 40 Site-65225 -  52:54:00:05:31:1C 

### Site-65125 leaf switch

L1-Site1# show mac address-table 
Legend: 
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link,
        (T) - True, (F) - False, C - ControlPlane MAC, ~ - vsan,
        (NA)- Not Applicable A – ESI Active Path, S – ESI Standby Path
   VLAN     MAC Address      Type      age     Secure NTFY Ports
---------+-----------------+--------+---------+------+----+------------------
*   20     5254.0001.3f44   dynamic  NA         F      F    Eth1/3             -> MAC address of a local site host
C   20     5254.0005.8e25   dynamic  NA         F      F    nve1(10.10.100.1)  -> MAC address of a remote site host
C   40     5254.0005.311c   dynamic  NA         F      F    nve1(10.10.100.1)  -> MAC address of a remote site host
* 2000     0200.0a0a.6401   static   -         F      F    nve1(10.10.100.1)
* 2000     0200.0a0a.6702   static   -         F      F    Vlan2000
* 2000     520b.bf92.1b08   static   -         F      F    Vlan2000
* 2000     5217.6b9d.1b08   static   -         F      F    nve1(10.10.103.5)
* 2000     521f.ba68.1b08   static   -         F      F    nve1
+    1     5254.001c.c06c   dynamic  NA         F      F    vPC Peer-Link
+   40     5254.001d.9a51   dynamic  NA         F      F    vPC Peer-Link      -> MAC address of a local site host
G    -     0200.0a0a.6702   static   -         F      F    sup-eth1(R)
G    -     2020.0000.00aa   static   -         F      F    sup-eth1(R)
G    -     520b.bf92.1b08   static   -         F      F    sup-eth1(R)
G 3600     520b.bf92.1b08   static   -         F      F    sup-eth1(R)
G   20     520b.bf92.1b08   static   -         F      F    sup-eth1(R)
G   40     520b.bf92.1b08   static   -         F      F    sup-eth1(R)
G 2000     520b.bf92.1b08   static   -         F      F    sup-eth1(R)
G 3600     521c.a7be.1b08   static   -         F      F    vPC Peer-Link(R)
G   20     521c.a7be.1b08   static   -         F      F    vPC Peer-Link(R)
G   40     521c.a7be.1b08   static   -         F      F    vPC Peer-Link(R)
G 2000     521c.a7be.1b08   static   -         F      F    vPC Peer-Link(R)

### Test systems
### Host in VLAN 20 Site-65125 -  52:54:00:01:3F:44 
### Host in VLAN 40 Site-65125 -  52:54:00:1D:9A:51   
### Host in VLAN 20 Site-65225 -  52:54:00:05:8E:25  
### Host in VLAN 40 Site-65225 -  52:54:00:05:31:1C 

### Site-65225 leaf switch

L1-Site2# show mac address-table 
Legend: 
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link,
        (T) - True, (F) - False, C - ControlPlane MAC, ~ - vsan,
        (NA)- Not Applicable A – ESI Active Path, S – ESI Standby Path
   VLAN     MAC Address      Type      age     Secure NTFY Ports
---------+-----------------+--------+---------+------+----+------------------
C   20     5254.0001.3f44   dynamic  NA         F      F    nve1(10.10.100.2)   -> MAC address of a remote site host
*   20     5254.0005.8e25   dynamic  NA         F      F    Eth1/3              -> MAC address of a local site host
C   40     5254.001d.9a51   dynamic  NA         F      F    nve1(10.10.100.2)   -> MAC address of a remote site host
* 2000     0200.0a0a.6402   static   -         F      F    nve1(10.10.100.2)
* 2000     0200.0a0a.6805   static   -         F      F    Vlan2000
* 2000     5201.87be.1b08   static   -         F      F    nve1
* 2000     5202.192c.1b08   static   -         F      F    nve1(10.10.104.1)
* 2000     520a.765e.1b08   static   -         F      F    Vlan2000
+   40     5254.0005.311c   dynamic  NA         F      F    vPC Peer-Link       -> MAC address of a local site host
G    -     0200.0a0a.6805   static   -         F      F    sup-eth1(R)
G    -     2020.0000.00aa   static   -         F      F    sup-eth1(R)
G    -     520a.765e.1b08   static   -         F      F    sup-eth1(R)
G 3600     520a.765e.1b08   static   -         F      F    sup-eth1(R)
G   20     520a.765e.1b08   static   -         F      F    sup-eth1(R)
G   40     520a.765e.1b08   static   -         F      F    sup-eth1(R)
G 2000     520a.765e.1b08   static   -         F      F    sup-eth1(R)
G 3600     521f.a254.1b08   static   -         F      F    vPC Peer-Link(R)
G   20     521f.a254.1b08   static   -         F      F    vPC Peer-Link(R)
G   40     521f.a254.1b08   static   -         F      F    vPC Peer-Link(R)
G 2000     521f.a254.1b08   static   -         F      F    vPC Peer-Link(R)

Connectivity Testing

### Ping from SITE-65125 host in VLAN 20 (10.10.20.10) to the three other hosts (10.10.20.20, 10.10.40.10 & 10.10.40.20)

cisco@2010:~$ ping 10.10.20.20
PING 10.10.20.20 (10.10.20.20): 56 data bytes
64 bytes from 10.10.20.20: seq=0 ttl=64 time=232.386 ms
64 bytes from 10.10.20.20: seq=1 ttl=64 time=64.545 ms
64 bytes from 10.10.20.20: seq=2 ttl=64 time=215.981 ms
64 bytes from 10.10.20.20: seq=3 ttl=64 time=158.089 ms

--- 10.10.20.20 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 64.545/167.750/232.386 ms
cisco@2010:~$ ping 10.10.40.10
PING 10.10.40.10 (10.10.40.10): 56 data bytes
64 bytes from 10.10.40.10: seq=0 ttl=63 time=65.035 ms
64 bytes from 10.10.40.10: seq=1 ttl=63 time=56.570 ms
64 bytes from 10.10.40.10: seq=2 ttl=63 time=60.875 ms
64 bytes from 10.10.40.10: seq=3 ttl=63 time=60.374 ms

--- 10.10.40.10 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 56.570/60.713/65.035 ms
cisco@2010:~$ ping 10.10.40.20
PING 10.10.40.20 (10.10.40.20): 56 data bytes
64 bytes from 10.10.40.20: seq=0 ttl=60 time=427.584 ms
64 bytes from 10.10.40.20: seq=1 ttl=60 time=349.736 ms
64 bytes from 10.10.40.20: seq=2 ttl=60 time=268.189 ms
64 bytes from 10.10.40.20: seq=3 ttl=60 time=253.409 ms

--- 10.10.40.20 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 253.409/324.729/427.584 ms

### Ping from SITE-65125 host in VLAN 40 (10.10.40.10) to the three other hosts (10.10.20.10, 10.10.20.20 & 10.10.40.20)

cisco@4010:~$ ping 10.10.20.10
PING 10.10.20.10 (10.10.20.10): 56 data bytes
64 bytes from 10.10.20.10: seq=0 ttl=63 time=79.991 ms
64 bytes from 10.10.20.10: seq=1 ttl=63 time=44.229 ms
64 bytes from 10.10.20.10: seq=2 ttl=63 time=44.317 ms
64 bytes from 10.10.20.10: seq=3 ttl=63 time=43.639 ms

--- 10.10.20.10 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 43.639/53.044/79.991 ms
cisco@4010:~$ ping 10.10.20.20
PING 10.10.20.20 (10.10.20.20): 56 data bytes
64 bytes from 10.10.20.20: seq=0 ttl=60 time=171.010 ms
64 bytes from 10.10.20.20: seq=1 ttl=60 time=242.002 ms
64 bytes from 10.10.20.20: seq=2 ttl=60 time=420.765 ms
64 bytes from 10.10.20.20: seq=3 ttl=60 time=132.650 ms

--- 10.10.20.20 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 132.650/241.606/420.765 ms
cisco@4010:~$ ping 10.10.40.20
PING 10.10.40.20 (10.10.40.20): 56 data bytes
64 bytes from 10.10.40.20: seq=0 ttl=64 time=124.470 ms
64 bytes from 10.10.40.20: seq=1 ttl=64 time=188.870 ms
64 bytes from 10.10.40.20: seq=2 ttl=64 time=160.098 ms
64 bytes from 10.10.40.20: seq=3 ttl=64 time=138.786 ms

--- 10.10.40.20 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 124.470/153.056/188.870 ms

### Ping from SITE-65225 host in VLAN 20 (10.10.20.20) to the three other hosts (10.10.20.10, 10.10.40.10 & 10.10.40.20)

cisco@2020:~$ ping 10.10.20.10
PING 10.10.20.10 (10.10.20.10): 56 data bytes
64 bytes from 10.10.20.10: seq=0 ttl=64 time=138.823 ms
64 bytes from 10.10.20.10: seq=1 ttl=64 time=197.598 ms
64 bytes from 10.10.20.10: seq=2 ttl=64 time=184.575 ms
64 bytes from 10.10.20.10: seq=3 ttl=64 time=131.962 ms

--- 10.10.20.10 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 131.962/163.239/197.598 ms
cisco@2020:~$ ping 10.10.40.10
PING 10.10.40.10 (10.10.40.10): 56 data bytes
64 bytes from 10.10.40.10: seq=0 ttl=60 time=209.305 ms
64 bytes from 10.10.40.10: seq=1 ttl=60 time=323.393 ms
64 bytes from 10.10.40.10: seq=2 ttl=60 time=219.362 ms
64 bytes from 10.10.40.10: seq=3 ttl=60 time=125.477 ms

--- 10.10.40.10 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 125.477/219.384/323.393 ms
cisco@2020:~$ ping 10.10.40.20
PING 10.10.40.20 (10.10.40.20): 56 data bytes
64 bytes from 10.10.40.20: seq=0 ttl=63 time=21.119 ms
64 bytes from 10.10.40.20: seq=1 ttl=63 time=54.645 ms
64 bytes from 10.10.40.20: seq=2 ttl=63 time=58.917 ms
64 bytes from 10.10.40.20: seq=3 ttl=63 time=41.934 ms
--- 10.10.40.20 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 21.119/42.432/58.917 ms

### Ping from SITE-65225 host in VLAN 40 (10.10.40.20) to the three other hosts (10.10.20.10, 10.10.20.20 & 10.10.40.10)

cisco@4020:~$ ping 10.10.20.10
PING 10.10.20.10 (10.10.20.10): 56 data bytes
64 bytes from 10.10.20.10: seq=0 ttl=60 time=191.507 ms
64 bytes from 10.10.20.10: seq=1 ttl=60 time=189.276 ms
64 bytes from 10.10.20.10: seq=2 ttl=60 time=229.656 ms
64 bytes from 10.10.20.10: seq=3 ttl=60 time=146.317 ms
^C
--- 10.10.20.10 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 146.317/189.189/229.656 ms
cisco@4020:~$ ping 10.10.20.20
PING 10.10.20.20 (10.10.20.20): 56 data bytes
64 bytes from 10.10.20.20: seq=0 ttl=63 time=60.148 ms
64 bytes from 10.10.20.20: seq=1 ttl=63 time=47.864 ms
64 bytes from 10.10.20.20: seq=2 ttl=63 time=93.845 ms
64 bytes from 10.10.20.20: seq=3 ttl=63 time=82.015 ms
^C
--- 10.10.20.20 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 47.864/70.968/93.845 ms
cisco@4020:~$ ping 10.10.40.10
PING 10.10.40.10 (10.10.40.10): 56 data bytes
64 bytes from 10.10.40.10: seq=0 ttl=64 time=118.985 ms
64 bytes from 10.10.40.10: seq=1 ttl=64 time=142.684 ms
64 bytes from 10.10.40.10: seq=2 ttl=64 time=164.894 ms
64 bytes from 10.10.40.10: seq=3 ttl=64 time=184.637 ms
^C
--- 10.10.40.10 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 118.985/152.800/184.637 ms

Troubleshooting Commands

  • show mac address-table address xxxx.xxxx.xxxx
  • show system internal l2fm l2dbg macdb address xxxx.xxxx.xxxx vlan 10
  • show system internal l2fm event-history deb | include xxxx.xxxx.xxxx
  • show ip arp vrf xxxxx
  • show forwarding vrf msd-vrf adjacency
  • show l2route evpn mac evi 20 (vlan-id)
  • show l2route evpn mac-ip evi 20 (vlan-id)
  • show system internal l2rib event-history mac
  • show system internal l2rib event-history mac-ip
  • show bgp l2vpn evpn vni-id xxxxx route-type 2
  • show bgp l2vpn evpn vni-id xxxxx (vni-id)
  • show bgp l2vpn evpn xxxx.xxxx.xxxx
  • show bgp internal event-history event | include xxxx.xxxx.xxxx
  • show nve multisite dci-links
  • show nve interface nve 1 detail
  • show nve peers
  • show ip route 10.10.40.10/32 vrf xxxxx
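
As a worked example, tracing the Site-65125 VLAN 20 host (52:54:00:01:3F:44 / 10.10.20.10) through the fabric could follow this sequence (commands only; outputs omitted):

show mac address-table address 5254.0001.3f44   -> learned locally (Eth1/3) or via nve1?
show l2route evpn mac evi 20                    -> MAC present in the L2RIB?
show bgp l2vpn evpn 5254.0001.3f44              -> EVPN type-2 route advertised/received?
show ip arp vrf msd-vrf                         -> IP-to-MAC binding on the gateway
show ip route 10.10.20.10/32 vrf msd-vrf        -> /32 via HMM locally, via BGP remotely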
