Two-Arm Load Balancer with ACI PBR Destination in an L3Out

Belete Ageze 2xCCIE | CCDE

In a load balancer network design, ingress and egress traffic are normally required to pass through the same load balancer, except for direct server return (DSR). To steer the return traffic through the same load balancer that received the incoming traffic, use one of the following deployment options:

  • Load balancer as a default gateway for servers
  • Source Network Address Translation (SNAT)
  • Policy Based Redirect (PBR)

Load Balancer Deployment Options

When inserting a load balancer into a Cisco ACI fabric, it is important to understand the desired traffic flow, the advantage of using the ACI fabric anycast gateway, the benefit of selective traffic redirection, and whether DSR is required. Load balancers can be inserted into the ACI fabric using one or more of the following deployment options:

  • Two-arm (inline) with LB as a gateway – SNAT or PBR is not required because the load balancer is in the traffic path based on routing.
  • Two-arm (inline) with ACI fabric as a gateway – the load balancer is deployed between two different VRFs (VRF stitching). SNAT or PBR is not required because the load balancer is in the traffic path based on routing.
  • Two-arm with ACI fabric as a gateway and SNAT or PBR – SNAT or PBR is required to steer the return traffic back to the load balancer.
  • One-arm with ACI fabric as a gateway and SNAT or PBR – SNAT or PBR is required to steer the return traffic back to the load balancer.

“Arm” refers to an interface or VLAN interface; “one-arm” and “two-arm” simply describe the number of interfaces created/used on the load balancer. In a one-arm design, traffic enters and leaves the load balancer on the same interface. In a two-arm design, incoming traffic and return traffic use different interfaces.

This post discusses the design and deployment of a two-arm load balancer (LB), specifically F5 BIG-IP, with the ACI PBR destination in an L3Out.

Related posts: One arm LB with ACI PBR destination in an L3Out, and One arm LB with ACI Policy Based Redirect.

Two Arm Load Balancer with Fabric as the Gateway

  • More than one interface/VLAN interface is used: one for the incoming traffic and one for the return traffic.
  • The virtual IP address should be in the same subnet as the physical servers or behind an L3Out.
  • The servers' default gateway is the ACI fabric.
  • SNAT (client IP is not retained) or PBR (client IP is retained) is required for a symmetric traffic flow. If neither SNAT nor PBR is used, the return traffic is not redirected to the LB; it flows back to the source using the routing and bridging of the fabric, and the source discards it because the source IP differs from the destination IP of the original traffic it sent.

SNAT vs. PBR

The use of SNAT on the load balancer or PBR on the ACI fabric can both steer the return traffic back to the load balancer, but the use of PBR offers these advantages:

  • With SNAT, the server cannot see the true client IP unless it is passed in the X-Forwarded-For (XFF) header and the server is able to log it; with PBR, the client IP is retained (see the sketch after this list).
  • ACI redirects traffic that matches a contract without relying on routing or bridging. ACI PBR selectively redirects only the interesting traffic to the load balancer.
  • For PBR, ACI automatically manages VLAN deployment on the ACI fabric and the virtual networks for service node connectivity.
  • ACI automatically connects and disconnects virtual Network Interface Cards (vNICs) for virtual service appliances.
  • ACI provides a more logical view of service insertion between consumer and provider EPGs.
  • ACI can redirect traffic to the service node without the need for the service node to be the default gateway of the servers.
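As an illustration of the first point above, a backend server behind a SNAT'd load balancer only sees the load balancer's self-IP as the source address; the original client IP has to be recovered from the XFF header, assuming the F5 virtual server is configured to insert it. A minimal Python (Flask) sketch of what the backend would see (illustrative only, not part of the tested setup):

# Minimal sketch: recovering the client IP on a backend server when the load
# balancer performs SNAT. Assumes the F5 virtual server inserts an
# X-Forwarded-For header; otherwise only the SNAT address (the LB self-IP)
# is visible to the server.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def whoami():
    seen_source = request.remote_addr                                # LB self-IP when SNAT is used
    client_ip = request.headers.get("X-Forwarded-For", seen_source)  # real client, if XFF is inserted
    return f"seen-source={seen_source} original-client={client_ip}\n"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

With PBR instead of SNAT, request.remote_addr is already the real client IP, which is the advantage described in the first bullet.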

Physical Topology

The ACI physical topology used to test the two-arm load balancer with PBR has two spines, one border leaf, and two leaf switches for the service and compute nodes. The F5 BIG-IP VE is set up in an active-standby HA cluster.

Two arm Load Balancer with ACI PBR Physical topology

Logical Topology

The server bridge domain serves as the gateway both for the servers and for the load balancer's provider connector. The clients accessing the servers through the VIP can be outside of the fabric or within the fabric. The traffic coming from the external network (through the L3Out) and from the internal client EPG to the VIP – 172.16.100.100 – is routed to the load balancer. The incoming traffic does not require PBR because ACI routing and bridging forwards the traffic to the VIP. The load balancer then routes the traffic to the servers, using the configured load-balancing method, through the provider connector. The server returns the traffic to the load balancer using the configured PBR (unidirectional PBR). The load balancer updates the source IP and forwards the return traffic back to ACI through the consumer connector, and ACI forwards it back to the client/source.

PBR uses a service graph to insert the LB between the client/source EPG (external or internal client) and the provider/server EPG. The service graph rendering creates a shadow EPG for the load balancer interface.

Two arm Load Balancer with ACI PBR  logical topology

Provider and Consumer Connectors

The traffic destined to the provider should be sent out via the provider connector of the service device, and the traffic destined to the consumer should be sent out via the consumer connector of the service device. Otherwise, traffic could be dropped because there is no zoning rule to permit traffic from the consumer connector to the provider or from the provider connector to the consumer.

Requirements

  • The L3Out for the PBR destination must be in either the consumer or provider VRF.
  • L3Out with SVI, routed sub-interface, or routed interface is supported. (Infra L3Out, GOLF L3Out, SDA L3Out, or L3Out using floating SVI for PBR destination is not supported.)
  • Single pod, Multi-Pod, and Remote Leaf are supported. Multi-Site is not supported as of APIC Release 5.2.
  • If the consumer/provider EPG is an L3Out EPG, it must not be under the same L3Out for PBR destinations.
  • If the consumer/provider EPG is an L3Out EPG, it must not be under the service leaf nodes, where the L3Out for PBR destinations resides. If the consumer/provider EPG is a regular EPG—not an L3Out EPG—the consumer, provider, and the L3Out for PBR destinations can be under the same leaf. This consideration is applicable to the case where a consumer/provider EPG communicates with an L3Out EPG for a service device via another service device where PBR destination in an L3Out is enabled.
  • If the service device is in two-arm mode and one of the L3Outs for the PBR destinations learns 0.0.0.0/0 or 0::0 route, both arms of the service device must be connected to the same leaf node or the same vPC pair.
  • Mixing of PBR destinations in an L3 bridge domain and PBR destinations in an L3Out within the same function node in the service graph is not supported.
  • IP SLA tracking is mandatory for the PBR destination in an L3Out for better convergence.

Detailed requirements and design considerations are listed in the Cisco white paper at https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739971.html

Hardware, Platforms, Software and Assumptions for Step-by-Step Configuration

  • ACI – 5.2(1g)
    • Spine – N9K-C9332C
    • Leaf – N9K-C93180YC-EX
  • F5 BIG-IP VE – v13.0.0(Build 0.0.1645)
  • The basic ACI configuration is outside the scope of this document
  • Tenant, VRF, BD, EPG and l3out are pre-configured on the ACI fabric
  • The F5 BIG-IP VE configuration is outside the scope of this document
  • VLANs, self-IPs, HA, nodes, pools, and virtual servers are pre-configured on the F5 BIG-IP
  • The F5 LB is deployed as an unmanaged device in a service graph with PBR
  • The IP addressing setup is as shown below
    • Client/consumer subnet – 10.10.10.0/24
    • External client/consumer subnet – 192.168.10.0/24
    • Server/provider subnet – 10.10.11.0/24
    • Virtual server/VIP – 172.16.100.100/24
  • Self-IP subnet for the server side arm (same as server subnet)–
    • F1 – 10.10.11.3/24
    • F2 – 10.10.11.4/24
    • Floating – 10.10.11.5/24
  • Self-IP subnet for the client side arm (L3Out) –
    • F1 – 172.16.10.3/24
    • F2 – 172.16.10.4/24
    • Floating – 172.16.10.5/24
    • Leaf1 – 172.16.10.1/24
    • Leaf2 – 172.16.10.2/24
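For quick reference in the configuration sketches that follow, the same addressing plan can be captured in a small Python dictionary (the variable name is arbitrary; the values simply restate the list above):

# Addressing plan used in this post (values restate the list above).
ADDRESSING = {
    "client_internal_subnet": "10.10.10.0/24",
    "client_external_subnet": "192.168.10.0/24",
    "server_subnet": "10.10.11.0/24",
    "vip": "172.16.100.100",
    "server_arm_self_ips": {"f1": "10.10.11.3", "f2": "10.10.11.4", "floating": "10.10.11.5"},
    "client_arm_self_ips": {"f1": "172.16.10.3", "f2": "172.16.10.4", "floating": "172.16.10.5"},
    "l3out_svi": {"leaf1": "172.16.10.1", "leaf2": "172.16.10.2"},
}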

Step-by-Step Configuration for Two-Arm LB PBR

1. Create L4-L7 Device

Tenant > Services > L4-L7 > Devices (right click)

You can add one or more L4-L7 devices. This document will create two F5 LB devices as an active/standby high availability cluster.

Two arm Load Balancer with ACI PBR config

  • Enter the name for the device.
  • Select the service type – ADC for load balancer.
  • Select the device Type – Physical or Virtual.
  • Choose VMM domain or physical domain depending on the device type selection.
  • Click the + to add devices and Cluster interfaces.

Two arm Load Balancer with ACI PBR config
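The GUI steps above can also be verified (or scripted) through the APIC REST API. The sketch below is a minimal, hedged Python example: it authenticates to the APIC and lists the L4-L7 device objects (class vnsLDevVip) under the tenant. The APIC address, credentials, and tenant name are placeholders, not values from this setup; the later sketches reuse this session.

# Minimal sketch: log in to the APIC REST API and list the L4-L7 devices
# (class vnsLDevVip) under a tenant. APIC address, credentials, and tenant
# name are placeholders.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
TENANT = "F5PBR-Tenant"             # hypothetical tenant name
session = requests.Session()
session.verify = False              # lab only; use proper certificates in production

# Authenticate; the APIC returns a session cookie kept by the Session object.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# List the L4-L7 devices configured under the tenant.
resp = session.get(
    f"{APIC}/api/mo/uni/tn-{TENANT}.json",
    params={"query-target": "children", "target-subtree-class": "vnsLDevVip"},
)
resp.raise_for_status()
for mo in resp.json()["imdata"]:
    attrs = mo["vnsLDevVip"]["attributes"]
    print(attrs["name"], attrs["dn"])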

2. Create Service Graph Template

The Layer 4 – 7 service graph is a Cisco ACI feature used to insert Layer 4 – 7 service devices, such as firewalls, load balancers, and IPS appliances, between the consumer and provider EPGs.

Tenant > Services > L4-L7 > Service Graph Template (right click)

Two arm Load Balancer with ACI PBR config

  • Drag the device created in the step above to the consumer/provider block.
  • Give the service graph a name.
  • Choose between one-arm and two-arm.
  • Select route redirect.
  • If a bidirectional contract is needed (for F5 pools and nodes health monitoring) between the server EPG and the self-IP, the Direct Connect option should be set to true on the service graph template. Under the service graph template created, go to the Policy tab, then Connections, and change Direct Connect to true for both connectors (a verification sketch follows the figure below).

Two arm Load Balancer with ACI PBR config
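The Direct Connect setting can be read back over the REST session opened in the step 1 sketch instead of clicking through the GUI. The attribute name directConnect on class vnsAbsConnection is my reading of the object model, so treat it as an assumption; the defensive .get() keeps the sketch from failing if the attribute differs.

# Minimal sketch: read the service graph template connections and print the
# Direct Connect setting. Reuses session/APIC/TENANT from the step 1 sketch;
# the "directConnect" attribute name is an assumption.
resp = session.get(
    f"{APIC}/api/mo/uni/tn-{TENANT}.json",
    params={"query-target": "subtree", "target-subtree-class": "vnsAbsConnection"},
)
resp.raise_for_status()
for mo in resp.json()["imdata"]:
    attrs = mo["vnsAbsConnection"]["attributes"]
    print(attrs["dn"], "directConnect =", attrs.get("directConnect", "n/a"))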

3. Create L4-L7 Policy Based Redirect (PBR) policy

Configure the PBR node IP address, IP SLA monitoring, and the redirect health group. Starting with APIC Release 5.2, the MAC configuration is not mandatory for L3 PBR if IP SLA tracking is enabled. Since this document is based on 5.2(1g), the MAC is not configured.

Tenant > Policies > L4-L7 Policy-Based Redirect

  • Provide a name for the Policy-Based Redirect.
  • Create IP SLA Monitoring policy.
  • Click + sign to create the L3 destination.
  • Provide the IP address for L3 destination (the floating Self-IP).
  • Provide the MAC address of the L3 destination (it can be auto-discovered if an IP SLA monitoring policy is defined, for Release 5.2 and later).
  • Create a redirect Health Group.
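The resulting PBR policy and its L3 destinations can also be read back through the same REST session. The class names vnsSvcRedirectPol and vnsRedirectDest and the ip/mac attribute names are my reading of the object model; treat them as assumptions.

# Minimal sketch: read back the PBR policy and its destinations. Reuses
# session/APIC/TENANT from the step 1 sketch; class and attribute names are
# assumptions from my reading of the object model.
for cls in ("vnsSvcRedirectPol", "vnsRedirectDest"):
    resp = session.get(
        f"{APIC}/api/mo/uni/tn-{TENANT}.json",
        params={"query-target": "subtree", "target-subtree-class": cls},
    )
    resp.raise_for_status()
    for mo in resp.json()["imdata"]:
        attrs = mo[cls]["attributes"]
        print(cls, attrs["dn"], attrs.get("ip", ""), attrs.get("mac", ""))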

4. Apply the L4-L7 Service Graph Template to EPGs

Apply the L4-L7 service graph template to EPGs using the ‘Apply L4-L7 Service Graph Template’ wizard. Manually creating a device selection policy and applying the service graph to the contract achieves the same result.

Tenant > Services > L4-L7 > Service Graph Template > Select the right Service Graph Template > right click > Apply L4-L7 Service Graph Template.

  • Click ‘apply L4-L7 Service Graph Template’.
  • Select between EPG and ESG.
  • Choose the consumer / provider EPGs.
  • Select an existing contract or new contract.
  • On the next page, select the ‘General’ connector type.
  • Select the server BD.
  • This wizard will create the Device Selection Policy.
  • Confirm that the consumer and provider connectors have the right networks associated, and that the L4-L7 Policy-Based Redirect policy with the L3 destination is selected only on the provider side (since this is unidirectional PBR for F5).

Tenant > Services > L4-L7 > Device Selection Policies > select the policy under consideration
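The device selection policy that the wizard creates can be inspected the same way over the REST session from step 1. The class name vnsLIfCtx (logical interface context) and the connNameOrLbl attribute are my reading of the object model; treat them as assumptions.

# Minimal sketch: list the logical interface contexts (consumer/provider
# connectors) of the device selection policy. Reuses session/APIC/TENANT from
# the step 1 sketch; class and attribute names are assumptions.
resp = session.get(
    f"{APIC}/api/mo/uni/tn-{TENANT}.json",
    params={"query-target": "subtree", "target-subtree-class": "vnsLIfCtx"},
)
resp.raise_for_status()
for mo in resp.json()["imdata"]:
    attrs = mo["vnsLIfCtx"]["attributes"]
    print(attrs["dn"], "connector =", attrs.get("connNameOrLbl", "n/a"))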

5. Deployed Graph Instance

A deployed graph instance shows up under ‘Tenant > Services > L4-L7 > Deployed Graph Instances’ after the successful completion of the above steps.

Two arm Load Balancer with ACI PBR config
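The rendered graph instances can also be listed fabric-wide through the REST session from step 1. The class name vnsGraphInst is my reading of the object model, and the state attribute name is an assumption, hence the defensive .get().

# Minimal sketch: list the deployed (rendered) graph instances. Reuses the
# session from the step 1 sketch; class and attribute names are assumptions.
resp = session.get(f"{APIC}/api/node/class/vnsGraphInst.json")
resp.raise_for_status()
for mo in resp.json()["imdata"]:
    attrs = mo["vnsGraphInst"]["attributes"]
    print(attrs["dn"], attrs.get("state", ""))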

6. Verification

6.1 ‘show service redir info’

The ‘show service redir info’ output provides info on the destination group, the list of destinations and the health group status.

6.2 Traffic Flow and Contract – Internal Clients

Two arm Load Balancer with ACI PBR traffic flow

The output of ‘show zoning-rule’ below explains the contract and traffic flow of an internal client accessing the servers behind the F5 LB VIP on port 22.

As shown in the traffic flow representation above, the first flow is from the client to the ACI fabric and then to the F5 LB, so ACI needs a contract to allow communication between the client EPG (32771) and the F5 external EPG (16388). The red annotation on the ‘show zoning-rule’ output highlights the contract for this communication.

The next flow is from the F5 service EPG (49166) to the backend servers (16386). The green annotation on the ‘show zoning-rule’ output highlights the contract for this communication.

Then the server (16386) replies back to the client (32771). The blue annotation on the ‘show zoning-rule’ output highlights the contract for this communication. This is a redirect contract since the server to client communication uses unidirectional PBR.

The final step in the traffic flow is from the F5 LB external EPG (16388) to the client EPG (32771). The pink annotation on the ‘show zoning-rule’ output highlights the contract for this communication.

The yellow annotated zoning rule is due to the Direct Connect in step 2 above.

6.3 Traffic Flow and Contract- External Clients

Two arm Load Balancer with ACI PBR traffic flow external client

The output of ‘show zoning-rule’ below explains the contract and traffic flow of an external client accessing the servers behind the F5 LB VIP on port 22.

As shown in the traffic flow representation above, the first flow is from the external client to the ACI fabric and then to the F5 LB, so ACI needs a contract to allow communication between the external client EPG (49159) and the F5 external EPG (16388). The red annotation on the ‘show zoning-rule’ output highlights the contract for this communication.

The next flow is from the F5 service EPG (49166) to the server pool (16386). The green annotation on the ‘show zoning-rule’ output highlights the contract for this communication.

Then the server (16386) replies back to the client (49159). The blue annotation on the ‘show zoning-rule’ output highlights the contract for this communication. This is a redirect contract since the server to client communication uses unidirectional PBR.

The final step in the traffic flow is from the F5 LB external EPG (16388) to the client EPG (49159). The pink annotation on the ‘show zoning-rule’ output highlights the contract for this communication.

The yellow annotated zoning rule is due to the Direct Connect in step 2 above.

6.4 Access to the VIP from the Internal Consumer (Client) Computer

The SSH from the internal client to the VIP (172.16.100.100) works as expected. The LB method also works, as the output shows a different backend server (round robin) for each SSH attempt.

Two arm Load Balancer with ACI PBR test result

6.5 Access to the VIP from the External Consumer (Client) Computer

The SSH from the external client to the VIP (172.16.100.100) works as expected. The LB method also works, as the output shows a different backend server (round robin) for each SSH attempt.


Core-router# ssh cisco@172.16.100.100 vrf F5PBR source-ip 192.168.10.10
Outbound-ReKey for 172.16.100.100:22
Inbound-ReKey for 172.16.100.100:22
cisco@172.16.100.100's password: 
Welcome to Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-100-generic x86_64)

Last login: Fri Sep 16 23:50:49 2022 from 192.168.10.10
cisco@Web-1:~$ 

Core-router# ssh cisco@172.16.100.100 vrf F5PBR source-ip 192.168.10.10
Outbound-ReKey for 172.16.100.100:22
Inbound-ReKey for 172.16.100.100:22
cisco@172.16.100.100's password: 
Welcome to Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-100-generic x86_64)

Last login: Fri Sep 16 23:51:07 2022 from 192.168.10.10
cisco@Web-2:~$ 

Core-router# ssh cisco@172.16.100.100 vrf F5PBR source-ip 192.168.10.10
Outbound-ReKey for 172.16.100.100:22
Inbound-ReKey for 172.16.100.100:22
cisco@172.16.100.100's password: 
Welcome to Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-100-generic x86_64)

Last login: Fri Sep 16 23:51:16 2022 from 192.168.10.10
cisco@web-3:~$ 

7. Reference

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-743890.html

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739971.html
