Wired Intelligent Edge (Campus Switching and Routing)

Occasional Contributor I

Inter-connecting 4 x 8320

I am an HPE partner tasked with a two-site, 4 x 8320 design/deployment for a customer. Neither I nor local HPE were involved in the sale, so what we have ended up with isn't quite what we would have preferred. I have engaged HPE locally to answer the questions below.

 

The short version is that we have 4 x 8320s and a customer who is expecting a lot of SFP+ ports in each site, with cross-site layer 3 redundancy. There are 6 single-mode fiber pairs available for cross-site connectivity.

 

I am already familiar with how to configure VSX in a single-site scenario using multi-chassis LAG (VSX LAG) to an edge switch, including the keepalive requirements, and I have read the 10.01 VSX guide multiple times. I have some questions around some specific scenarios we have discussed.
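For context, the single-site baseline I have in mind is roughly this (interface numbers, the LAG ID, and the keepalive addresses are placeholders, and the exact syntax may vary slightly between AOS-CX releases):

```
! One member of a VSX pair; the peer mirrors this with the keepalive IPs swapped
interface lag 256
    no shutdown
    lacp mode active
interface 1/1/49
    no shutdown
    lag 256
interface 1/1/50
    no shutdown
    lag 256
vsx
    inter-switch-link lag 256
    role primary
    keepalive peer 192.168.0.2 source 192.168.0.1
```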

 

Scenario 1: Create one VSX pair per site

- Is this a viable configuration (VRRP on one member of each VSX only)?

- Are the inter-site links set up as a standard LAG or a multi-chassis (VSX) LAG? Will this need to be one LAG or two?

 


[Attachment: scen1.PNG — Scenario 1 diagram]

Scenario 2

In this scenario I would run a single cross-site VSX pair, and treat the other switches as edge devices only. This means I could use active gateway or VRRP for layer 3 HA... 

 

[Attachment: scen2.PNG — Scenario 2 diagram]

 

... but ... I now have the issue that my hosts in each site need to be dual-connected for path redundancy (there is not enough fiber to cross-connect between sites), so my NIC teams would be connected to one VSX member and one edge switch. The hosts do not require 802.3ad/LACP; they will be vSphere hosts with "Route Based on Originating Virtual Port" configured on each of their NIC teams.

 

I think this one will work, but it's a bit "clumsy" and possibly opens itself up to human error at some later stage when I'm not around to ensure the rules are followed.

 

Note that all switches cascaded from either site are single path only.

 

Any commentary on this one would be appreciated.


[Attachment: scensp.PNG — single-path cascade diagram]

Scenario 3

This one is a bit of a hybrid of the two above. I might be able to reclaim some SFP+ ports on a 5406 edge switch in one of the two sites. This would let me solve the host pathing problem at one site, while still having it at the other. Site 2 is for backup and DR, so if necessary we can consider running the NIC teams active/standby at that site.

 

[Attachment: scn5.PNG — Scenario 3 diagram]

 

 

 

Scenario 4

The fallback scenario is STP + VRRP, something like this. I know how to do this, but it's the last resort, and in general would be a disappointing result given the technology available.

[Attachment: scn4.PNG — Scenario 4 diagram]

 

 

Aruba Employee

Re: Inter-connecting 4 x 8320

Hi burgess,

 

From my first read, I would go with scenario 1. It is the cleanest one, but I'm also unsure how to configure L2 redundancy between the two sites.

Just to get a clear understanding: the interconnects between the two sites are L2?

 

BR

Florian


visit our Youtube Channel:
https://www.youtube.com/channel/UCFJCnuXFGfEbwEzfcgU_ERQ/featured
Please visit my personal blog as well:
https://www.flomain.de
Guest Blogger

Re: Inter-connecting 4 x 8320

I would go for scenario 1 as well. I am not sure about the VRRP configuration, but looking at it purely theoretically, I would say it should work. VRRP should be able to run between members of two different VSX clusters. VRRP should also support 1+N redundancy, so you should be able to add all four 8320s to the VRRP instance.

 

I configured the interconnect between two VSX clusters once in an L2 scenario. I connected the two VSX clusters using an MC-LAG configuration on each cluster.
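The relevant pieces on each cluster looked roughly like this (the LAG number, port, and VLAN list are placeholders):

```
! On both members of each VSX cluster: one multi-chassis LAG towards the remote cluster
interface lag 100 multi-chassis
    no shutdown
    no routing
    vlan trunk native 1
    vlan trunk allowed all
    lacp mode active
interface 1/1/47
    no shutdown
    lag 100
```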

@rene_booches | AMFX #26, ACMX #438, ACCX #725, ACDX #760, CCNP R&S, CEH | Co-owner/Solution Specialist@4IP / blog owner@booches.nl
Contributor I

Re: Inter-connecting 4 x 8320

Hi burgess,

 

I'm with Florian on this; scenario 1 seems a lot cleaner.

 

You can create a single VSX LAG between sites with many links. If the bandwidth between sites is not that high, just 2 or 3 links would be more than enough. Note that all traffic that needs to be routed would go to the site where the VRRP is active.

 

This is my recommendation though:

 

Have you thought about having active gateway on just the pair at one site? Running VRRP between sites opens the possibility of a split-brain. If you run AGW at a single site, you would only lose connectivity if both members of that VSX were actually shut down.

 

If you need to upgrade one pair, that shouldn't be an issue at all. During the upgrade, while one member reboots, the other continues to provide L2 access and L3 connectivity.

 

 

About scenario 2, I wouldn't recommend it: how would the vSphere hosts know which link is the "active" one? Also, you are tying your hands regarding LACP for near-future deployments; you would not be able to provide LACP redundancy for an "access" switch within a single site.

 

Regards!

Aarón

Re: Inter-connecting 4 x 8320

Please use scenario 1, which is the best from a topology viewpoint.

The 4 interconnect links are part of a single VSX LAG, which simplifies routing a lot. Use the same active-gateway vIP/vMAC on all 4 x 8320s:

The benefit is that in case of ESXi maintenance, you can move VMs without losing the MAC of the default gateway. You may see duplicate ARP responses (which is not a big deal).
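On every 8320 the SVI would then look something like this: the vIP/vMAC are identical everywhere, while the real interface IP differs per switch (the addresses and vMAC below are examples, and the active-gateway syntax differs slightly between AOS-CX releases):

```
! Identical active-gateway vIP/vMAC on all four 8320s; real IP is per-switch
interface vlan 10
    ip address 10.1.10.2/24
    active-gateway ip mac 12:01:00:00:01:00
    active-gateway ip 10.1.10.1
```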

 

Occasional Contributor I

Re: Inter-connecting 4 x 8320

Thanks Florian. Yes - intersite is Layer 2.

Occasional Contributor I

Re: Inter-connecting 4 x 8320

Thanks. I should have pointed out that scenario 1 is my preferred option, but I don't know if it is viable, so I need to consider the alternatives. I'm looking to determine whether this option will actually work and how to configure it: specifically, whether VRRP can be configured on a single member, and whether I need to configure the inter-site link as a LAG or a VSX LAG.

 

Note that the VSX documentation states the following:

- Active gateway and VRRP are mutually exclusive

- VRRP is limited to 2 members on CX-OS (8320)

 

Scenario 2 is viable only because active gateway would not be used.
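If scenario 2 were used, the VRRP configuration on the two cross-site VSX members would be plain VRRP on the SVI, roughly like this (addresses and priority are placeholders; the backup member would carry a lower priority, and syntax may vary by release):

```
! VRRP master candidate; the peer uses a different interface IP and lower priority
interface vlan 10
    ip address 10.1.10.2/24
    vrrp 10 address-family ipv4
        address 10.1.10.1 primary
        priority 200
        no shutdown
```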

Occasional Contributor I

Re: Inter-connecting 4 x 8320

Also, Aaron: cross-site layer 3 redundancy is the core requirement here. Management of split-brain is not a concern (the design already handles this; I just didn't show it here).

Contributor I

Re: Inter-connecting 4 x 8320

Then you should follow Vincent's solution: two VSX pairs interconnected with a single VSX LAG, and active gateway configured on all 8320s sharing the virtual MAC and virtual IPs.

 

 

Aruba Employee

Re: Inter-connecting 4 x 8320

To have the gateway always on the local VSX, would it make sense to play with ACLs on the interconnect links and block the ARP entries for the active gateway? That way, clients at each site would only get ARP responses from the local VSX.
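Untested, but conceptually that could be a MAC ACL on the interconnect LAG that drops ARP frames sourced from the shared active-gateway vMAC (the ACL name, the vMAC, and the exact classifier/apply syntax here are assumptions; verify against your AOS-CX release before relying on it):

```
! Untested sketch: keep active-gateway ARP responses site-local
access-list mac block-agw-arp
    10 deny 12:01:00:00:01:00 any arp
    20 permit any any any
interface lag 100
    apply access-list mac block-agw-arp in
```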

