Wired Intelligent Edge


Bring performance and reliability to your network with the HPE Aruba Networking Core, Aggregation, and Access layer switches. Discuss the latest features and functionality of your switching devices, and find ways to improve security across your network to bring together a mobile-first solution.

Inter-connecting 4 x 8320

  • 1.  Inter-connecting 4 x 8320

    Posted Nov 05, 2018 01:11 AM

    I am an HPE partner tasked with a two-site, 4 x 8320 design/deployment for a customer. Neither I nor HPE locally was involved in the sale, so what we have ended up with isn't quite what we would have preferred. I have engaged HPE locally to answer the questions below.

     

    Short version is that we have 4 x 8320 and a customer who is expecting a lot of SFP+ ports in each site with cross-site layer 3 redundancy. There are 6 single-mode pairs available for cross-site connectivity.

     

    I am already familiar with how to configure VSX in a single-site scenario using multi-chassis LAG (VSX LAG) to an edge switch, including the keepalive requirements, and I have read the 10.01 VSX guide multiple times. I have some questions about some specific scenarios we have discussed.

     

    Scenario 1: Create one VSX pair per site

    - Is this a viable configuration (VRRP on one member in each VSX only)?

    - Are the links between the sites set up as a LAG or a multi-chassis LAG? Will this need to be one LAG or two?

     


    scen1.PNG

    Scenario 2

    In this scenario I would run a single cross-site VSX pair, and treat the other switches as edge devices only. This means I could use active gateway or VRRP for layer 3 HA... 

     

    scen2.PNG

     

    ... but I now have the issue that my hosts in each site need to be dual-connected for path redundancy (there is not enough fiber to cross-connect between sites), and I would be connecting each NIC team to one VSX member and one edge switch. The hosts do not require 802.3ad/LACP. They will be vSphere hosts with "Route Based on Originating Virtual Port" configured on each of their NIC teams.

     

    I think this one will work, but it's a bit "clumsy" and possibly opens itself up to human error at some later stage, when I'm not around to ensure the rules are followed.

     

    Note that all switches cascaded from either site are single path only.

     

    Any commentary on this one would be appreciated.


    scensp.PNG

    Scenario 3

    This one is a bit of a hybrid of the two above. I might be able to reclaim some SFP+ ports on a 5406 edge switch in one of the two sites. This would let me solve the host pathing problem at one site, while still having it at the other. Site 2 is for backup and DR, so if necessary we can consider running the NIC teams as active/standby at this site.

     

    scn5.PNG

     

     

     

    Scenario 4

    The fallback scenario is STP + VRRP, something like this. I know how to do this, but it's the last resort and in general would be a disappointing result given the technology available.

    scn4.PNG

     

     



  • 2.  RE: Inter-connecting 4 x 8320

    EMPLOYEE
    Posted Nov 05, 2018 03:13 AM

    Hi burgess,

     

    From my first read, I would go with scenario 1. It is the cleanest one. But I'm also unsure how to configure L2 redundancy between the two sites.

    Just to get a clear understanding: the interconnects between the two sites are L2?

     

    BR

    Florian



  • 3.  RE: Inter-connecting 4 x 8320

    Posted Nov 05, 2018 04:39 AM

    Thanks Florian. Yes - intersite is Layer 2.



  • 4.  RE: Inter-connecting 4 x 8320

    Posted Nov 05, 2018 03:42 AM

    I would go for scenario 1 as well. I am not sure about the VRRP configuration, but looking at it purely theoretically, I would say it should work. VRRP should be able to work between members of different VSX clusters. VRRP should also support 1+N redundancy, so you should be able to add all four 8320s to the VRRP instance.

     

    I configured the interconnect between 2 VSX clusters once in an L2 scenario. I connected the two VSX clusters by using an MC-LAG configuration on each cluster.



  • 5.  RE: Inter-connecting 4 x 8320

    Posted Nov 05, 2018 03:51 AM

    Hi burgess,

     

    I'm with Florian on this; scenario 1 seems a lot cleaner.

     

    You can create a single VSX LAG between sites with many links. If the bandwidth between sites is not that high, just 2 or 3 links would be more than enough. Note that all traffic that needs to be routed would go to the site where the VRRP is active.
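
    A rough sketch of what such a cross-site VSX LAG could look like (the LAG number, member interfaces, and VLAN IDs here are hypothetical), configured on both members of each VSX pair:

        interface lag 100 multi-chassis
            no shutdown
            no routing
            vlan trunk native 1
            vlan trunk allowed 10,20

        interface 1/1/49
            no shutdown
            lag 100

    The same LAG definition goes on all four switches, so each VSX pair presents the inter-site links as a single multi-chassis LAG to the other pair.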

     

    This is my recommendation though:

     

    Have you thought about having active gateway on just the pair in one site? Running VRRP between sites opens the possibility of a split-brain. If you run AGW on a single site, you would only lose connectivity if both members of that VSX were actually shut down.

     

    If you need to upgrade one pair, that shouldn't be an issue at all. During the upgrade, while one member reboots, the other one provides L2 access and L3 connectivity.

     

     

    About scenario 2, I wouldn't recommend it: how would the vSphere hosts know which link is the "active" one? Also, you are tying your hands by ruling out LACP for near-future deployments; you would not be able to provide LACP redundancy for an "access" switch in a single site.

     

    Regards!

    Aarón



  • 6.  RE: Inter-connecting 4 x 8320

    EMPLOYEE
    Posted Nov 05, 2018 04:01 AM

    Please use scenario 1, which is the best from a topology viewpoint.

    The 4 interconnect links are part of a single VSX LAG, which simplifies routing a lot. Use the same active-gateway vIP/vMAC on the 4x 8320:

    the benefit is that in case of ESXi maintenance, you can move VMs without losing the MAC of the default gateway. You may see duplicate ARP responses (which is not a big deal).

     



  • 7.  RE: Inter-connecting 4 x 8320

    EMPLOYEE
    Posted Nov 05, 2018 05:03 AM

    To have the gateway always on the local VSX, would it make sense to play with ACLs on the interconnect links and block the ARP replies for the active gateway? That way, clients on each site would only get ARP responses from the local VSX.



  • 8.  RE: Inter-connecting 4 x 8320

    Posted Nov 05, 2018 04:43 AM

    Thanks. I should have pointed out that scenario 1 is my preferred option, but I don't know if it is viable, so I need to consider the alternatives. I'm looking to determine whether this option will actually work and how to configure it. Specifically, whether VRRP can be configured on a single member, and whether I need to configure the inter-site link as a LAG or a VSX LAG.

     

    Note that the VSX documentation states the following:

    - active gateway and VRRP are mutually exclusive

    - VRRP is limited to 2 members on AOS-CX (8320)

     

    Scenario 2 is viable only because active gateway would not be used.



  • 9.  RE: Inter-connecting 4 x 8320

    Posted Nov 05, 2018 04:46 AM

    Also, Aarón: cross-site layer 3 redundancy is the core requirement here. Management of split-brain is not a concern (the design already includes this; I just didn't show it here).



  • 10.  RE: Inter-connecting 4 x 8320

    Posted Nov 05, 2018 04:49 AM

    Then you should follow Vincent's solution: two VSX pairs interconnected with a single VSX LAG, and active gateway configured on all 8320s sharing the virtual MAC and virtual IPs.

     

     



  • 11.  RE: Inter-connecting 4 x 8320

    Posted Nov 05, 2018 05:08 AM

    @fefa2k wrote:

    Then you should follow Vincent's solution: two VSX pairs interconnected with a single VSX LAG, and active gateway configured on all 8320s sharing the virtual MAC and virtual IPs.

     

     


    How would two VSX pairs share an active gateway configuration? 



  • 12.  RE: Inter-connecting 4 x 8320
    Best Answer

    EMPLOYEE
    Posted Nov 05, 2018 05:28 AM

    Reminder: active-gateway is "just" a way to achieve an ARP response for downstream clients. There is no protocol. So having 2, 3, or 4 devices sharing the same VIP/VMAC does not have any downside besides multiple ARP responses. You may consider filtering these ARP responses between VSX pairs, but I'm not sure it is worth the effort considering the very small amount of traffic that ARP responses represent.

    The real IPs of the same SVI extended between DCs must, of course, be different.

    Say VSX pair 1 with CX1 & CX2, and VSX pair 2 with CX3 & CX4.

    Consider interface vlan 10:

    same active-gateway configuration on CX1, CX2, CX3, and CX4;

    real IP on CX1: a.b.c.2, on CX2: a.b.c.3, on CX3: a.b.c.4, on CX4: a.b.c.5.

    Note that it provides active-active DC for outbound traffic (this is fine for DR/backup).
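
    For illustration, a minimal sketch of what this could look like on CX1, using a hypothetical 10.1.10.0/24 in place of a.b.c.0/24 with 10.1.10.1 as the VIP (the VMAC value is just an example, matching the convention suggested later in this thread):

        interface vlan 10
            ip address 10.1.10.2/24
            active-gateway ip mac 00:00:00:00:01:01
            active-gateway ip 10.1.10.1

    CX2, CX3, and CX4 carry the identical two active-gateway lines; only the ip address line differs (.3, .4, .5).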



  • 13.  RE: Inter-connecting 4 x 8320

    Posted Nov 05, 2018 06:45 AM

    Thanks Vincent. This is what I was looking for. The VSX guide does not read this way. I'll reply again tomorrow evening after some testing.

     

    To summarise ...

     

    - Interconnect between VSX clusters is a VSX LAG (multi-chassis LAG) on both sides

    - Identical active gateway config on all four switches

    - ACLs should not be necessary on the interconnect

     


    @vincent.giles wrote:

    Reminder: active-gateway is "just" a way to achieve an ARP response for downstream clients. There is no protocol. So having 2, 3, or 4 devices sharing the same VIP/VMAC does not have any downside besides multiple ARP responses. You may consider filtering these ARP responses between VSX pairs, but I'm not sure it is worth the effort considering the very small amount of traffic that ARP responses represent.

    The real IPs of the same SVI extended between DCs must, of course, be different.

    Say VSX pair 1 with CX1 & CX2, and VSX pair 2 with CX3 & CX4.

    Consider interface vlan 10:

    same active-gateway configuration on CX1, CX2, CX3, and CX4;

    real IP on CX1: a.b.c.2, on CX2: a.b.c.3, on CX3: a.b.c.4, on CX4: a.b.c.5.

    Note that it provides active-active DC for outbound traffic (this is fine for DR/backup).


     

     



  • 14.  RE: Inter-connecting 4 x 8320

    Posted Nov 06, 2018 04:30 AM

    Just want to thank everyone for their input here.

     

    Originally I thought I was going to be forced into one of the workaround scenarios I discussed here, because I was told by HPE locally that I could not have the same active gateway on 2 VSX pairs. Vincent set me on the right path, and then about 9 hours later I received an email with this from HPE global.

     

    Apparently the below will be included in the 10.02 VSX guide.

     

    Capture.PNG



  • 15.  RE: Inter-connecting 4 x 8320

    Posted Nov 06, 2018 04:40 AM

    Hey burgess,

     

    Vincent can correct me if I'm wrong, but I think you can already do that on version 10.01. As it was stated before, AGW has no protocol; as soon as a packet arrives at the AGW of any switch it will be forwarded, so you don't need to wait for the 10.02 release.

     

    Cheers,

     

    Aarón



  • 16.  RE: Inter-connecting 4 x 8320

    Posted Nov 06, 2018 04:42 AM

    @fefa2k wrote:

    Hey burgess,

     

    Vincent can correct me if I'm wrong, but I think you can already do that on version 10.01. As it was stated before, AGW has no protocol; as soon as a packet arrives at the AGW of any switch it will be forwarded, so you don't need to wait for the 10.02 release.

     

    Cheers,

     

    Aarón


    Yeah - I configured it today on 10.01 and it is all working well. What I mean is that HPE have told me that this diagram will be included in the 10.02 version of the doc.



  • 17.  RE: Inter-connecting 4 x 8320

    Posted Dec 06, 2018 04:41 AM

    Gents, awesome topic! I'm implementing option 1 at the moment.

    But does someone know if there is a workaround for the "Maximum 16 VMACs can be configured" limitation?

     



  • 18.  RE: Inter-connecting 4 x 8320

    EMPLOYEE
    Posted Dec 06, 2018 05:57 AM

    The 16-VMAC limit is really there to address very niche cases, like a dual-homed L3 subnet for servers (having a default route on one NIC and a specific set of routes on the second NIC).

    We recommend using a single VMAC value that is very intuitive for the network support team to identify, something like 00:00:00:00:01:01. The associated IPv4 VMAC and IPv6 VMAC must be different (in case you use v6 as well).

    Remember that this VMAC is always local to the SVI, and no traffic is sourced from that VMAC besides ARP responses and the periodic hello broadcast that refreshes the MAC tables of peer switches.
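
    A sketch of that convention with both address families on one SVI (the addresses are hypothetical, and the exact active-gateway syntax can vary between AOS-CX releases, so verify it against the VSX guide):

        interface vlan 10
            ip address 10.1.10.2/24
            active-gateway ip mac 00:00:00:00:01:01
            active-gateway ip 10.1.10.1
            ipv6 address 2001:db8:10::2/64
            active-gateway ipv6 mac 00:00:00:00:01:02
            active-gateway ipv6 2001:db8:10::1

    The same pair of VMAC values can then be reused on every active-gateway SVI, which keeps you well under the 16-VMAC limit.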



  • 19.  RE: Inter-connecting 4 x 8320

    Posted Dec 06, 2018 05:14 PM

    I think what Vincent is saying is that one VMAC for multiple active-gateway interfaces is fine, other than in the dual-homed situation he mentioned.

     

    For my implementation I used the same VMAC for 40+ active-gateway interfaces, but used a different VMAC for anything on the same L2 segment as a routed firewall interface. We have some systems which need to be dual-homed into the "LAN" and also have static routes to the DMZ.



  • 20.  RE: Inter-connecting 4 x 8320

    Posted Dec 16, 2018 12:31 AM

    I would also go with option 1, with one caveat. VSX failover is close to hitless (<0.5 seconds - https://community.arubanetworks.com/t5/Wired-Intelligent-Edge-Campus/8320-VSF/m-p/441062/highlight/true#M3552), but it does pause all flows momentarily. Depending on what is connected to it, this may or may not be an issue.



  • 21.  RE: Inter-connecting 4 x 8320

    Posted Jan 10, 2019 03:36 PM

    I too am about to set up something similar to this. I was thinking about using OSPF/ECMP with strictly routed ports between the switches. I was going to assign those ports static IPs, add them into OSPF routing, and rely on ECMP to handle the balancing, which in turn should remove the need for LAGs. Thoughts?
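
    A minimal sketch of one such routed point-to-point link on AOS-CX (the /31 addressing, OSPF process/area, and interface number are hypothetical; ECMP across equal-cost OSPF paths is default behaviour):

        router ospf 1
            area 0.0.0.0

        interface 1/1/49
            no shutdown
            ip address 192.0.2.0/31
            ip ospf 1 area 0.0.0.0
            ip ospf network point-to-point

    Note the caveat in the reply below, though: routed ports only cover routed traffic, so any VLAN that must exist in both sites would still need an L2 path.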



  • 22.  RE: Inter-connecting 4 x 8320

    EMPLOYEE
    Posted Jan 14, 2019 08:32 AM

    If you need same-subnet access among the 4 switches, you need to carry the associated VLANs between the 2 DCs through the LAG to extend these VLANs.



  • 23.  RE: Inter-connecting 4 x 8320

    Posted May 09, 2019 04:19 AM

    And should you include or exclude these data VLANs from the ISL between the VSX pairs?



  • 24.  RE: Inter-connecting 4 x 8320

    EMPLOYEE
    Posted May 09, 2019 04:51 AM

    They have to be inside the ISL.
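
    For instance, a typical ISL definition that carries all VLANs, including the cross-DC data VLANs (the LAG number and keepalive addressing here are hypothetical):

        interface lag 256
            no shutdown
            no routing
            vlan trunk native 1 tag
            vlan trunk allowed all

        vsx
            inter-switch-link lag 256
            role primary
            keepalive peer 192.168.0.2 source 192.168.0.1

    With vlan trunk allowed all on the ISL, any VLAN tagged on the cross-site VSX LAG is carried inside the ISL as well.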



  • 25.  RE: Inter-connecting 4 x 8320

    Posted May 09, 2019 07:04 AM

    Thanks Vincent,

     

    We are currently in a support case, where there is some confusion, so to be 100% clear:

     

    1. You need to configure 1 MC-LAG on each VSX?

    2. Tag all the data VLANs that cross the DCs on both MC-LAGs, and on both ISLs?

    2019-05-09 13_00_32-EM DJ - Netwerk fk - Visio Professional.jpg



  • 26.  RE: Inter-connecting 4 x 8320

    EMPLOYEE
    Posted May 09, 2019 09:33 AM

    I'm reaching out to support to discuss this further with them.



  • 28.  RE: Inter-connecting 4 x 8320

    EMPLOYEE
    Posted May 09, 2019 10:18 AM

    Let me follow up off-line. The topology is valid, but the previous one you shared in your email with support is not, so it is better to continue through email with all the contributors.



  • 29.  RE: Inter-connecting 4 x 8320

    Posted May 17, 2020 04:54 AM

    Could your team update this thread with the methodology of how to set up OSPF, backup routes, etc., while implementing the DCI VSX-LAG solution? This thread made it very clear how to set up Active Gateway, but seems to have gone quiet when discussing routing. Thanks.



  • 30.  RE: Inter-connecting 4 x 8320

    MVP GURU
    Posted May 17, 2020 08:31 AM

    @nvadekar wrote: Could your team update this thread with the methodology of how to set up OSPF, backup routes, etc., while implementing the DCI VSX-LAG solution?

    Hi, the original purpose of this thread was not to discuss OSPF or DCI approaches specifically, so I strongly suggest you open a whole new thread to explain your scenario and describe your issues.


    @nvadekar wrote: This thread made it very clear how to set up Active Gateway, but seems to have gone quiet when discussing routing.

    See above.



  • 31.  RE: Inter-connecting 4 x 8320

    Posted May 17, 2020 08:40 AM

    OK, understood. To start, can you let me know where there is further information about DCI VSX-LAG documentation? Earlier in this thread a PPTX was shared that it was suggested would be included in the 10.02 docs; I checked the 10.04 docs and did not see it. I have not been able to find any VRD or Arubapedia documentation that explains best practices or provides reference material for this type of solution. I would like to explore reference models and best practices for routing in this type of DCI.

     

    thank you kindly



  • 32.  RE: Inter-connecting 4 x 8320

    Posted May 17, 2020 08:50 AM

    I thought I would mention that this thread's implementation is considered a DCI solution by the DCI courseware.



  • 33.  RE: Inter-connecting 4 x 8320

    EMPLOYEE
    Posted May 18, 2020 03:59 AM

    Hi, 

    The VSX best practices guide (read Appendix F) is available on ASP and HPE:

    https://support.hpe.com/hpsc/doc/public/display?docId=a00094242en_us

    It is also on AFP if you're a partner.

     

    Regarding routing, the topic could be included, but it would require covering multiple aspects, leading to a long paper:

    - routing protocols (OSPF, BGP)

    - active/passive or active/active DC pair for egress traffic: the easy part

    - active/passive or active/active for return (ingress) traffic: the complex part, due to traffic hair-pinning between DCs. How to manage this would obviously be outside the scope of the VSX best practices guide.

    Nevertheless, here are some options:

    - use a global traffic load balancer

    - /32 advertisement based on VM location (DC1 or DC2): we have an Aruba solution for this (NAE).

    I suggest contacting your local Aruba contact, as this topic requires more than a few messages in a forum thread to analyze your requirements.

     

    Regards,