Wired Intelligent Edge

Bring performance and reliability to your network with the HPE Aruba Networking Core, Aggregation, and Access layer switches. Discuss the latest features and functionality of your switching devices, and find ways to improve security across your network to bring together a mobile-first solution.

Aruba 5400R - VSF confusion

  • 1.  Aruba 5400R - VSF confusion

    Posted Jul 06, 2020 10:47 AM

    Hi Experts, 

     

    I am confused about VSF stacking, which I am setting up on a 5400R. A snap is also attached. Here is the scenario:

     

    1) I have 2 x 5412R connected via a 10G copper cable (8-port 10GT module)

    2) I configured the VSF using the two commands below:

     vsf member 1 link 1 A8

     vsf enable domain 2

    3) The VSF between the two switches formed perfectly, and I am seeing ports 1/A1, 2/A1, etc.

    4) I connected two servers (as shown in the attached snap). Server1 is connected physically to SW1 and Server2 is connected physically to SW2.

    5) Both servers are in the same VLAN (VLAN 10)

    6) The question is: if I ping from Server1 to Server2, will it work? Will the communication cross the VSF link? Right now the ping is not getting through, although I am not on site and there are some ambiguities on the server end.

     

    Can someone confirm whether L2/L3 communication crosses the VSF link in a case like mine?
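
    For reference, here is roughly what I would check from the VSF commander (1/A2 and 2/A2 are hypothetical placeholders for the actual server-facing ports):

        show vsf                     (both members should be listed as active)
        vlan 10
           untagged 1/A2,2/A2        (Server1 port on member 1, Server2 port on member 2)
           exit
        show vlans 10                (both server ports should appear as untagged members)

    If either check fails, the problem is on the switch side rather than on the servers.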



  • 2.  RE: Aruba 5400R - VSF confusion

    MVP GURU
    Posted Jul 06, 2020 11:24 AM

    Hi! Host "A" traffic is going to cross the VSF link to host "B" if host "A" is single-leg connected to VSF Member 1 and, concurrently, host "B" is single-leg connected to VSF Member 2... that is pretty much crystal clear.

     

    The only way to avoid communication crossing the VSF link is (as is usual in VSF scenarios) to terminate each host with a LACP aggregation (dual legged, at least) to both VSF members... or, though that is not a really desirable scenario, to connect both hosts to the same VSF member. A rough sketch follows.
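
    As a minimal sketch (the port names and the trunk ID trk10 are hypothetical), a dual-legged LACP aggregation terminating on both VSF members would look roughly like this:

        trunk 1/A2,2/A2 trk10 lacp   (one leg on each VSF member)
        vlan 10
           untagged trk10            (the trunk, not the physical ports, joins the VLAN)
           exit

    With both legs up, each host can be reached through its local member, so steady-state traffic does not have to traverse the VSF link.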



  • 3.  RE: Aruba 5400R - VSF confusion

    Posted Jul 06, 2020 11:37 AM

    Dear Parnassus,

     

    I am honored that you have replied. I have seen your posts before, and your replies are so crystal clear that they just make sense.

     

    I won't drag this thread out, but the actual problem is this: we have servers with dual connections to both VSF members, dual legged as you said. For some reason, the connections on SW2 are not working properly. All the servers being tested run Red Hat Linux with NIC teaming. The deployment team has said there is no need to configure any EtherChannel, LACP, or static trunk on the switch, so we haven't configured any trunk, and apparently it's not working.

     

    Can you advise whether trunk configuration is indeed required on the switch?



  • 4.  RE: Aruba 5400R - VSF confusion

    MVP GURU
    Posted Jul 06, 2020 01:10 PM

    @Ronin101 wrote: The deployment team has said there is no need to configure any EtherChannel, LACP, or static trunk on the switch, so we haven't configured any trunk, and apparently it's not working.

    It really depends... it's always about what you are trying to achieve and what design approaches your systems let you deploy. As an example, consider a VMware ESXi host with, say, just two ports (on the same NIC or on different NICs). You can:

    (a) connect those two ports to the VSF (one port into VSF Member 1, the other one into VSF Member 2) and do absolutely nothing on the VSF side; or

    (b) use non-protocol (static) aggregation, which in HP jargon is Port Trunking of type "trunk" rather than "lacp", and correspondingly do the same on the terminating ports of your VSF (trunk 1/A1,2/A1 trk100 trunk); or

    (c) use LACP (dynamic) aggregation and correspondingly configure the terminating ports on your VSF to be of type lacp (trunk 1/A1,2/A1 trk100 lacp).

    Whether you select (a), (b), or (c) really depends on what was configured on the server side and on the amount of automatic resiliency you want for those links.

     

    Consider that (a), but neither (b) nor (c) (at least until you build a solution based on other virtualization technologies), can be used against two separate switches, i.e., where VSF is not a viable option.

     

    All this preamble is to say that conditions vary; the contrast between (b) and (c) is sketched below.
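
    To make the (b)-versus-(c) contrast concrete, the two switch-side commands differ only in the trailing keyword (trk100 and the port names follow the examples above; you would pick one variant for a given pair of ports):

        trunk 1/A1,2/A1 trk100 trunk    (option b: static, no protocol negotiation)
        trunk 1/A1,2/A1 trk100 lacp     (option c: dynamic, negotiated via IEEE 802.3ad)

    With (b) the switch simply treats the ports as aggregated, so the server side must be configured to match; with (c) the aggregation only comes up once both ends actually speak LACP.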

     

    Back on track... I would suggest considering LACP Port Trunking on the VSF side IF your server team can configure port teaming/bonding of mode 4 (LACP, IEEE 802.3ad) on your servers... that way your servers will be dual homed (dual legged) into the VSF members, and the Link Aggregation Control Protocol will enhance resiliency (against port, cable, module, chassis, server NIC, and server port failures) and will also improve throughput (remember that the usage of the links in a Link Aggregation Group = Port Trunk is governed by the src/dst type of the egressing traffic).
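
    Once the server team enables 802.3ad bonding, a rough way to verify the partnership from the switch side (trk100 as above) is:

        show trunks    (lists each trunk with its type and member ports)
        show lacp      (shows the LACP status of each participating port)

    Both legs should appear as active LACP participants; if the server is not really running 802.3ad, an lacp-type trunk will simply not come up.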

     

    I have dozens of aggregated server connections, plus uplinks/downlinks to other switches, made through LACP Port Trunks (working with 2|3|4x1G or 2|4x10G ports) terminating on a VSF, and not a single issue in years of flawless operation (on the contrary... only benefits).



  • 5.  RE: Aruba 5400R - VSF confusion

    Posted Jul 31, 2020 04:28 AM

    Dear 

     



  • 6.  RE: Aruba 5400R - VSF confusion
    Best Answer

    MVP GURU
    Posted Jul 06, 2020 01:16 PM

    @Ronin101 wrote: 1) I have 2 x 5412R connected via a 10G copper cable (8-port 10GT module) 2) I configured the VSF using the two commands below: vsf member 1 link 1 A8 / vsf enable domain 2 3) The VSF between the two switches formed perfectly, and I am seeing ports 1/A1, 2/A1, etc.

    Pay attention that the VSF link's aggregated throughput should be enough to sustain the highest amount of traffic that can potentially flow from hosts/systems connected to VSF Member 1 to hosts/systems connected to VSF Member 2, and vice versa.

     

    So it would be a design error to work with just a 1x10G VSF link (1/A1 <--> 2/A1) while using all the remaining 7+7 ports (70 Gbps of potential throughput per module/chassis) for hosts: traffic generated by those systems may traverse the VSF link, and the VSF link's limited-by-design throughput would rapidly become the bottleneck.

     

    A possible way to overcome this worst-case scenario is to use the "dual-homed servers" approach BUT also to double (or further increase) the VSF link capacity, as sketched below.
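
    As a rough sketch (assuming the standard vsf member link port-list syntax, and that port A7 is free on both chassis; adjust the ports to your hardware), the VSF link could be widened to two 10G ports per member:

        vsf member 1 link 1 A7-A8    (two physical ports now back VSF link 1)
        vsf member 2 link 1 A7-A8
        show vsf link detail         (verify that both ports are up in the link)

    The exact port choice here is an assumption; the point is simply that the VSF link is itself an aggregation and can carry more than one physical port.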