Wired Intelligent Edge


Bring performance and reliability to your network with the HPE Aruba Networking Core, Aggregation, and Access layer switches. Discuss the latest features and functionality of your switching devices, and find ways to improve security across your network to bring together a mobile-first solution.

Aruba 3810M -- Backplane Stacking and Distributed Trunking

  • 1.  Aruba 3810M -- Backplane Stacking and Distributed Trunking

    Posted Jul 04, 2019 09:12 AM
      |   view attached

    Hi,

     

    I'm planning to build a setup with two stacks, each made up of two different Aruba 3810M models (1x 3810M-48G PoE+ and 1x 3810M-16X), and to connect both stacks through a Distributed Trunk. The server(s) should also be connected to both stacks via a Distributed Trunk for redundancy.

     

    Questions:

    -) Backplane stacking of different 3810M models should be possible, correct?

    -) Is it possible to use backplane stacking and Distributed Trunking at the same time without problems? In other words, is this a recommended and supported Aruba design?

    -) Is there anything special to note in the server-side configuration regarding Distributed Trunking?

    -) As far as I can see, there is NO support for 10GBASE-T SFP+ transceivers on the 3810M. Do you have a recommendation for a predominantly 10G switch (preferably copper)?

     

    thanks in advance!

     

     



  • 2.  RE: Aruba 3810M -- Backplane Stacking and Distributed Trunking

    EMPLOYEE
    Posted Jul 08, 2019 07:48 AM

    Yes, you can use stacking modules on 3810s and backplane-stack the different 3810M models together. They will then operate as one logical switch.

    Once they are stacked together, you can simply use link aggregation (LAG), which is much simpler than using Distributed Trunking.

    On the server side you just have to enable LAG and off you go.
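    To illustrate, a cross-member LAG on a backplane stack might look like the following on the ArubaOS-Switch CLI. This is a minimal sketch, assuming hypothetical port numbers (`1/1` means stack member 1, port 1) and an example VLAN ID; adjust to your hardware and software version.

    ```
    ! Bundle one port from each stack member into a single LACP trunk.
    ! Because the stack is one logical switch, this survives a member failure.
    trunk 1/1,2/1 trk1 lacp

    ! Tag the server-facing VLAN on the logical trunk interface.
    vlan 10
       tagged trk1
    ```

    On the server, the matching configuration is just a standard LACP (802.3ad) NIC team across the two uplinks.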

     

    Yes, you are correct that there is no support for 10GBASE-T SFP+ transceivers; however, you can use the 4-port Smart Rate module, which gives you up to 10GbE over Cat6A on each of those ports.

     



  • 3.  RE: Aruba 3810M -- Backplane Stacking and Distributed Trunking

    EMPLOYEE
    Posted Jul 08, 2019 02:34 PM

    Greetings!

     

    To amplify on using distributed trunking and backplane stacking in parallel: this can be done as long as the two stacks are both running the same software version. This gives you the advantage of redundant connections while maintaining separate control planes between stacks. This also provides the option to deploy smaller (2-3 member) stacks for variable port count requirements for which a pair of 5400R chassis would be excessive or to accommodate rack space constraints.



  • 4.  RE: Aruba 3810M -- Backplane Stacking and Distributed Trunking

    Posted Jul 10, 2019 07:50 AM
      |   view attached

    @Matthew_Fern: the advantages you mention are exactly the reasons why I would like to implement it this way :)

    But what I don't understand is the statement: "this can be done as long as the two stacks are both running the same software version."
    What does this mean in the case of a software change/update on one of the two stacks? For a certain time the software versions would surely differ.

    Furthermore, I have added another graphic in which, in addition to the server, a switch stack (model = 2930M) is connected to the two DTs as well. Would it be correct that the configuration on the server and on the switch-stack side would "only" be a normal LAG? Or is there anything about DT that I have to consider on those devices as well?

     

    In general, what is your personal experience or opinion on Distributed Trunking?
    Regarding: "So when they are stacked together, you can just use link aggregation (LAG), which is much simpler than using Distributed Trunking."



  • 5.  RE: Aruba 3810M -- Backplane Stacking and Distributed Trunking

    Posted Jul 10, 2019 09:40 AM

    Interesting thread, this is the first I have heard of this distributed trunking feature.

     

    My take is that, for the diagram posted, an alternative would be to configure the links from Stack C-A and C-B as separate VLANs with routing and switch virtual interfaces (SVIs). On the host end, either simply use host NIC teaming set to redundant cold standby (plain switch-port configuration; all config lives in the host's NIC team), or again place the two links from the host in separate VLANs with separate host IPs and use a routing protocol to control traffic flow.
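    A rough sketch of the routed alternative on the ArubaOS-Switch CLI, as seen from Stack C, might be the following. All VLAN IDs, ports, and addresses are made-up examples, and the exact OSPF syntax varies by software version; treat this as an outline, not a tested configuration.

    ```
    ! Enable IP routing on the switch
    ip routing

    ! Routed point-to-point VLAN toward Stack A
    vlan 101
       name "Uplink-Stack-A"
       untagged 1
       ip address 10.0.101.2 255.255.255.252

    ! Separate routed VLAN toward Stack B
    vlan 102
       name "Uplink-Stack-B"
       untagged 2
       ip address 10.0.102.2 255.255.255.252

    ! A routing protocol (OSPF here) then handles failover between the two paths
    router ospf
       area backbone
    ```

    The trade-off versus DT or stacking is that failover happens at Layer 3 (routing reconvergence) instead of Layer 2 link aggregation.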



  • 6.  RE: Aruba 3810M -- Backplane Stacking and Distributed Trunking

    MVP GURU
    Posted Jul 10, 2019 12:24 PM

    @adamg wrote: Interesting thread, this is the first I have heard of this distributed trunking feature.

    Because you never searched for Distributed Trunking on the HPE Community (Networking section)... For example, the very first threads about the newly introduced VSF were started there, and I can assure you there are at least a dozen threads in which DT was discussed (including DT-versus-VSF comparisons when VSF was announced). DT, compared to VSF, is historic.



  • 7.  RE: Aruba 3810M -- Backplane Stacking and Distributed Trunking

    EMPLOYEE
    Posted Jul 10, 2019 01:33 PM

    Distributed trunking has actually been around for quite a while — it dates back to the old ProCurve 3500 series and K.14 software! It just doesn't come up in discussions all that often as it doesn't seem to be as widely used as backplane stacking and VSF these days.



  • 8.  RE: Aruba 3810M -- Backplane Stacking and Distributed Trunking

    Posted Jul 15, 2019 04:10 AM

    @parnassus & @Matthew_Fern
    thank you so much for your feedback on this topic!

    I've got one more question about this... of course I've done some research to find more information and found the following link: http://j-netstuff.blogspot.com/2015/06/distributed-trunking-in-hp-procurve.html?m=1
    On that page there's the following entry, which gives me pause.
    Quote:
    "Switch 1 and Switch 2 must be of the same model and running the exactly same software version for the ISC to work. The ISC could be a single link, or an aggregated link as shown in the example figure above (Trk1)."

    If this is true, could it be that using different models in the respective stacks means the DT feature is not supported?

     


    @Matthew_Fern wrote:

    Distributed trunking has actually been around for quite a while — it dates back to the old ProCurve 3500 series and K.14 software! It just doesn't come up in discussions all that often as it doesn't seem to be as widely used as backplane stacking and VSF these days.


    But what could be the reason that it's not used as often as physical stacking or VSF? The advantage that no total outage occurs during maintenance, etc., is, in my opinion, worth considering! Frankly, I personally don't see the extra configuration as a disadvantage or a significant additional expense.



  • 9.  RE: Aruba 3810M -- Backplane Stacking and Distributed Trunking

    Posted Aug 01, 2019 07:51 AM

    ... digging a little bit deeper into DT, some more questions came up :)

     

    Is it true that DT trunks do not participate in STP (a BPDU filter would be enabled and suppress BPDUs)? If so, how does loop prevention work?



  • 10.  RE: Aruba 3810M -- Backplane Stacking and Distributed Trunking

    MVP GURU
    Posted Jul 10, 2019 01:18 PM

    @#danW wrote: But what I don't understand is the statement: "this can be done as long as the two stacks are both running the same software version."
    What does this mean in the case of a software change/update on one of the two stacks? For a certain time the software versions would surely differ.

    I think there is a tolerance: the switch pair must run the same software branch (e.g., for two 5400R zl2 switches, say KB.16.08); they can continue to be part of a DT as long as their respective running versions stay strictly within that branch (i.e., any build within 16.08).

    Probably the worst-case scenario is when a switch (or an entire stack, as in your scenario) belonging to the DT pair is upgraded to a newer branch (e.g., from 16.08 to 16.09).

     


    @#danW wrote: Furthermore, I have added another graphic in which, in addition to the server, a switch stack (model = 2930M) is connected to the two DTs as well. Would it be correct that the configuration on the server and on the switch-stack side would "only" be a normal LAG? Or is there anything about DT that I have to consider on those devices as well?

    Correct, peers see the DT pair as a single entity, so normal LACP port trunks are enough to serve uplinks to the DT pair.

    Instead of using just a two-link LACP LAG on the server, a four-link LACP LAG would be better deployed between it and the DT pair. The same could be said for Stack C, but, it being a two-member stack, that implies you should set up an eight-link LACP LAG from it to the DT pair (to cover every possible path and survive any combination of physical link and DT switch failure).

     


    @#danW wrote: In general, what is your personal experience or opinion on Distributed Trunking?

    Regarding: So when they are stacked together, you can just use aggregation (LAG) which is much simpler than using Distributed Trunking.


    I think that, with that statement, @Matthew_Fern wanted to point out that setting up (and mantaining) a DT pair to serve peers is more complex than having a backplane stack serving the very same peers, in both cases peers use LACP LAGs...but at DT level the setup is more complex (a Stack would be simpler because there is no dt-lacp but just lacp to setup peer facing).