Network Management


Correct cabling for trk/LACP

  • 1.  Correct cabling for trk/LACP

    Posted Nov 12, 2021 10:57 AM
    Hello,

    I have a short question about our network configuration.

    We have a network with a few 2920/2930 switches and two 5406s.
    All switches are connected with 10G fiber to both 5406s.

    The connection is configured as a trk (link aggregation), but not as an aggregation of 2 x fiber to a single 5406.
    Instead, one fiber goes to the 5406 at DC 1 and one fiber to the 5406 at DC 2.

    The two 5406s are connected to each other with 2 x fiber as a trk - that should be correct.

    My question is: is it correct to build a trk across two different switches like this?
    I think I would have configured it as two separate uplinks, one to each 5406.

    Thanks for your time


  • 2.  RE: Correct cabling for trk/LACP

    MVP GURU
    Posted Nov 12, 2021 11:23 AM
    No, that is not the proper way to connect a peer switch (with a LAG = Port Trunk, with or without LACP) to your two Aruba 5406s, unless those two 5406s are configured either as a VSF stack or as a Distributed Trunking pair (DT pair).

    A Port Trunk (LAG) requires that its links co-terminate on either (a) a single physical peer (switch/host) or (b) a single logical entity (a VSF stack, a DT pair, an IRF fabric, a VSX pair, a backplane stack of switches, etc.); otherwise some of its links will not come up in the LAG, because the aggregation will not fully form as you expect.
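    As a minimal sketch (the port numbers and trunk ID here are hypothetical, and the exact syntax can vary by ArubaOS-Switch release), an access switch would aggregate both of its uplinks into one trunk only when the far end is a single logical entity:

    ```
    ! Access switch: both 10G uplinks in one LACP trunk.
    ! Valid only because the far ends (the two 5406s) act as
    ! one logical entity, e.g. a DT pair or a VSF stack.
    trunk 49-50 trk1 lacp
    ```

    With the two 5406s left as independent switches, the same trunk would only bring up the links toward one of them.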





  • 3.  RE: Correct cabling for trk/LACP

    Posted Jan 25, 2022 11:30 AM
    Hello,

    Thanks for your reply - I'm a little late with my response.

    You are right - the trunks are configured as DT trunks.
    After a bit of investigation, I have the following question:

    The Access Switches are DT Trunk configured, the 5406s have a stable ISC link trunk in sync, but no keep-alive is established.

    If I run "show distributed-trunking statistics peer-keepalive", I get only:

    " DT peer-keepalive Status : Up
      DT peer-keepalive Statistics
      --------------------------------
      Tx Count : 0
      Rx Count : 0"


    and with "show distributed-trunking peer-keepalive", this:

    "Distributed Trunking peer-keepalive parameters

    Destination :

    VLAN : 0
    UDP Port : 1024
    Interval(ms) : 1000
    Timeout(sec) : 5"
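
    For comparison, I assume a fully configured peer-keepalive would look something like this (the IP address and VLAN are made up, and the exact syntax may vary by firmware release):

    ```
    ! On each 5406, pointing at the peer's keep-alive IP address
    distributed-trunking peer-keepalive destination 10.0.0.2
    distributed-trunking peer-keepalive vlan 100
    ```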

    What is the result of that (mis)configuration?

    It seems like everything is running well - the only thing is that I can't get 10G through my network; 1.5-2G per single connection is the maximum. If I run a second connection between the same source and target, I can get 4-4.5G, that's it. But surely I should get up to 20G?

    Is there any other failure caused by the missing keep-alive?

    I attached a quick drawing of the setup. Physically, each 5406 is at a different site (600 meters apart) - the access switches are connected "locally" with multimode 10G SFP+ and remotely with single-mode OS2 10G SFP+.


    Thanks





  • 4.  RE: Correct cabling for trk/LACP

    MVP GURU
    Posted Jan 26, 2022 05:45 PM
    Hi, you wrote: "The Access Switches are DT Trunk configured", but, technically speaking, the access switches (all peers uplinked to your HP 5406 DT pair) should use a standard (non-DT) port trunking approach. In other words, the access switches should use normal port trunks (type trunk or type lacp), while your DT pair at the top of your network diagram should be configured to use DT trunks (type dt-trunk or dt-lacp). The types of link aggregation must match on both ends: "LACP" (referenced as lacp) on both, or "Static" (referenced as trunk) on both.

    Example: dt-lacp on the 5406 DT pair side and lacp on the access switch side, OR dt-trunk on the DT pair side and trunk on the access switch side... not a mix.
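    Sketched out (the port numbers and trunk IDs are hypothetical, and the exact syntax may vary by software release), a matching LACP pair would look like:

    ```
    ! On each 5406 of the DT pair - distributed LACP trunk
    trunk A1 trk10 dt-lacp

    ! On the access switch - plain LACP trunk on its two uplinks
    trunk 49-50 trk1 lacp
    ```

    The static equivalent would be dt-trunk on the DT pair with trunk on the access switch.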

    You should post sanitized configurations of your DT Pair and Access Switch so we can be of help.

    In the meantime, have a look at this guide.


    ------------------------------
    Davide Poletto
    ------------------------------