Wired Intelligent Edge


Bring performance and reliability to your network with the HPE Aruba Networking Core, Aggregation, and Access layer switches. Discuss the latest features and functionality of your switching devices, and find ways to improve security across your network to bring together a mobile-first solution.

Best Stacking Layout

  • 1.  Best Stacking Layout

    Posted May 30, 2023 01:03 PM

    I've tried to get an answer from my sales engineer and just got the runaround/a non-answer.

    We have a bunch of CX6200F switches coming in to replace end of life 2920s.

    Currently, our 2920s are all stacked in pairs with dual stacking cables between them, and each switch has a 10G fiber uplink to the core trunked in LACP for the stack.

    The issue is that we have a few closets with 6-10 switches and therefore 3-5 switch stacks in some closets.

    I'm trying to determine if there is a good reason that the switches were paired. Or if I should be configuring one big stack per closet with each switch connected to the next one and then the bottom switch connecting back to the first. Or another option, instead of pairs or a large stack, using stacks of 3 where each switch is connected to the other 2 switches in the stack.
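
    For reference, the "one big stack per closet" ring option could be sketched as an AOS-CX VSF config along these lines (the member numbers, VSF link ports, and the `type` model string are all placeholders, not taken from any specific deployment):

    ```
    ! Hypothetical 3-member VSF ring (all values are placeholders)
    vsf member 1
        type <6200F-model>
        link 1 1/1/25
        link 2 1/1/26
    vsf member 2
        type <6200F-model>
        link 1 2/1/25
        link 2 2/1/26
    vsf member 3
        type <6200F-model>
        link 1 3/1/25
        link 2 3/1/26
    ```

    Each member has two VSF links, one to the previous member and one to the next, so the last member's second link back to the first member closes the ring.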

    Since I'm replacing half our switches it seemed like the right time to look into switching up how our stacking is configured.


    Thanks



  • 2.  RE: Best Stacking Layout

    EMPLOYEE
    Posted May 30, 2023 01:40 PM

    Aruba 6200Ms (required for HPE GL projects, unlike 6200Fs, which do not have redundant PSUs/fan trays) are covered by several best practices in the HPE Site Planning and Electrical Safety Standards (SOP):
    1) The intake side of the chassis must be flush-mounted to the front EIA rails, or any gap between the front of the rack and the fan/PSU face must be mitigated, to prevent the chassis from ingesting hot rack-internal air.
    2) If chassis with power-to-port airflow are deployed, they should be deployed in pairs, with an HPE Blank Filler Panel or a 1U brushed panel between them and the next IT chassis, so that the power cords can be run either through the punched-out Filler Panel ports or through the brushed area.
    3) HPE 1U Universal 4-post Rack Mounting Kits should be used to mount the chassis.
        a) We have seen that the newer 6300Ms and 4-post devices allow the chassis to be flush-mounted to the front EIA rails.
        b) Older 6300Ms and 4-post devices require the use of the 4-post Rack Mounting Kit AND the Duct Kit which assures that only cold aisle air is ingested by the PSUs/fan trays.
        c) If the Duct Kit is required, then in order to preserve access to the hot-swappable PSUs and fan trays, it is advised to mount the pair of 6300Ms with the removable top side of the ducts discarded and the upper 6300M's duct mounted upside down; this provides a 2RU space for one's hand to reach the active chassis' PSUs/fans for removal and replacement.
    See examples attached.



    ------------------------------
    Robert Hughes, PE, RITP
    DCTS
    HPE Services
    ------------------------------



  • 3.  RE: Best Stacking Layout

    Posted May 30, 2023 01:48 PM

    They are CX6200F switches.

    I'm just trying to figure out what the best practice for connecting the stacks is.




  • 4.  RE: Best Stacking Layout

    MVP GURU
    Posted May 30, 2023 05:31 PM

    So the ultimate question is: is it better to deal with a number of small VSF stacks or with one large VSF stack? Part of the answer is directly influenced by the number of uplinks you're able to deploy between a large stack (or between small stacks) and your Core. Keep in mind that, for a large VSF stack, the Conductor and Standby switches should be placed in the middle part of the ring, facing the Core switch and its uplinks; if, at worst, only two uplinks are available, they should depart from those two switches to reach the Core.
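
    As a rough illustration of that uplink placement (the interface numbers, LAG ID, and standby member are assumptions, not taken from this thread), a core-facing LACP uplink on a large VSF stack might take one member port from the Conductor and one from the Standby:

    ```
    ! Hypothetical core uplink LAG; ports and IDs are placeholders
    ! Assumes member 1 is the Conductor and member 5 the Standby
    vsf secondary-member 5
    interface lag 1
        no shutdown
        description uplink-to-core
        lacp mode active
    interface 1/1/27
        no shutdown
        lag 1
    interface 5/1/27
        no shutdown
        lag 1
    ```

    Spreading the LAG member ports across the Conductor and Standby keeps one core uplink alive if either of those members fails.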