Network Management


Keep an informative eye on your network with HPE Aruba Networking network management solutions

Trying to understand LACP with multiple switches and servers

  • 1.  Trying to understand LACP with multiple switches and servers

    Posted Mar 08, 2021 10:11 AM
    Hi everyone,

    I have two Aruba 2930F switches connected to 3 servers, a SAN, firewalls and a Netgear NAS. I currently have everything set up in VLANs: one for data, one for the internal LAN and one for iSCSI. Each server has two 4x1Gb network cards, and each card goes to one of the two switches for redundancy, for example:

    Server 1, NIC 1, Port 1 - 4 -> switch1
    Server 1, NIC 2, Port 1 - 4 -> switch2

    On the servers I use Citrix Hypervisor (XenServer), and the VMs connect across four-port bonded network interfaces. For example, VLAN 20 (the data network, 10.0.0.1/24) is set up as an active/active bond (not LACP), with two of those cables on NIC 1 and two on NIC 2. That just means that if a NIC fails the bond still works and nothing goes down.
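    If it helps, the XenServer CLI equivalent of that kind of active/active bond looks roughly like this (the UUIDs are placeholders and I'm going from memory, so the exact parameters might be slightly off):

    xe network-create name-label="bond-data"
    xe bond-create network-uuid=<network-uuid> pif-uuids=<pif1>,<pif2>,<pif3>,<pif4> mode=balance-slb

    The first command creates the bonded network, the second bonds the four PIFs (two per NIC) in active/active (balance-slb) mode.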

    I've been seeing a lot of read/write wait times on our servers and am starting to think that the network is being overloaded during peak times, so I was considering changing the bonds to LACP (a rough sketch of what I mean follows the list below). I just want to check a few things with everyone:

    1. Can I do LACP mixed across two switches?
    2. Will I retain my DR setup?
    3. Will it actually speed up my network, or is there a potential bottleneck I'm missing?
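
    For what it's worth, my rough understanding is that the XenServer side of the change would just be recreating the bonds in LACP mode, something like the sketch below (UUIDs are placeholders, and this is exactly the part I'd like sanity-checked); it's the switch side I'm unsure about.

    xe bond-destroy uuid=<existing-bond-uuid>
    xe bond-create network-uuid=<network-uuid> pif-uuids=<pif1>,<pif2>,<pif3>,<pif4> mode=lacp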

    I'm a reasonable tech, pretty good at most things, but networks aren't my strong point, so any advice would be welcome. Feel free to message me for further details.

    Thanks everyone.

    ------------------------------
    Nathan Platt
    ------------------------------


  • 2.  RE: Trying to understand LACP with multiple switches and servers

    MVP GURU
    Posted Mar 08, 2021 12:29 PM
    Hello Nathan, it would be better to ask the Airheads Community moderators to move your post into the Wired Intelligent Edge space.

    Back to your scenario and questions:
    1. Can I do LACP mixed across two switches? No, you can't. Link aggregation (static or LACP) requires all of its member links to terminate on a single switch, such as a standalone Aruba 2930F, or on a single "logical entity", such as two (or more) Aruba 2930F switches deployed in VSF, also known as a front-plane stack (see the sketch after this list).
    2. Will I retain my DR setup? To me that's not a totally clear question.
    3. Will it actually speed up my network, or is there a potential bottleneck I'm missing? Generally, link aggregation (LACP or static) helps distribute egress traffic over two or more (generally up to 8) physical links. That distribution is only efficient when there is enough variability in the source and destination parameters (IPs/ports) used for hashing. If you have only a server and a NAS, and they only talk to each other, the traffic from server to NAS and from NAS to server can be highly polarized, hitting one specific link out of the many forming the link aggregation (port trunk); a single iSCSI session between them, for example, hashes onto one member link and will never exceed that link's 1 Gbps. This has been discussed many times: you get "aggregated" bandwidth, which means 4x1Gbps does not give a single flow 4 Gbps of throughput; rather, multiple flows can use up to four ports, potentially saturating each of them at 1 Gbps.
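
    If you do want a single LACP trunk spanning both switches, the two 2930F units first need to be stacked with VSF so they behave as one logical switch. A very rough outline is below; port numbers are placeholders and the exact commands depend on the ArubaOS-Switch release, so please verify against the 2930F VSF and port-trunking guides before changing anything.

    On each 2930F, define a VSF link over the SFP+ uplink ports and enable VSF; each switch reboots and, once joined, one becomes member 1 and the other member 2:

    switch(config)# vsf member 1 link 1 27-28
    switch(config)# vsf enable domain 1

    After stacking, ports are addressed as member/port, so an LACP trunk can span both members and carry the tagged VLAN:

    switch(config)# trunk 1/1-1/2,2/1-2/2 trk1 lacp
    switch(config)# vlan 20 tagged trk1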


    ------------------------------
    Davide Poletto
    ------------------------------



  • 3.  RE: Trying to understand LACP with multiple switches and servers

    Posted Mar 08, 2021 06:39 PM
    Thanks for helping me out. First, how do I ask the mods to move it? Is there a tag I use?

    Can I try explaining the situation a little better and see if you can help? The three servers run Citrix Hypervisor and have around 20 VMs across all three machines. Each machine has two NICs (4 x 1Gb each) connected to the switches (for example, NIC 1 goes to switch 1 and NIC 2 to switch 2).

    There are 3 VLANs:
    • VLAN 1 - 10.0.1.x
    • VLAN 10 - 10.0.2.x
    • VLAN 20 - 10.0.0.x

    VLAN 1 is the management network (the internal servers, monitoring, physical server addresses, etc.)
    VLAN 10 is the iSCSI network (no jumbo frames)
    VLAN 20 is the main network (everything on 10.0.0.x is internet-facing, or could be)
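
    On the switch side the VLANs themselves are nothing special, roughly like this (names and port assignments here are placeholders; the real config just tags or untags the relevant server, SAN and uplink ports into each VLAN):

    vlan 1
       name "Management"
       untagged <management ports>
    vlan 10
       name "iSCSI"
       tagged <server and SAN ports>
    vlan 20
       name "Data"
       tagged <server and uplink ports>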

    The main problem I'm seeing is that all my VMs show numbers like this (from hdparm -tT):

    /dev/xvda:
    Timing cached reads: 11470 MB in 1.99 seconds = 5764.43 MB/sec (up to 9000 MB/sec)
    Timing buffered disk reads: 16 MB in 3.01 seconds = 5.31 MB/sec (up to 40 MB/sec)
    (most of this is caused by high IO wait time)

    If you look at the switches at this time (23:36 UK time), the ports are only showing around 12-15% utilisation.

    I have an MSA 2050 SAN that connects to those switches with 4 ports to one switch and 4 ports to the other (all 1Gb links), and no jumbo frames.

    I don't understand how I'm getting such low disk speeds when the two switches aren't even working that hard.

    Where can I start?

    ------------------------------
    Nathan Platt
    ------------------------------