Technical Webinar - LACP and distributed LACP – ArubaOS Switch
08-28-2018 07:12 AM - edited 11-05-2018 07:43 AM
Adding this post here to share the content of the Airheads Technical Webinar we delivered today, August 28th, on LACP and distributed LACP – ArubaOS Switch. For those who could not attend the session, please find the materials below:
- Webinar Recording:
- Webinar Slides:
Please note that you can find additional on-demand technical webinars on our Airheads webinar repository page.
The webinar calendar through December 2018 is also available here.
Please feel free to leave any additional comments and questions you may have below. We will make sure to answer them as soon as possible.
Re: Technical Webinar- LACP and distributed LACP – ArubaOS Switch
09-27-2018 06:41 AM
LACP and distributed LACP – ArubaOS Switch Webinar Q&As
Q1: Does LACP support full duplex only, or both full and half duplex? The last slide told us full duplex only.
LACP requires the ports to be in full-duplex mode.
Q2: Which AOS version are you referencing in this training?
The switches in the demo were running WC.16.05.0009.
Q3: So, if I have two servers with 10G NICs and two switches with LACP across four 1G ports, is the maximum speed between the servers 1G?
A single flow is hashed onto one physical link, so any one flow is limited to 1G. With multiple flows distributed across all four links in the trunk, you can reach an aggregate of 4G.
Q4: Is the information in this webinar also true for Aruba wireless controllers' port aggregation?
Yes, LACP toward Aruba wireless controllers is supported.
Q5: Is the load-balancing type configurable per aggregation, or only globally?
It is a global configuration; the setting applies to all trunks on the switch.
Q6: Is it possible to assign a name to a trunk?
No, you cannot assign a name to an LACP trunk; trunks are referenced by their trunk ID (e.g. trk1).
Q7: How can you check which link is used for certain traffic, i.e. which link was selected by the hash algorithm? And how do you switch the algorithm from the default L3 mode to L4 mode?
The show trunks load-balance interface command displays the port out of which traffic with the specified source and destination addresses will be forwarded. For example, here a ping was initiated from Core-2:
Core1(config)# show trunks load-balance interface trk1 mac f40343-0f6260 f40343-0f8260 ip 10.17.172.11 10.17.172.12 inbound-port 9
Traffic in this flow will be forwarded out port 9 based on the configured Layer 3 load balancing
To choose L3-based or L4-based mode:
Core1(config)# trunk-load-balance ?
L3-based Load balance based on IP Layer 3 information in packets.
L4-based Load balance based on Layer 4 information in packets.
Core1(config)# trunk-load-balance l4-based
Q8: Must "trunk-load-balance" be consistent on both peers?
It is not mandatory, since the setting only affects egress traffic on each switch.
Q9: A scenario to use with VMware, please. Most documentation shows no LACP, just TRUNK. https://kb.vmware.com/s/article/1004048
If the VMware side doesn't support LACP, there is no point in configuring LACP on the HPE switch, since no LACPDU messages will be exchanged between them; the best approach is to set up both ends as static trunks. Configuring LACP on the switch side and a static trunk on the server side will still work, but the LACP protocol messages will simply not be exchanged.
VMware hosts can also be deployed in a DT switch topology; the DT pair simply treats the host as another downstream device running LACP or a static trunk.
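As a sketch of the two options above (the port range 1-4 and trunk ID trk1 are illustrative, not from the webinar):
Static trunk, no LACPDUs exchanged (matches basic vSwitch NIC teaming):
Aruba(config)# trunk 1-4 trk1 trunk
LACP trunk, only if the VMware side supports LACP (e.g. a vSphere Distributed Switch):
Aruba(config)# trunk 1-4 trk1 lacp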
Q10: If I transfer a VM from one server to another, with a large amount of data, I expect to see 4G throughput, but I only see 800 Mbps, so I think only one port of the trunk is being used; I expected 4G.
A single flow is hashed onto one physical link, so one transfer stream can be limited to a single port's speed. Please open a support case so this can be investigated.
Q11: Could I have the information on VMware connectivity too?
VMware is supported; it depends on the VMware side's support for LACP or NIC bonding.
Q12: The server has a 10G port, so is the maximum speed greater than the trunk speed?
No, not if the trunk's aggregate bandwidth matches the server's 10G port.
Q13: What bandwidth requirements are there for the ISC and keepalive links? Can they be lower than the data links between the DT pair?
It is recommended that the ISC link match the data-link speed, so that during a DT link failure the ISC can carry the same traffic bandwidth. The keepalive link does not need to match, since it is used only for keepalive messages between the DT switches.
Q14: What about STP on DT links to a downstream switch?
STP packets are ignored on the DT switches.
Q15: Is Distributed Trunking possible between two pairs of switches, e.g. from two core switches to two access switches, with full-mesh physical links present?
Yes. The six-pack topology (a full mesh between two DT pairs) is supported.
Q16: What is the reason for developing this mechanism (DT-LACP) compared with stacking technology (backplane stacking or VSF)? Is it for switches where stacking is not possible?
They are different technologies. Both provide node-level resiliency, but VSF has a single control plane, is easier to manage, and can offer higher port density (depending on the number of switches in the stack).
Q17: What does DT-LACP support mean by switch model? Should the switches be exactly the same model, or just the same series (5406R vs 5412R)?
The DT switches should run the same software version and be the same model (e.g. both 5406R).
Q18: Newer hardware supports VSF (starting with 16.3.x). What is the recommendation: use VSF instead of DT?
VSF is generally recommended, since it has a single control plane and offers better manageability and port density.
Q19: Can DT be applied to a stacked switch?
DT cannot be applied to a VSF stack, but it should work with backplane stacking.
Q20: Can the ISC link itself be a trunk, or does it need to be an interface separate from the data interlink between the pair?
Yes, the ISC can be either a trunk or a standalone port.
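For example, a trunked ISC might look like this on one member of the pair (the ports 11-12 and trunk ID trk2 are illustrative; verify the exact DT commands against the management guide for your software release):
Core1(config)# trunk 11-12 trk2 trunk
Core1(config)# switch-interconnect trk2
The peer switch carries a matching configuration for its side of the ISC.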
Q21: What is the difference between the keywords "dt-lacp" and "dt-trunk" in the config line "trunk A13 trk1 dt-lacp"?
dt-lacp - The port belongs to a trunk group using the Distributed Trunking protocol, which can split trunks across two switches and exchanges LACP packets with the downstream device.
dt-trunk - The port belongs to a trunk group using the Distributed Trunking protocol, which splits trunks across two switches, for downstream devices that cannot handle LACP packets.
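Putting this together, a minimal DT-LACP sketch on one member of the pair, reusing port A13 and trk1 from the question (the ISC and keepalive details are illustrative assumptions, not from the webinar; check the syntax for your release):
Core1(config)# trunk 11-12 trk2 trunk                         (ISC, here itself a trunk)
Core1(config)# switch-interconnect trk2
Core1(config)# distributed-trunking peer-keepalive vlan 900   (dedicated keepalive VLAN)
Core1(config)# trunk A13 trk1 dt-lacp                         (downstream device speaks LACP)
Use dt-trunk in the last line instead of dt-lacp when the downstream device cannot run LACP.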