Hello Giles,
Sorry to return to this thread, but we're now trying hard to verify some traffic performance/patterns strictly related to our VSX LAG implementation.
I think my last post on this thread made it clear that we are exactly in the situation where the hashing algorithm applies:
@vincent.giles wrote: The hashing algo would have an impact if you would have 4 links in VSX LAG, 2 per switch, downstream to the server.
Indeed, with regard to the scenario shown months ago, we now have strictly 2 or 3 ports per VSX member in each multi-chassis LAG (so only VSX LAGs with 4 or 6 ports). Basically, lag2-lag7 and lag14-lag15 (lag8-lag13 are going to be dismissed) connect our source servers (backup clients); these hosts transmit large amounts of data daily to one destination server (the backup server), which is connected via lag1.

Given the scenario above, we noticed that, as you described, VSX keeps traffic local on each node and tries to minimize ISL usage: incoming traffic on the VSX LAGs above flows to lag1 egressing through interfaces distributed across both VSX nodes, rather than preferring interfaces belonging to just one VSX member. That is good.
We do have an issue, though: ArubaOS-CX doesn't currently provide a command [*] (as ArubaOS-Switch or Comware do) to determine how egress traffic leaving the VSX will be distributed across the member interfaces of lag1 when the destination is the backup server host. We need this to understand whether, given the IP addressing of our source/destination hosts, the concurrent egress traffic leaving the VSX via lag1 will be well balanced across all four of its interfaces (1/1/1 and 1/1/2 on each VSX member).
The question arose because we're seeing (we're still investigating) that egress traffic leaving lag1, i.e. traffic towards the backup server, somewhat prefers a particular pattern: 1/1/2 of VSX 1 and 1/1/2 of VSX 2 are heavily used, while 1/1/1 of VSX 1 and 1/1/1 of VSX 2 are lightly used. In other words, outgoing traffic looks a little unbalanced instead of being equally spread across 1/1/1 and 1/1/2 of VSX 1 and 1/1/1 and 1/1/2 of VSX 2.
This happens with at least 10 backup clients concurrently sending data on fixed TCP ports, so the variability of the source IP addresses should grant us a good distribution (at least, that's what we expected).
It's entirely possible that we've hit an undesired corner case of hashing polarization (Layer 3 hashing based on our SRC/DST IP addresses produces a utilization pattern where 1/1/2 of VSX 1 and 1/1/2 of VSX 2 are favored on VSX lag1)... but we can't prove it without the missing command cited above.
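Just to illustrate what we mean by polarization (a purely made-up example, since the actual hash used by the AOS-CX ASIC isn't documented): if the hash boiled down to an XOR of the last octets of the SRC/DST addresses reduced modulo the number of member links, clients whose addresses vary only in bits the modulo discards would all select the same link:

# Hypothetical illustration only: the real AOS-CX hash is proprietary.
# Assume hash = (last octet of SRC) XOR (last octet of DST), mod 4 links.
dst = 10                          # made-up backup-server last octet
clients = [11, 15, 19, 23, 27]    # made-up client last octets, stride 4
for src in clients:
    print(f"src .{src} -> member link {(src ^ dst) % 4}")
# All five clients map to member link 1: their last octets share the same
# low two bits, which is exactly what a modulo-4 reduction keeps.

So source-address variability alone doesn't guarantee balance if it lives in bits the hash folds away.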
So, long story short: how can we simulate/calculate how egress traffic will be distributed across VSX lag1's member interfaces, given that we know exactly the SRC/DST addresses of all the data streams involved? (A rough sketch of the kind of what-if computation we have in mind is at the bottom of this post.)
[*] like the following CLI command available on ArubaOS-Switch:
show trunks load-balance interface <TRUNK-ID> mac <SRC-MAC-ADDR> <DEST-MAC-ADDR> [ ip <SRC-IP-ADDR> <DEST-IP-ADDR> [<SRC-TCP/UDP-PORT> <DEST-TCP/UDP-PORT>] ] inbound-port <PORT-NUM> ether-type <ETHER-TYPE> inbound-vlan <VLAN-ID>
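For reference, here is the kind of what-if calculation we'd like the switch itself to expose, sketched in Python. The hash below (CRC32 over the packed SRC/DST addresses) is purely our own placeholder, not the real AOS-CX ASIC hash, and the member names and IP addresses are made up; only the shape of the calculation matters:

# Sketch of the missing "what-if": predict which lag1 member a given
# SRC/DST IP pair would select. CRC32 is a stand-in hash of our own,
# NOT the undocumented AOS-CX ASIC hash.
import socket
import zlib
from collections import Counter

LAG1_MEMBERS = ["1/1/1 (VSX 1)", "1/1/2 (VSX 1)",
                "1/1/1 (VSX 2)", "1/1/2 (VSX 2)"]

def member_for(src_ip, dst_ip):
    key = socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
    return LAG1_MEMBERS[zlib.crc32(key) % len(LAG1_MEMBERS)]

backup_server = "10.0.0.10"                       # made-up addressing
clients = [f"10.0.1.{i}" for i in range(11, 21)]  # 10 made-up clients

tally = Counter(member_for(src, backup_server) for src in clients)
for member in LAG1_MEMBERS:
    print(f"{member}: {tally[member]} flow(s)")

Note that, given the local-forwarding behaviour you described, the real selection is presumably made only among the two member links local to the node where the traffic ingressed, so a per-node modulo-2 model may be closer to reality; we can't confirm that either without the missing command or documentation of the hash.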