Hi Arne,
No, csw-bu-r02 and csw-bu-r03 are the Core VSX Pair in Room BU. There is no blocking/STP on the ISL link.
The red lines (STP blocked) are to the Server and Client Distribution.
But my main problem/question is the BPDU starvation on csw-bu-r02, seen on lag53, which connects to the root bridge csw-rz.
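If it helps, these are the commands I plan to run on csw-bu-r02 the next time it happens, to capture the CIST port role/state and the BPDU counters for lag53 (standard AOS-CX show commands; as far as I know the detail view lists per-port BPDU counters, but the exact output format may differ between releases):

csw-bu-r02# show spanning-tree
csw-bu-r02# show spanning-tree detail

Watching the Rx BPDU counter for lag53 over a few hello intervals should show whether the BPDUs really stop arriving or whether they arrive but are processed late.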
Hi Thomas,
I have configured a different MSTP Region for every layer (Core, Server, Client and WAN).
The core knows all VLANs, and the other layers only the relevant VLANs.
The default gateway for all VLANs is the primary/active/master Firewall Cluster member (in Room RZ).
I'm using MCLAG wherever possible, but I physically created a loop by cross-connecting the CDSW and SDSW. The main reason is that I want to avoid Layer 2 traffic traversing the core: traffic between clients should stay in the Client Layer, and the same goes for server-to-server traffic between the different rooms.
If it becomes possible with Aruba at some point, I would like to convert to MPLS, move the VLAN default gateways from the firewall to the Distribution Layer, and run Layer 3 only on the core.
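For completeness, the per-layer region config is essentially the sketch below on every switch of a layer (the region name, revision and VLAN range here are placeholders; on AOS-CX the bridge priority is configured in steps of 4096, so 1 corresponds to the 4096 on the root pair):

spanning-tree
spanning-tree config-name CORE
spanning-tree config-revision 1
spanning-tree priority 1
spanning-tree instance 1 vlan 100-199

Only the config-name differs between the layers (CORE/SERVER/CLIENT/WAN); together with the revision and the instance-to-VLAN mapping, that is what splits the network into separate MSTP regions.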
------------------------------
Robert Großmann
------------------------------
Original Message:
Sent: Jun 02, 2022 04:37 AM
From: Thomas Siegenthaler
Subject: Aruba CX 8325 - Spanning Tree - Starvation
Hi Robert
sorry for the delayed response.
From the configuration I cannot see anything that is obviously wrong. However, looking at the drawing, there are a couple of questions:
- Did you only write it on the drawing or did you indeed configure multiple MSTP regions throughout your network?
- It seems as if you have multiple, redundant paths from the client access to the core (i.e. the distribution layer is connected both to the core and among itself). I cannot identify the reasoning behind this, as in modern network architectures one tries to avoid loops and instead works with MLAGs. This is perfectly feasible in your network. Can you explain this in a little more detail?
Given the loops, the need for STP to block the redundant links, and not knowing the running state of the spanning tree, this could be a source of problems that result in BPDU starvation.
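To capture that running state, something along these lines on each switch would already help (standard AOS-CX show commands; exact output depends on the release):

switch# show spanning-tree mst-config
switch# show spanning-tree
switch# show spanning-tree mst

That would show whether the region boundaries are where you expect them, which bridge each switch sees as root, and which ports are actually in a blocking/discarding state.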
Regards,
Thomas
------------------------------
Thomas Siegenthaler
Original Message:
Sent: Jun 01, 2022 03:42 AM
From: Robert Großmann
Subject: Aruba CX 8325 - Spanning Tree - Starvation
Hi Thomas,
Core VSX Pair 1 (csw-rz-r08 and csw-rz-r09) is the STP Root Bridge with priority 4096
Core VSX Pair 2 (csw-bu-r02 and csw-bu-r03) is the "backup" STP Root Bridge with priority 8192.
The config on both pairs is consistent according to "show vsx config-consistency" and a manual comparison with WinMerge.
Even a manual comparison between the two VSX pairs with WinMerge looks good.
You can find the complete network diagram as a Visio export --> Netzwerk-Redesign.png
All configs and the output of "show vsx config-consistency" from both primary members are attached to this post as text files.
All Core Switches (CSW) are running GL.10.09.1000 --> Software-Versionen.txt
The STP starvation phenomenon can also be seen on the other Aruba CX 8325 and 8360 switches, at different dates and times.
It cannot be seen on the new client access switches (casw) with Aruba CX 6200F.
It would be helpful to troubleshoot this behaviour, but it would also be very helpful if it were possible to adjust the timers or thresholds for STP/BPDUs.
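For reference, the only STP timer knobs I am aware of on AOS-CX are the global ones below (shown with what I believe are the defaults); I did not find a dedicated starvation threshold, and if I understand RSTP/MSTP correctly, the hello-time that matters for the starvation on lag53 is the one configured on the sending side (csw-rz):

spanning-tree hello-time 2
spanning-tree max-age 20
spanning-tree forward-delay 15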
Thanks and kind regards
Robert
------------------------------
Robert Großmann
Original Message:
Sent: Jun 01, 2022 01:35 AM
From: Thomas Siegenthaler
Subject: Aruba CX 8325 - Spanning Tree - Starvation
Hi Robert
A couple of questions:
- which bridge is your root bridge? Is it Core Switch VSX Pair 1? (Prio is 4096; MAC 0a000c-010100)
- If the above question is a "yes" ... did you check for any VSX inconsistency in the cluster, especially regarding STP config?
- Can you share the entire config of both VSX clusters, e.g. all 4 switches?
- what software version are you running?
Regards,
Thomas
------------------------------
Thomas Siegenthaler
Original Message:
Sent: May 31, 2022 11:24 AM
From: Robert Großmann
Subject: Aruba CX 8325 - Spanning Tree - Starvation
Hi guys,
does anyone have an idea how to troubleshoot/solve this?
csw-bu-r02# sh logg -r | i arved
2022-05-29T09:57:10.414727+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-05-21T09:51:39.414683+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-05-16T21:58:06.414560+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-05-06T11:08:43.414783+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-04-17T06:50:45.414692+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-04-13T12:07:40.414564+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-04-11T02:19:27.414569+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-04-10T14:01:24.414679+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-04-08T14:33:15.414594+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-04-05T20:22:28.414609+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-04-05T10:58:41.414564+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-04-01T08:44:08.414653+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-03-29T05:09:33.414615+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
2022-03-27T15:39:52.414685+02:00 csw-bu-r02 hpe-mstpd[2132]: Event|2008|LOG_INFO|AMM|1/1|CIST starved for a BPDU Rx on port lag53 from 4096:0a000c-010100
I did not find any information or CLI configuration for STP starvation / BPDU throttling on Aruba CX switches.
The primary member of Core Switch VSX Pair 2 in Room BU receives the message above.
It is connected via MCLAG53 to the Core Switch VSX Pair 1 in Room RZ.
The connection is "dual-homed", so every member is connected with every member on the other side / in the other room.
All of these links are 40G (QSFP+ LR4 transceivers over single-mode fibre with around 300-500 m distance).
The ISL links are 100G, and the links to the Distribution Switches are also 100G. These links use QSFP28DAC1, QSFP28DAC3, and QSFP28SR4 (multimode MPO/MTP).
As this new network is not heavily used yet (it was built up in parallel to the old network), the links are not loaded.
I have no idea why the switch says it is missing STP BPDUs.
The two VSX Pairs are in the same MSTP Topology and the VSX Pair 1 in Room RZ is the Root Bridge with Priority 4096.
csw-bu-r02# sh int lag 53
Aggregate lag53 is up
 Admin state is up
 Description : csw-rz
 MAC Address : b8:d4:e7:de:f9:00
 Aggregated-interfaces : 1/1/53 1/1/54
 Aggregation-key : 53
 Aggregate mode : active
 Speed : 80000 Mb/s
 L3 Counters: Rx Disabled, Tx Disabled
 qos trust dscp
 VLAN Mode: native-tagged
 Native VLAN: 998
 Allowed VLAN List: all

 Statistic         RX                   TX                   Total
 ----------------  -------------------- -------------------- --------------------
 Packets           484985567            58161938             543147505
 Unicast           103755975            56151612             159907587
 Multicast         181339372            1699314              183038686
 Broadcast         199890292            311012               200201304
 Bytes             73006875428          17447356583          90454232011
 Jumbos            9511111              2648978              12160089
 Dropped           0                    0                    0
 Filtered          27048                2                    27050
 Pause Frames      0                    0                    0
 Errors            0                    0                    0
 CRC/FCS           0                    n/a                  0
 Collision         n/a                  0                    0
 Runts             0                    n/a                  0
 Giants            0                    n/a                  0

csw-bu-r02# sh int 1/1/53
Interface 1/1/53 is up
 Admin state is up
 Link state: up for 2 months (since Thu Mar 10 21:49:10 CET 2022)
 Link transitions: 1
 Description: csw-rz-r08_P_53
 Persona:
 Hardware: Ethernet, MAC Address: b8:d4:e7:de:f9:95
 MTU 9198
 Type QSFP+LR4
 Full-duplex
 qos trust dscp
 Speed 40000 Mb/s
 Auto-negotiation is off
 Flow-control: off
 Error-control: off
 Rate collection interval: 300 seconds

 Rate              RX                   TX                   Total (RX+TX)
 ----------------  -------------------- -------------------- --------------------
 Mbits / sec       0.05                 0.01                 0.06
 KPkts / sec       0.05                 0.00                 0.05
 Unicast           0.00                 0.00                 0.00
 Multicast         0.02                 0.00                 0.02
 Broadcast         0.03                 0.00                 0.03
 Utilization       0.00                 0.00                 0.00

 Statistic         RX                   TX                   Total
 ----------------  -------------------- -------------------- --------------------
 Packets           444731259            38878839             483610098
 Unicast           68160585             37339434             105500019
 Multicast         179411336            1228391              180639727
 Broadcast         197159410            311014               197470424
 Bytes             58978163297          14680671034          73658834331
 Jumbos            4026723              2535102              6561825
 Dropped           0                    0                    0
 Filtered          27032                1                    27033
 Pause Frames      0                    0                    0
 Errors            0                    0                    0
 CRC/FCS           0                    n/a                  0
 Collision         n/a                  0                    0
 Runts             0                    n/a                  0
 Giants            0                    n/a                  0

csw-bu-r02# sh int 1/1/54
Interface 1/1/54 is up
 Admin state is up
 Link state: up for 2 months (since Thu Mar 10 21:49:10 CET 2022)
 Link transitions: 1
 Description: csw-rz-r09_P_54
 Persona:
 Hardware: Ethernet, MAC Address: b8:d4:e7:de:f9:8d
 MTU 9198
 Type QSFP+LR4
 Full-duplex
 qos trust dscp
 Speed 40000 Mb/s
 Auto-negotiation is off
 Flow-control: off
 Error-control: off
 Rate collection interval: 300 seconds

 Rate              RX                   TX                   Total (RX+TX)
 ----------------  -------------------- -------------------- --------------------
 Mbits / sec       0.00                 0.00                 0.00
 KPkts / sec       0.00                 0.00                 0.00
 Unicast           0.00                 0.00                 0.00
 Multicast         0.00                 0.00                 0.00
 Broadcast         0.00                 0.00                 0.00
 Utilization       0.00                 0.00                 0.00

 Statistic         RX                   TX                   Total
 ----------------  -------------------- -------------------- --------------------
 Packets           40254565             19283120             59537685
 Unicast           35595425             18812196             54407621
 Multicast         1928131              470924               2399055
 Broadcast         2731009              0                    2731009
 Bytes             14028740126          2766688451           16795428577
 Jumbos            5484388              113876               5598264
 Dropped           0                    0                    0
 Filtered          16                   1                    17
 Pause Frames      0                    0                    0
 Errors            0                    0                    0
 CRC/FCS           0                    n/a                  0
 Collision         n/a                  0                    0
 Runts             0                    n/a                  0
 Giants            0                    n/a                  0
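The next thing I can think of is to check the sending side and the LAG itself at the same timestamps, roughly like this (whether the root bridge logs anything useful at those times is just my assumption):

csw-rz-r08# show logging -r | include mstpd
csw-rz-r08# show spanning-tree detail
csw-bu-r02# show lacp interfaces

That should show whether csw-rz-r08 logged any MSTP events at those moments and whether the LACP state of lag53 on csw-bu-r02 looks clean.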
------------------------------
Robert Großmann
------------------------------