Thanks Vincent, I have loaded up this config and will observe what happens over the next few days
Original Message:
Sent: Jun 16, 2021 05:32 AM
From: Vincent Giles
Subject: Aruba-CX port queuing and TX drops
First, look at the factory-default values for the queue-profile and the schedule-profile:
8360-1# show qos queue-profile
profile_status profile_name
-------------- ------------
applied        factory-default

8360-1# show qos queue-profile factory-default
queue_num local_priorities name
--------- ---------------- ----
0         0                Scavenger_and_backup_data
1         1
2         2
3         3
4         4
5         5
6         6
7         7

8360-1# show qos schedule-profile
profile_status profile_name
-------------- ------------
applied        factory-default

8360-1# show qos schedule-profile factory-default
queue_num algorithm percent weight max-bandwidth_kbps
--------- --------- ------- ------ ------------------
0         dwrr              1
1         dwrr              1
2         dwrr              1
3         dwrr              1
4         dwrr              1
5         dwrr              1
6         dwrr              1
7         dwrr              1
Then please consider that:
- Queue profiles must have contiguous queues, starting at queue 0.
- The queue profile and the schedule profile must contain the same set of queues.
Create a custom queue profile that merges local-priorities:
qos queue-profile custom1
    map queue 0 local-priority 0,1
    map queue 1 local-priority 2,3
    map queue 2 local-priority 4,5
    map queue 3 local-priority 6,7

8360-1# show qos queue-profile custom1
queue_num local_priorities name
--------- ---------------- ----
0         0,1
1         2,3
2         4,5
3         6,7
Create your custom schedule-profile. Here is the show output once it is created:
8360-1# show qos schedule-profile cust1
queue_num algorithm percent weight max-bandwidth_kbps
--------- --------- ------- ------ ------------------
0         dwrr              1
1         dwrr              1
2         dwrr              1
3         dwrr              1
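The configuration commands that produce this profile are not shown above; assuming the standard AOS-CX schedule-profile syntax (verify the exact commands against your release's CLI reference), they would look roughly like:

```
qos schedule-profile cust1
    dwrr queue 0 weight 1
    dwrr queue 1 weight 1
    dwrr queue 2 weight 1
    dwrr queue 3 weight 1
```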
And apply the custom queue-profile together with the custom schedule-profile globally:
8360-1(config)# apply qos queue-profile custom1 schedule-profile cust1
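After applying, the same show commands used earlier should confirm the change; going by the output format shown above, something like the following would be expected (output assumed, not captured from a live switch):

```
8360-1# show qos queue-profile
profile_status profile_name
-------------- ------------
applied        custom1
```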
I hope this helps.
------------------------------
Vincent Giles
Original Message:
Sent: Jun 15, 2021 06:57 PM
From: Campbell Simpson
Subject: Aruba-CX port queuing and TX drops
Yes, merging queues would be a viable option for me. Do you know what that would look like configuration-wise? The documentation only mentions the 6 queues per port. Is it a matter of just mapping traffic to only 3 of those?
------------------------------
Campbell Simpson
Original Message:
Sent: Jun 15, 2021 05:30 PM
From: Vincent Giles
Subject: Aruba-CX port queuing and TX drops
The trial with 10G was just to make sure this was congestion due to bandwidth difference, but it is likely to be the case.
Is there a way to merge some CoS together?
If you merge local-priorities 0/1, 2/3, 4/5, and 6/7, you reduce the queue-profile to half of the current queues,
with the positive impact of having twice as much buffer space per queue.
You should then see minimal drops.
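The buffer arithmetic behind this suggestion can be sketched as follows. This is a minimal illustration only: it assumes the egress buffer is split evenly across the queues in the applied profile, and the per-port buffer figure is a made-up placeholder, not an 8360 specification.

```python
# Sketch: merging local-priorities halves the queue count, doubling
# the buffer available to each remaining queue.
# PORT_BUFFER_BYTES is hypothetical, for illustration only.

PORT_BUFFER_BYTES = 2 * 1024 * 1024  # assumed per-port egress buffer

def buffer_per_queue(num_queues: int) -> int:
    """Bytes each queue gets if the port buffer is split evenly."""
    return PORT_BUFFER_BYTES // num_queues

default_share = buffer_per_queue(8)  # factory-default: 8 queues
merged_share = buffer_per_queue(4)   # custom profile: 4 queues

# Half the queues -> twice the buffer per queue, so each queue can
# absorb a burst twice as long before tail-dropping.
assert merged_share == 2 * default_share
```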
------------------------------
Vincent Giles
Original Message:
Sent: Jun 15, 2021 05:08 PM
From: Campbell Simpson
Subject: Aruba-CX port queuing and TX drops
That is my guess; however, I'm hoping there is some way to view or set buffer depths. If a test with 10G downlinks shows the drops not occurring, then I'm left with an expensive decision: go with 10G optics, or hope my customer doesn't notice the tail dropping.
------------------------------
Campbell Simpson
Original Message:
Sent: Jun 15, 2021 01:26 PM
From: Vincent Giles
Subject: Aruba-CX port queuing and TX drops
It looks like a traffic burst from the receiving 10G link to the forwarding 1G link.
Drops would be expected: at any given moment the buffer can fill up due to the bandwidth difference.
Drops should be seen on the lower-priority queues.
Could you temporarily use 10G interfaces from the 2930 to check that the drops no longer happen?
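To put rough numbers on this burst scenario, here is a minimal sketch. The per-queue buffer size is an assumed placeholder (actual 8360 buffer allocation is not stated anywhere in this thread); only the rate arithmetic is exact.

```python
# How quickly a line-rate 10G burst overflows a queue draining at 1G.
# QUEUE_BUFFER_BYTES is an assumption for illustration only.

INGRESS_BPS = 10e9               # 10 Gb/s in from the router / MC
EGRESS_BPS = 1e9                 # 1 Gb/s out toward the 2930F
QUEUE_BUFFER_BYTES = 256 * 1024  # hypothetical per-queue buffer

# During a burst, queue depth grows at the rate difference.
fill_rate_bytes_per_s = (INGRESS_BPS - EGRESS_BPS) / 8

# A sustained line-rate burst overflows the queue after:
time_to_overflow_s = QUEUE_BUFFER_BYTES / fill_rate_bytes_per_s
print(f"queue overflows after ~{time_to_overflow_s * 1e6:.0f} us")  # ~233 us
```

The point is the timescale: with a 9 Gb/s rate mismatch, even a generous buffer is exhausted in a few hundred microseconds, which is why drops appear despite low average traffic volumes.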
------------------------------
Vincent Giles
Original Message:
Sent: Jun 14, 2021 04:42 PM
From: Campbell Simpson
Subject: Aruba-CX port queuing and TX drops
I have a VSX pair of 8360s which connect to a 2930F access switch via a 2 x 1Gbps multi-chassis LAG (one 1G port per 8360). Upstream of that, my site router and mobility controller are both connected at 10Gbps. The 8360 is showing output queue drops egressing towards the 2930F. We are still getting this site set up and tested, so traffic volumes are minimal, yet I'm seeing output queue drops. The drops look to be traffic-volume related, as the counters aren't continuously incrementing.
I'm running 10.07 and it doesn't look like I can see any useful information on how queuing is working, let alone control things like buffer depths. Am I missing something?
I'm wondering if I'm encountering microbursts, given the 10Gbps upstream connections to the MCs and site router but only a 2 x 1Gbps LAG to the access switch.
# sh interface 1/1/4 que
Interface 1/1/4 is up
Admin state is up
     Tx Bytes     Tx Packets  Tx Drops
Q0   173131       545         0
Q1   18862005973  28129466    2848
Q2   23867        104         0
Q3   6517056      39015       0
Q4   4409464      5150        0
Q5   505826240    181957      0
Q6   9001512      35934       0
Q7   44399772     350220      0
# sh int 1/1/4 qos
Interface 1/1/4 is up
Admin state is up
qos trust dscp (global)
qos queue-profile factory-default (global)
qos schedule-profile factory-default (global)
# sh int lag4 qos
Aggregate-name lag4
Admin state is up
qos trust dscp (global)
qos queue-profile factory-default (global)
qos schedule-profile factory-default (global)
------------------------------
Campbell
------------------------------