Aruba Instant & Cloud Wi-Fi

Contributor II
Posts: 48
Registered: ‎02-23-2017

Re: Client throughput capped at 40M instead of +200M occasionally

Of all the QoS Data frames:

Upload: 1164/16970 (6.9%) are retransmissions.

Download: 193/4612 (4.2%) are retransmissions.
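
For anyone wanting to reproduce this kind of tally: a minimal scapy sketch (the filename is a placeholder) that counts QoS Data frames with the Retry bit set in a monitor-mode capture. Filtering on the transmitter address (addr2) would split upload from download.

from scapy.all import rdpcap
from scapy.layers.dot11 import Dot11

total = retries = 0
for p in rdpcap("monitor-capture.pcap"):   # placeholder filename
    if p.haslayer(Dot11):
        d = p[Dot11]
        if d.type == 2 and d.subtype == 8:   # QoS Data frame
            total += 1
            if d.FCfield & 0x08:             # Retry bit in Frame Control
                retries += 1

print(f"{retries}/{total} ({100 * retries / max(total, 1):.1f}%) retransmissions")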

 

Guru Elite
Posts: 20,822
Registered: ‎03-29-2007

Re: Client throughput capped at 40M instead of +200M occasionally

The 400 Mbps number is the "rate", not the throughput. I would open a TAC case and send them the packet capture and tech support from when it transitions from 400 to 40, to make sense of what is being seen.



Colin Joseph
Aruba Customer Engineering

Looking for an Answer? Search the Community Knowledge Base.

Contributor II
Posts: 48
Registered: ‎02-23-2017

Re: Client throughput capped at 40M instead of +200M occasionally

Yes, I'm not expecting to get a throughput of 400M. I'm only expecting to keep the throughput around 200M, which is what I see after a fresh reboot of the client.

 

I did a capture now after a fresh reboot where I can upload 200Mbps, and it looks like this:

[Screenshot: Screen Shot 2017-04-14 at 17.33.41.png]

But after it has dropped to 40M throughput it looks like this:

 

[Screenshot: Screen Shot 2017-04-14 at 17.33.12.png]

 

So when everything is good there is just a "Block Ack" every now and then, but when it misbehaves there is an "Acknowledge" frame sent back to the client after every single frame.

 

Does that give any clues?

 

I'll go ahead and open a TAC case as well now - Big thanks for your help so far.

 

Contributor II
Posts: 48
Registered: ‎02-23-2017

Re: Client throughput capped at 40M instead of +200M occasionally

Pcaps attached for the curious.

 

iap-send-good100 is 100 packets from a newly rebooted client working fine (uploading 200 Mbps+).

 

iap-send-100 is 100 packets from the same client a little later, when it's only able to upload 40 Mbps.
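
For anyone who wants to compare the two captures programmatically rather than by eyeballing, a quick scapy sketch (assuming the attachments are plain .pcap files) that tallies plain Ack vs Block Ack control frames per capture:

from scapy.all import rdpcap
from scapy.layers.dot11 import Dot11

for name in ("iap-send-good100.pcap", "iap-send-100.pcap"):
    counts = {"qos_data": 0, "ack": 0, "block_ack": 0}
    for p in rdpcap(name):
        if p.haslayer(Dot11):
            d = p[Dot11]
            if d.type == 2 and d.subtype == 8:      # QoS Data
                counts["qos_data"] += 1
            elif d.type == 1 and d.subtype == 13:   # Ack (control frame)
                counts["ack"] += 1
            elif d.type == 1 and d.subtype == 9:    # Block Ack (control frame)
                counts["block_ack"] += 1
    print(name, counts)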

Contributor II
Posts: 48
Registered: ‎02-23-2017

Re: Client throughput capped at 40M instead of +200M occasionally

Found another interesting fact. According to Wireshark on my client laptop, I'm sending TCP packets as 1514-byte frames in both cases.

 

According to Wireshark on my Mac, passively sniffing the air in monitor mode, these frames are sent as 1606-byte frames when it's working well, and 1610-byte frames when it's not.

 

Now to the really interesting part. On the desktop running the iperf server, if I tcpdump the traffic there in the non-working case (40 Mbps), I see all the frames as standard 1514-byte frames:

 

18:10:32.815832 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 1514: 192.168.88.53.51340 > 192.168.88.99.5201: Flags [.], seq 9808688:9810148, ack 1, win 53248, length 1460
18:10:32.816144 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 1514: 192.168.88.53.51340 > 192.168.88.99.5201: Flags [.], seq 9810148:9811608, ack 1, win 53248, length 1460
18:10:32.816149 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.51340: Flags [.], ack 9811608, win 936, length 0
18:10:32.816644 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 1514: 192.168.88.53.51340 > 192.168.88.99.5201: Flags [.], seq 9811608:9813068, ack 1, win 53248, length 1460
18:10:32.817615 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 1514: 192.168.88.53.51340 > 192.168.88.99.5201: Flags [.], seq 9813068:9814528, ack 1, win 53248, length 1460
18:10:32.817623 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.51340: Flags [.], ack 9814528, win 936, length 0
18:10:32.817874 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 1514: 192.168.88.53.51340 > 192.168.88.99.5201: Flags [.], seq 9814528:9815988, ack 1, win 53248, length 1460
18:10:32.818421 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 1514: 192.168.88.53.51340 > 192.168.88.99.5201: Flags [.], seq 9815988:9817448, ack 1, win 53248, length 1460
18:10:32.818426 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.51340: Flags [.], ack 9817448, win 936, length 0
18:10:32.818788 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 1514: 192.168.88.53.51340 > 192.168.88.99.5201: Flags [.], seq 9817448:9818908, ack 1, win 53248, length 1460
18:10:32.819111 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 1514: 192.168.88.53.51340 > 192.168.88.99.5201: Flags [.], seq 9818908:9820368, ack 1, win 53248, length 1460
18:10:32.819116 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.51340: Flags [.], ack 9820368, win 936, length 0
18:10:32.819373 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 1514: 192.168.88.53.51340 > 192.168.88.99.5201: Flags [.], seq 9820368:9821828, ack 1, win 53248, length 1460
18:10:32.819838 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 1514: 192.168.88.53.51340 > 192.168.88.99.5201: Flags [.], seq 9821828:9823288, ack 1, win 53248, length 1460

 

However, when it's working well, I receive them as "jumbo frames":

 

18:16:12.632084 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 4434: 192.168.88.53.49771 > 192.168.88.99.5201: Flags [.], seq 55102472:55106852, ack 1, win 53248, length 4380
18:16:12.632091 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.49771: Flags [.], ack 55106852, win 3709, length 0
18:16:12.632135 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 5894: 192.168.88.53.49771 > 192.168.88.99.5201: Flags [.], seq 55106852:55112692, ack 1, win 53248, length 5840
18:16:12.632141 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.49771: Flags [.], ack 55112692, win 3709, length 0
18:16:12.632185 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 5894: 192.168.88.53.49771 > 192.168.88.99.5201: Flags [.], seq 55112692:55118532, ack 1, win 53248, length 5840
18:16:12.632192 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.49771: Flags [.], ack 55118532, win 3709, length 0
18:16:12.632236 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 5894: 192.168.88.53.49771 > 192.168.88.99.5201: Flags [.], seq 55118532:55124372, ack 1, win 53248, length 5840
18:16:12.632242 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.49771: Flags [.], ack 55124372, win 3709, length 0
18:16:12.632285 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 5894: 192.168.88.53.49771 > 192.168.88.99.5201: Flags [.], seq 55124372:55130212, ack 1, win 53248, length 5840
18:16:12.632292 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.49771: Flags [.], ack 55130212, win 3709, length 0
18:16:12.632339 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 1514: 192.168.88.53.49771 > 192.168.88.99.5201: Flags [.], seq 55130212:55131672, ack 1, win 53248, length 1460
18:16:12.632528 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 4434: 192.168.88.53.49771 > 192.168.88.99.5201: Flags [.], seq 55131672:55136052, ack 1, win 53248, length 4380
18:16:12.632534 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.49771: Flags [.], ack 55136052, win 3709, length 0
18:16:12.632580 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 5894: 192.168.88.53.49771 > 192.168.88.99.5201: Flags [.], seq 55136052:55141892, ack 1, win 53248, length 5840
18:16:12.632587 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.49771: Flags [.], ack 55141892, win 3709, length 0
18:16:12.632630 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 5894: 192.168.88.53.49771 > 192.168.88.99.5201: Flags [.], seq 55141892:55147732, ack 1, win 53248, length 5840
18:16:12.632636 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.49771: Flags [.], ack 55147732, win 3709, length 0
18:16:12.632680 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 5894: 192.168.88.53.49771 > 192.168.88.99.5201: Flags [.], seq 55147732:55153572, ack 1, win 53248, length 5840
18:16:12.632686 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.49771: Flags [.], ack 55153572, win 3709, length 0
18:16:12.632729 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 2974: 192.168.88.53.49771 > 192.168.88.99.5201: Flags [.], seq 55153572:55156492, ack 1, win 53248, length 2920
18:16:12.632735 d4:be:d9:a3:bf:d3 > a4:34:d9:63:f9:0c, ethertype IPv4 (0x0800), length 54: 192.168.88.99.5201 > 192.168.88.53.49771: Flags [.], ack 55156492, win 3709, length 0
18:16:12.632779 a4:34:d9:63:f9:0c > d4:be:d9:a3:bf:d3, ethertype IPv4 (0x0800), length 4434: 192.168.88.53.49771 > 192.168.88.99.5201: Flags [.], seq 55156492:55160872, ack 1, win 53248, length 4380
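
To make the difference easier to see than scrolling through dumps, a small sketch that histograms the Ethernet frame lengths (assuming the tcpdump runs were saved with -w; the filenames below are made up). In the bad case everything should sit at 1514; in the good case at 1514 + n*1460 (2974, 4434, 5894), consistent with aggregated A-MSDUs being forwarded onto the wire as single jumbo frames (or with receive offload coalescing on the server NIC).

from collections import Counter
from scapy.all import rdpcap

for name in ("server-side-bad.pcap", "server-side-good.pcap"):  # made-up names
    lengths = Counter(len(p) for p in rdpcap(name))
    print(name, sorted(lengths.items()))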

 

 

Aruba Employee
Posts: 95
Registered: ‎09-10-2015

Re: Client throughput capped at 40M instead of +200M occasionally

As far as I can see after comparing the two pcaps:

 

iap-send-100 has a single Ack (NOT a Block Ack) from the IAP to the client, sent at 24 Mbps every other frame, while iap-send-good100 has a Block Ack sent at 24 Mbps after 30-odd frames.

 

What made the IAP send Block Acks in one situation and not in the other? A show tech, which lists client capabilities, might be able to tell.

 

Kindly share one for the non-working and working scenarios, like you did for the pcaps.

 

Please also point out the MAC of the good client (I assume a4:34:d9:63:f9:0c) and the bad client (the one whose introduction makes throughput drop). If you have already shared this info, my apologies; please share it once more with the MAC address info.

Contributor II
Posts: 48
Registered: ‎02-23-2017

Re: Client throughput capped at 40M instead of +200M occasionally

I've come to the same conclusion and am just reading up on frame aggregation (A-MPDU & A-MSDU).

 

Attached are the two show-techs.

 

Correct, a4:34:d9:63:f9:0c is the client I'm testing with, both in the working and non-working cases. After I reboot this client it "works" and then a few minutes later it starts misbehaving.

 

I doubt it can only be that it stops performing frame aggregation, though. I think it's more likely something is causing it to push back on several different capabilities, and frame aggregation is just the one we can easily identify.

 

I base that conclusion on the fact that sometimes the client locks at a throughput of only 20M, sometimes about 60M, but most times at 40M (practical TCP throughputs, not bitrates), while frame aggregation is an on/off capability, and I also doubt that it alone should decrease the throughput from 200M to 40M.
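
For a sense of scale, here's a crude airtime estimate (every constant below is an assumption, rough 5 GHz OFDM figures) of what per-frame acking costs compared to an A-MPDU burst with a single Block Ack:

# Back-of-envelope only; all timing constants are assumptions.
MSDU = 1500 * 8            # bits of payload per MPDU
PHY = 400e6                # data rate reported by the IAP (bits/s)
T_DATA = MSDU / PHY        # ~30 us of payload airtime
PREAMBLE = 40e-6           # HT preamble + PLCP, rough
DIFS, SIFS = 34e-6, 16e-6
BACKOFF = 67.5e-6          # average contention: 7.5 slots * 9 us
ACK = 44e-6                # legacy Ack at 24 Mbps, incl. preamble

# One MPDU per channel access, acked individually:
per_frame = DIFS + BACKOFF + PREAMBLE + T_DATA + SIFS + ACK
print("no aggregation:", MSDU / per_frame / 1e6, "Mbps")     # ~52 Mbps

# 30 MPDUs per A-MPDU, one Block Ack for the whole burst:
N = 30
burst = DIFS + BACKOFF + PREAMBLE + N * T_DATA + SIFS + ACK
print("A-MPDU of 30:  ", N * MSDU / burst / 1e6, "Mbps")     # ~327 Mbps

With these assumed numbers, per-frame acking alone already pulls the ceiling down into the 40-50M region almost regardless of the PHY rate, since the fixed per-frame overhead dominates; so losing aggregation could explain more of the drop than I'd guessed. Still, the 20M/60M variation suggests something else changes too.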

 

But if we can figure out why this is being pushed back, we might understand the rest as well...

 

Aruba Employee
Posts: 95
Registered: ‎09-10-2015

Re: Client throughput capped at 40M instead of +200M occasionally

What is the exact change from the working scenario to the non-working scenario? Does just associating the MacBook client f4:0f:24:19:b0:f9 (HT state AWvSsEBbM), after X minutes, bring down the throughput of a4:34:d9:63:f9:0c (HT state AWvSsEeBbM) from 200 to 40?

 

Power state (awake / power save) has no effect on the results, right?

 

HT state Legend:

 

UAPSD:(VO,VI,BK,BE,Max SP,Q Len)
HT Flags: A - LDPC Coding; W - 40MHz; S - Short GI 40; s - Short GI 20
D - Delayed BA; G - Greenfield; R - Dynamic SM PS
Q - Static SM PS; N - A-MPDU disabled; B - TX STBC
b - RX STBC; M - Max A-MSDU; I - HT40 Intolerant; t turbo-rates (256-QAM)
VHT Flags: C - 160MHz/80+80MHz; c - 80MHz; V - Short GI 160; v - Short GI 80
E - Beamformee; e - Beamformer
HT_State shows client's original capabilities (not operational capabilities)
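
Since the two HT state strings above differ only subtly, a throwaway sketch to decode and diff them against this legend:

# Flag meanings transcribed from the legend above.
FLAGS = {
    "A": "LDPC Coding", "W": "40MHz", "S": "Short GI 40", "s": "Short GI 20",
    "D": "Delayed BA", "G": "Greenfield", "R": "Dynamic SM PS",
    "Q": "Static SM PS", "N": "A-MPDU disabled", "B": "TX STBC",
    "b": "RX STBC", "M": "Max A-MSDU", "I": "HT40 Intolerant",
    "t": "turbo-rates (256-QAM)", "C": "160MHz/80+80MHz", "c": "80MHz",
    "V": "Short GI 160", "v": "Short GI 80", "E": "Beamformee", "e": "Beamformer",
}

test_client = "AWvSsEeBbM"   # a4:34:d9:63:f9:0c
macbook     = "AWvSsEBbM"    # f4:0f:24:19:b0:f9

for f in set(test_client) ^ set(macbook):
    print(f, FLAGS.get(f, "?"))   # prints: e Beamformer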

 

You are on 4.3.0.0; are you open to trying 4.3.1.3, since it's your lab? You can always go back to the same software with a switch-partition reboot.

 

 

Contributor II
Posts: 48
Registered: ‎02-23-2017

Re: Client throughput capped at 40M instead of +200M occasionally

I haven't been able to isolate what triggers the degradation. At first I thought it was when I connected other clients, sometimes my iPhone, sometimes another client, etc. But for my last 5 reboot attempts I've done nothing (on purpose) other than wait a few minutes.

 

I'll try to shut down all other clients to see if it still happens.

 

Regarding Power Save, I've set the Windows Power Options for Wi-Fi to "Maximum Performance" on both AC and DC. That made quite a difference when running on DC, but all tests I do now are on AC.

 

I can't see that the capabilities (HT_State) change for my specific client between working and non-working.

 

I'm happy to try new versions. While testing last week at my customer's site using IAP-305s, we tested on 4.3.1.0 and 4.3.1.1 with similar results.

 

I'll try with 4.3.1.3 (if it's available from the Aruba support site) on my IAP-315 now...

Contributor II
Posts: 48
Registered: ‎02-23-2017

Re: Client throughput capped at 40M instead of +200M occasionally

I've now turned all other clients off and cleared the client table in the IAP. Everything worked fine with my single client, so I started thinking it was some other client triggering it.

 

Then I just disconnected my client and reconnected again, and BOOM, the problem is back...

 

Attached is another show-tech with this single client connected.

 

I'll now proceed with the upgrade. I don't see any obvious fixes in the release notes, but it can't hurt to test.
