Wireless Access


Access network design for branch, remote, outdoor, and campus locations with HPE Aruba Networking access points and mobility controllers.

High Fragmentation

  • 1.  High Fragmentation

    Posted Jan 24, 2012 10:30 AM

    Is it normal to have 40-60% packet fragmentation for clients with sustained bandwidth usage?

     

    I am observing this in Monitoring > Clients > Client Activity > Aggregate Statistics

     

    This seems to apply to all clients, on various SSIDs with WEP/WPA2-AES/CaptivePortal.



  • 2.  RE: High Fragmentation

    EMPLOYEE
    Posted Jan 24, 2012 11:05 AM

    @drewbaldock wrote:

    Is it normal to have 40-60% packet fragmentation for clients with sustained bandwidth usage?

     

    I am observing this in Monitoring > Clients > Client Activity > Aggregate Statistics

     

    This seems to apply to all clients, on various SSIDs with WEP/WPA2-AES/CaptivePortal.


    I have never personally seen that as an issue.  It would be good if others would chime in.



  • 3.  RE: High Fragmentation

    Posted Jan 25, 2012 02:40 PM

    If the client sends a packet larger than the MTU, it will be fragmented. We would need to better understand the configured MTU on the controller, as well as on its upstream switches. Can you get an over-the-air packet capture to assess the fragmentation?
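
    If it helps, here is a minimal sketch of a path MTU probe you could run toward a client from the wired side: it binary-searches the largest ICMP payload that gets through with the Don't Fragment bit set. It assumes a Linux host where ping supports -M do; the target address is a placeholder, and the flags differ on other platforms.

    #!/usr/bin/env python3
    # Rough path-MTU probe toward a client: binary-search the largest ICMP
    # payload that gets through with the Don't Fragment bit set.
    # Assumes Linux ping ("-M do" sets DF); other platforms use different flags.
    import subprocess
    import sys

    def ping_df(host, payload):
        # One DF-marked ping with `payload` padding bytes; True if it got a reply.
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", "-M", "do", "-s", str(payload), host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    def probe_path_mtu(host, lo=0, hi=1472):
        # Largest payload that passes unfragmented; payload + 28 bytes of
        # IP/ICMP headers approximates the path MTU.
        best = -1
        while lo <= hi:
            mid = (lo + hi) // 2
            if ping_df(host, mid):
                best, lo = mid, mid + 1
            else:
                hi = mid - 1
        return best

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "192.0.2.10"  # placeholder client IP
        payload = probe_path_mtu(target)
        if payload < 0:
            print(f"{target}: no DF-marked ping got through")
        else:
            print(f"{target}: largest unfragmented payload {payload} bytes (~path MTU {payload + 28})")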



  • 4.  RE: High Fragmentation

    Posted Jul 31, 2013 12:50 AM

    I am also experiencing this issue in a certain building.

     

    Site #1

    2 controllers (6.1.3.5 code)

    6 buildings with ~ 40 APs per building

    Per-floor AP groups with dedicated floor VLANs - 20 MHz channel width (both radios)

    Switch port MTU is 1500

    Controller left at the default AP port MTU

    Clients complain about slow printing

     

    Site #2

    2 Controllers (6.1.3.5 code)

    1 building with 30 APs

    One AP group with a dedicated VLAN - 20 MHz channel width (both radios)

    Switch port MTU is 1500

    Controller left at the default AP port MTU

    No reported issues, but I will revisit the site to run the identical printing test.


    Site #1

    ICMP Test:


    15 ping sessions to client IP addresses (run overnight against long-term-connected WLAN clients) using 1533-byte padded payloads for ~100 seconds (a script for driving these pings is sketched at the end of this post).

    Result: at the problem site, clients hit 40%+ fragmented packets at around 760 kbps.

    15 ping sessions to client IP addresses (run overnight against long-term-connected WLAN clients) using 32-byte padded payloads for 300 seconds.

    Result: at the problem site, clients hit 0% fragmented packets at around 25 kbps.

    Note: on a repeat of this test, fragmented packets again showed as over 40%.

    During the printing test, fragmented packets were over 40%.

     

    AP ports are set to auto speed/duplex

    AP switch ports show oversized frames and giants (discards) incrementing slowly and steadily, along with input errors

    Controller uplink uses LACP with four 1 Gb ports - no errors on the physical interfaces or on the port-channel interface

    Note: the incrementing input errors and giant discards appear across multiple AP ports

     

    Site #2

    ICMP Test:


    15 ping sessions to client IP addresses (run overnight against long-term-connected WLAN clients) using 1533-byte padded payloads for ~100 seconds.

    Result: at the non-problem site, clients hit 0% fragmented packets at around 3000 kbps.

     

    15 ping sessions to client IP addresses (run overnight against long-term-connected WLAN clients) using 32-byte padded payloads for 300 seconds.

    Result: at this site, clients hit 0% fragmented packets at around 25 kbps.

    Note: ran this test again for 45 minutes using the 1533-byte payload, with 0% fragmented packets.

     

    Still need to test printing.

     

    AP ports are set to auto speed/duplex

    AP switch ports show oversized frames and giants (discards) incrementing slowly and steadily, but with no input errors

    Controller uplink uses LACP with four 1 Gb ports - no errors on the physical interfaces or on the port-channel interface

     

    Other forum posts seem to point to changing the controller's default SAP MTU to 1500 and retesting - I don't see the same problem so far (running the ICMP tests tonight). The two sites use distinct Catalyst switch models running different IOS trains (Site #1 uses a 4510R+E and Site #2 uses a WS-6506-E; line cards omitted for now).
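
    For reference, here is a rough sketch of how these overnight ping runs could be scripted (assuming Linux ping; the client IPs and durations are placeholders). The fragmentation percentages themselves still come from the controller under Monitoring > Clients > Client Activity > Aggregate Statistics; this only generates the probe traffic and logs loss/RTT.

    #!/usr/bin/env python3
    # Drive the overnight ICMP comparison: ping each client with a small and a
    # large padded payload and log packet loss and RTT. The fragmentation
    # percentages are still read from the controller; this only generates the
    # probe traffic. Assumes Linux ping; IPs and durations are placeholders.
    import re
    import subprocess
    import time

    CLIENTS = ["192.0.2.21", "192.0.2.22"]   # placeholder WLAN client IPs
    TESTS = [(32, 300), (1533, 100)]         # (payload bytes, duration in seconds)

    def ping_run(host, payload, seconds):
        # One ping per second for `seconds`, padded to `payload` bytes.
        out = subprocess.run(
            ["ping", "-i", "1", "-c", str(seconds), "-s", str(payload), host],
            capture_output=True, text=True).stdout
        loss = re.search(r"([\d.]+)% packet loss", out)
        rtt = re.search(r"= ([\d./]+) ms", out)
        return (f"{time.strftime('%H:%M:%S')} {host} payload={payload}B "
                f"loss={loss.group(1) if loss else '?'}% "
                f"rtt(min/avg/max/mdev)={rtt.group(1) if rtt else '?'} ms")

    if __name__ == "__main__":
        with open("icmp_test.log", "a") as log:
            for host in CLIENTS:
                for payload, seconds in TESTS:
                    line = ping_run(host, payload, seconds)
                    print(line)
                    log.write(line + "\n")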




  • 5.  RE: High Fragmentation

    EMPLOYEE
    Posted Jul 31, 2013 01:12 AM

    Matthew.McKenna,

     

    Is slow printing the only complaint in that building? Fragmentation on the wireless side does not by itself guarantee that traffic will be degraded significantly, or even noticeably. Application performance - across ALL applications - is much more important to baseline.

    Typically, if application performance is poor, there are plenty of other issues to look at before fragmentation would be the main one.

     

    The giants on the access point ports are the result of the access points doing path MTU discovery.  If you change the MTU to 1500 in the AP system profile, it will stop the giants.

     

    Your indicator that you have a problem should not be the number of fragmented packets, but application performance.
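
    As a rough starting point for that kind of baseline, here is a minimal sketch of a raw TCP throughput check between a wired host and a WLAN client; the port and transfer size are arbitrary placeholders, and a standard tool such as iperf does the same job.

    #!/usr/bin/env python3
    # Tiny TCP throughput baseline: run with no arguments on one end (e.g. the
    # WLAN client) to receive, and with the receiver's IP as an argument on the
    # other end to send. Port and transfer size are arbitrary placeholders.
    import socket
    import sys
    import time

    PORT = 5201                 # arbitrary test port
    CHUNK = 64 * 1024
    TOTAL = 50 * 1024 * 1024    # 50 MB per run

    def receive():
        with socket.create_server(("", PORT)) as srv:
            conn, addr = srv.accept()
            with conn:
                got, start = 0, time.time()
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    got += len(data)
                secs = time.time() - start
                print(f"{addr[0]}: {got / 1e6:.1f} MB in {secs:.1f}s "
                      f"= {got * 8 / secs / 1e6:.1f} Mbit/s")

    def send(host):
        payload = b"\x00" * CHUNK
        with socket.create_connection((host, PORT)) as sock:
            sent = 0
            while sent < TOTAL:
                sock.sendall(payload)
                sent += len(payload)

    if __name__ == "__main__":
        send(sys.argv[1]) if len(sys.argv) > 1 else receive()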




  • 6.  RE: High Fragmentation

    Posted Sep 10, 2013 04:40 PM

    As it stands, we have isolated the problem to a particular Catalyst switch model and IOS version on an upstream switch. The current resolution was to apply an L2 VLAN service-policy "input" with a specific class for default traffic; all of the other VLANs on this switch had a service-policy with various classes applied, and this one did not. We realized this when we moved one of the APs at this location to a new management VLAN to further isolate possible issues caused by the network itself. Once we added the VLAN policy, traffic flowed as desired and application performance was no longer an issue. Each of the access-layer ports had a similar service-policy "output" applied, and it was interesting that only upstream traffic (from the user device) was affected.

     

    We did not have this issue when, for example, the upstream switch was a 6509-E with a Sup720, versus a 6509-E with a Sup2T running newer IOS.

    The other key difference was that the access-layer switches at the offending site were 4500s rather than 6500s.

     


    We are continuing our review, but at this point it appears that unless we mark the traffic on both the interface and the L2 VLAN, we see some odd classification and policy treatment applied to traffic in the upstream direction.
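
    For anyone checking similar behavior, below is a hedged sketch of a client-side verification aid (not the Catalyst service-policy configuration itself): it sends UDP probes with a known DSCP marking from a WLAN client so a capture on the upstream switch can show whether the classification and marking survive in the upstream direction. The destination, port, and DSCP value are placeholders.

    #!/usr/bin/env python3
    # Send UDP probes with a chosen DSCP marking from a WLAN client so a capture
    # on the upstream switch can confirm whether classification and marking
    # survive in the upstream direction. Destination, port, and DSCP value are
    # placeholders; this is a verification aid, not the switch-side policy.
    import socket
    import time

    DEST = ("192.0.2.50", 9999)   # placeholder wired-side capture target
    DSCP = 46                     # e.g. EF; use the class you expect to match
    COUNT = 100

    def send_probes():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # The DSCP value sits in the upper six bits of the IP TOS byte.
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP << 2)
        for i in range(COUNT):
            sock.sendto(f"dscp-probe {i}".encode(), DEST)
            time.sleep(0.1)
        sock.close()
        print(f"sent {COUNT} probes to {DEST[0]}:{DEST[1]} with DSCP {DSCP}")

    if __name__ == "__main__":
        send_probes()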