Airwave is reporting slow authentication for all of my access points, on the order of 1700-7600ms. DHCP is showing anywhere from 265-1700ms. I'm wondering how this is determined and what I can do to fix it.
I verified the RADIUS server selected for my dot1x authentication is the local one. A ping from my controller to the server averages 158ms; a ping to the DHCP server is about the same at 157ms.
What I find odd here is that the NPS server that authenticates our user traffic is on the same floor as the controllers and APs with the highest auth response times, and the DHCP server is on the same hypervisor.
Any thoughts or misconfigurations I may have on my controller, or is this really just the slowness of the servers?
Apologies -- I had someone else do the basic ping checks and assumed they were correct. A typical ping to our NPS server is:
(aruba-01) #ping 10.1.1.1
Press 'q' to abort.
Sending 5, 92-byte ICMP Echos to 10.1.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 0.134/0.1638/0.222 ms
One to the DHCP server is:
(aruba-01) #ping 10.1.1.2
Press 'q' to abort.
Sending 5, 92-byte ICMP Echos to 10.1.1.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 0.135/0.1566/0.213 ms
So it is technically less than 1ms each way. Is there any way to see in Airwave how it gathers/reports these metrics, or is this perhaps a server side issue?
I am going to run a wireshark capture on a device that joins our corporate SSID network to see how long between frames it takes to get a DHCP address.
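Once the capture is done, the frame-to-frame timing can be reduced to the gaps between the four DORA messages. A minimal sketch of that arithmetic, assuming you export (timestamp, DHCP message type) pairs from the capture — the helper name and the sample trace below are illustrative, not from a real trace:

```python
# Sketch: given (timestamp_seconds, dhcp_message_type) pairs exported from a
# capture, compute the gap between each step of the DHCP DORA handshake and
# the total Discover->ACK time. Sample data is made up for illustration.

def dora_deltas(events):
    """events: list of (timestamp_seconds, message_type) in capture order.
    Returns per-step gaps in milliseconds plus the Discover->ACK total."""
    order = ["DISCOVER", "OFFER", "REQUEST", "ACK"]
    times = {}
    for ts, msg in events:
        if msg in order and msg not in times:  # keep first occurrence of each step
            times[msg] = ts
    gaps = {}
    for prev, cur in zip(order, order[1:]):
        gaps[f"{prev}->{cur}"] = (times[cur] - times[prev]) * 1000.0
    gaps["TOTAL"] = (times["ACK"] - times["DISCOVER"]) * 1000.0
    return gaps

# Hypothetical trace where everything is fast except the final ACK:
trace = [
    (0.000, "DISCOVER"),
    (0.004, "OFFER"),
    (0.006, "REQUEST"),
    (0.850, "ACK"),
]
print(dora_deltas(trace))
```

If the REQUEST→ACK gap dominates while the earlier gaps are sub-millisecond, the delay is on the server side of the exchange rather than on the wire.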
Seems pretty speedy to me except for the ACK.
Since it does not appear to be our wired network causing the latency, I'll assume it is the server itself. Is there any way to drill into the metrics Airwave has to show that type of information, or would I have to do another packet capture on the NPS server port to verify how long each frame takes to ingress/egress?
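If you do capture on the NPS port, server turnaround falls out of pairing each RADIUS Access-Request with its reply by packet Identifier. A hedged sketch of that pairing, assuming tuples of (timestamp, RADIUS code name, identifier) pulled from the capture — the function name and sample data are hypothetical:

```python
# Sketch: pair RADIUS requests with their replies by packet Identifier and
# report per-exchange server turnaround in milliseconds. Code names follow
# RFC 2865; input data here is invented for illustration.

def radius_turnaround(packets):
    """packets: list of (timestamp_seconds, code, identifier) in capture order.
    Returns list of (identifier, milliseconds) for each matched exchange."""
    pending = {}
    results = []
    for ts, code, ident in packets:
        if code == "Access-Request":
            pending[ident] = ts
        elif code in ("Access-Accept", "Access-Reject", "Access-Challenge"):
            if ident in pending:
                results.append((ident, (ts - pending.pop(ident)) * 1000.0))
    return results

# Hypothetical exchange: the server takes ~1.9 s to answer one request.
sample = [
    (0.000, "Access-Request", 17),
    (1.900, "Access-Accept", 17),
]
print(radius_turnaround(sample))
```

Consistently large request-to-reply gaps at the server's own switchport would point squarely at NPS processing time rather than the network path.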
I am experiencing the same issues. How would you start troubleshooting this?
I attributed this to a known bug in Cisco switches that caused buffering issues. We noticed that several million errors were coming in on ports throughout our network, but when we investigated it was really just output drops/discards due to buffer space limitations. We recently upgraded our switches to 3.6.6E but found the bug was still present, so I'm hoping with the next release we can validate that this is not a switching issue. I never followed up with the packet capture step.
The best option to troubleshoot this with a packet capture imo (if you have Cisco switches like we do) is to run a monitor capture on the port leading to the server where you're seeing these issues. It runs in-band captures from the IOS CLI; you can then dump the capture to bootflash and copy it to a local machine for Wireshark review. Once you have that, you probably just need to create a view in Wireshark showing the deltas for "time since first packet" and "time since previous packet". That should show you exactly where the latency is occurring.
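Those two delta columns are straightforward to compute by hand as well. A minimal sketch, assuming a list of raw packet timestamps in seconds (names and data are illustrative):

```python
# Sketch: the two Wireshark-style delta columns computed from raw packet
# timestamps -- (time since first packet, time since previous packet),
# both in milliseconds. Sample timestamps are made up.

def delta_columns(timestamps):
    """timestamps: packet capture times in seconds, in order.
    Returns a list of (since_first_ms, since_previous_ms) per packet."""
    first = timestamps[0]
    prev = first
    out = []
    for ts in timestamps:
        out.append(((ts - first) * 1000.0, (ts - prev) * 1000.0))
        prev = ts
    return out

# Hypothetical three-packet capture: the big jump shows up in the
# since-previous column of the second packet.
print(delta_columns([0.0, 0.1, 0.15]))
```

Scanning the since-previous column for outliers pinpoints exactly which frame the latency accumulates behind.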