08-14-2013 09:10 AM
The AP seems to sit with a status of "Provisioning Requested".
We've tried purging the flash, deleting the AP from the database, and removing the AP's whitelist certificate, then letting it re-provision. When it boots up, the entry reappears in the AP database and the certificate gets added to the whitelist automatically (the subnet the AP booted on has autocert provisioning enabled), but the AP still sits in "Provisioning Requested". A provisioning profile should set the "master" provisioning setting to "aruba-master.net.cam.ac.uk", but this doesn't happen, so I don't think the AP ever completes provisioning.
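For reference, the steps above were done roughly like this. The MAC address is a placeholder, and the controller commands are ArubaOS 6.x syntax from memory, so check them against your version:

```
# At the AP's console, interrupt boot to reach the apboot prompt, then:
apboot> purge          # clear the AP's saved environment/configuration
apboot> boot

# On the controller:
(controller) # clear gap-db wired-mac aa:bb:cc:dd:ee:ff
(controller) # whitelist-db cpsec del mac-address aa:bb:cc:dd:ee:ff
```

The first controller command removes the AP from the AP database; the second deletes its CPsec whitelist entry so the certificate can be re-issued on the next boot.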
My suspicion is that the AP is somehow faulty, but it doesn't print any errors on the console or the controller.
Can anyone advise of any further steps to take? Other APs on site have been configured with no problems.
Thanks in advance.
08-22-2013 06:18 AM
You might pursue a TAC case for it. If other APs can start from scratch without issue but this one won't, see whether TAC knows anything else; if not, get it replaced.
Sr. Technical Marketing Engineer
11-08-2013 06:44 AM
This query was actually from me - I don't know why it was recorded as "RobDaBank"; maybe I wasn't signed in when I wrote it.
Anyway -- the issue is now resolved, in that we can rescue APs in this state. Initially Aruba RMA'd the first AP we had like this, but we then had three more end up the same way.
After a fairly lengthy online support session with an Aruba engineer poking at the AP and controllers remotely, they ascertained that the AP was jumping between two controllers, and that this was what caused the "Provisioning Requested" state. Apparently there is a separate configuration area inside the AP that you can't get at through the bootloader; it had got out of step with the main area (which you CAN see), and that was causing the AP to jump back and forth. This second area is NOT cleared by a 'purge' or 'factory_reset'.
[I'm not entirely clear on the details or exactly what the situation is, but that's how the engineer described it to me.]
The solution was to create a blank configuration for the faulty AP (no VAPs, provisioning profile, LMS IP, etc.), which causes the contents of the "secret" area of the AP to take precedence and the AP to come up with that configuration. The engineer did this in a slightly different way from me, but I did it by creating an empty AP group with no local settings, then setting the "ap-group" boot variable to that group, saving, and resetting.
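Concretely, my version of the workaround looked something like the following. The group name is hypothetical, and the exact boot-variable name may vary between apboot versions, so check with `printenv` first:

```
! On the controller: create an empty AP group with nothing configured in it
(controller) (config) # ap-group blank-rescue
(controller) (AP group "blank-rescue") # exit

# On the AP's console, interrupt boot to reach the apboot prompt:
apboot> setenv ap-group blank-rescue
apboot> saveenv
apboot> reset
```

With nothing in the group, the AP boots using whatever is in the hidden configuration area, which gets the two areas back in step.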
After booting up, the AP returned to the group I had been trying to configure it into before all this started (which I thought I had purged from the AP by blanking it). From then on, the AP ran normally and I could reconfigure it as required.
A couple of these APs were IAPs converted to controller-managed. A factory reset took them back to being IAPs, but they remained broken after being reconverted. This procedure fixed them too.
I feel this state is probably a bug, but I'm not particularly bothered as it doesn't happen routinely -- all of the affected APs failed as part of a conversion from RAPs or IAPs to CAPs, I believe because the CPsec whitelist wasn't in a valid state at the point of conversion.
If the conversion is done too quickly after adding or modifying an entry, the whitelist doesn't have a chance to synchronise across all the controllers. Alternatively, it appears that a whitelist entry set to 'approved-ready-for-cert' reverts to 'unapproved-factory-cert' after a few hours. Trying to convert APs when things aren't all tickety-boo seems to sometimes leave them in this state.
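So before converting, it seems worth checking that the whitelist entry is in the right state on every controller. Something like this should do it (ArubaOS 6.x syntax from memory, placeholder MAC; verify against your version's CLI reference):

```
(controller) # show whitelist-db cpsec
# Check the AP's entry on EACH controller: it should read approved-ready-for-cert
# (or certified-factory-cert once certified), not unapproved-factory-cert.

# If the entry has reverted, re-approve it shortly before the conversion:
(controller) # whitelist-db cpsec modify mac-address aa:bb:cc:dd:ee:ff state approved-ready-for-cert
```

Then allow the whitelist a little time to synchronise across the controllers before kicking off the RAP/IAP-to-CAP conversion.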