I have two MDs in a cluster, and they share the load of my APs. Some APs terminate on MD1 with a standby tunnel to MD2, and vice versa.
If I reboot MD2, the APs that were terminating there fail over to MD1 - as designed. The AP doesn't go down; it just switches to its backup tunnel to its backup controller (as I understand it, at least). The AP doesn't reboot.
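To make my mental model of the failover concrete, here is a toy sketch: the AP holds an active (A-AAC) tunnel and a standby (S-AAC) tunnel, and on loss of the active it simply promotes the standby instead of rebooting. The class and method names are purely illustrative, not Aruba's implementation.

```python
# Toy model of cluster failover as I understand it. The AP promotes
# its standby tunnel on active loss; nothing here reboots, so uptime
# keeps counting. Names are hypothetical, not ArubaOS internals.
class AP:
    def __init__(self, active, standby):
        self.active = active        # A-AAC controller IP
        self.standby = standby      # S-AAC controller IP
        self.rebooted = False       # would only be True on a real reboot

    def on_active_lost(self):
        # Fail over: the standby becomes the new active tunnel.
        self.active, self.standby = self.standby, self.active

ap = AP(active="128.198.8.55", standby="128.198.8.53")
ap.on_active_lost()
print(ap.active, ap.rebooted)  # 128.198.8.53 False
```

If this model is right, nothing in the failover itself should ever look like a reboot to a monitoring system.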
However, AMP alerts me that the AP has rebooted (in fact, it alerts for every AP that was attached to the MD I rebooted). For example:
When I view the SYSTEM STATUS for one of these APs, I see the following under Cluster Failover Information:
Date Time Reason (Latest 10)
2020-01-03 12:00:10 Delete A-AAC:128.198.8.55, cluster enabled=1. fail-over to 128.198.8.53, sby status=1
2020-01-03 12:06:48 AP move to S-AAC fail-over to 128.198.8.55, sby status=1
There is nothing under reboots or bootstraps for today's date - only fail-over entries with today's date.
In AMP, if I view the Alerts and Events tab for the AP, I see this:
TIME USER EVENT
Fri Jan 3 12:15:53 2020 System Status changed to 'OK'
Fri Jan 3 12:15:53 2020 System Up
Fri Jan 3 12:08:13 2020 System Status changed to 'Controller is Down'
Fri Jan 3 12:08:13 2020 System Down
Fri Jan 3 12:08:13 2020 System Device Down: Device: lapl-515-W01: Device Type is Access Point (Warning)
Fri Jan 3 12:06:52 2020 System AP's associated controller is changed from mc1 to mc2
Fri Jan 3 12:04:45 2020 System AP's associated controller is changed from mc2 to mc1
Yet if I look at the AP's device detail page, it still shows an uptime of 52 days (not a time suggesting it just rebooted).
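The combination above (Down/Up events in AMP but an unchanged uptime on the AP) looks to me like a polling artifact: the AP likely stops answering AMP's polls for the duration of the tunnel switchover, long enough to cross whatever missed-poll threshold marks a device down. A minimal sketch of that idea, where the threshold and function names are hypothetical and not AMP's actual internals:

```python
# Illustrative only: a toy poller that declares "down" after N
# consecutive missed polls, showing how Down/Up events can appear
# even though the device never rebooted. Not AMP's real logic.
DOWN_THRESHOLD = 2  # hypothetical: consecutive missed polls before "down"

def classify(poll_results, threshold=DOWN_THRESHOLD):
    """poll_results: list of bools (True = AP answered the poll).
    Returns a list of (poll_index, event) state transitions."""
    events = []
    missed = 0
    state = "up"
    for i, ok in enumerate(poll_results):
        if ok:
            missed = 0
            if state == "down":
                state = "up"
                events.append((i, "System Up"))
        else:
            missed += 1
            if state == "up" and missed >= threshold:
                state = "down"
                events.append((i, "System Down"))
    return events

# During the failover window the AP misses a few polls, then answers
# again - without ever rebooting or resetting its uptime counter.
polls = [True, True, False, False, False, True, True]
print(classify(polls))  # [(3, 'System Down'), (5, 'System Up')]
```

If that is what is happening, the "System Down" events are really "unreachable during switchover," which would explain why the AP's own status page shows only fail-over entries.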
I guess my question is: why does AMP think the AP goes "down" and reboots when it is merely changing controllers? The result is that I get flooded with "down AP" email alerts for APs that never really went down, but only changed controllers.