I have a strange situation with a controller cluster not using a defined mgmt-server profile.
We have an MM-to-MD setup with two separate clusters of MDs (2x 7010, 2x 7205). At the top of the tree we defined the mgmt-server configuration, and it pushes down to all four controllers:
(<hostname>) #show configuration effective | include mgmt-server
mgmt-server primary-server 10.20.7.10 profile default-acp
mgmt-server primary-server 10.20.58.35 profile default-amp
mgmt-server primary-server master profile default-controller transport udp
mgmt-server profile "default-acp"
mgmt-server profile "default-ale"
mgmt-server profile "default-amp"
mgmt-server profile "default-amp-backup"
mgmt-server profile "default-controller"
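One thing I haven't tried yet is confirming which node each mgmt-server line actually comes from on the bad cluster — if a more specific node is overriding the group-level line, that would explain the mismatch. On 8.x I believe something like this shows the originating node per line (exact syntax may vary by release):

```
(<bad controller>) #show configuration effective detail | include mgmt-server
```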
However, on one cluster (the 7010s), the controllers are not using the default-acp profile:
(<bad controller>) #show mgmt-servers
List of Management Servers
--------------------------
Primary Server Profile Transport-method Server Status
-------------- ------- ---------------- -------------
10.20.7.10 default-controller udp
10.20.58.35 default-amp udp
Num Rows:2
As a result, we're not seeing some stats (WLANs, clients). The other cluster is working as expected:
(<good controller>) #show mgmt-servers
List of Management Servers
--------------------------
Primary Server Profile Transport-method Server Status
-------------- ------- ---------------- -------------
10.20.7.10 default-acp udp
10.20.58.35 default-amp udp
Num Rows:2
The "bad" controller is up in the MM (the 10.20.7.10), configs show as synced and good, etc. It's just not using the correct profile.
Before I open a TAC case (or just change the default-controller profile to include the additional status items), any thoughts?
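My other fallback would be to force a re-push by removing and re-adding the mapping at the 7010s' node on the MM. Something along these lines — the node path here is purely illustrative, and the "no" form of the command is my guess at the syntax:

```
(MM) [mynode] (config) #change-config-node /md/7010-cluster
(MM) [/md/7010-cluster] (config) #no mgmt-server primary-server 10.20.7.10
(MM) [/md/7010-cluster] (config) #mgmt-server primary-server 10.20.7.10 profile default-acp
(MM) [/md/7010-cluster] (config) #write memory
```

Not sure whether that would just get overwritten again by the inherited config, though, which is why I'm asking first.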
Thanks!