Hi,
on the first controller (the one with poor health):
(lzarubamc1) *#show lc-cluster group-membership
Cluster Enabled, Profile Name = "lzcluster"
Redundancy Mode On
Active Client Rebalance Threshold = 50%
Standby Client Rebalance Threshold = 75%
Unbalance Threshold = 5%
AP Load Balancing: Enabled
Active AP Rebalance Threshold = 20%
Active AP Unbalance Threshold = 5%
Active AP Rebalance AP Count = 50
Active AP Rebalance Timer = 1 minutes
Cluster Info Table
------------------
Type IPv4 Address    Priority Connection-Type STATUS
---- --------------- -------- --------------- ------
peer 192.168.96.53   128      L3-Connected    CONNECTED (Member, last HBT_RSP 65ms ago, RTD = 0.508 ms)
self 192.168.96.52   128      N/A             CONNECTED (Leader)
and on the second controller:
(lzarubamc2) *#show lc-cluster group-membership
Cluster Enabled, Profile Name = "lzcluster"
Redundancy Mode On
Active Client Rebalance Threshold = 50%
Standby Client Rebalance Threshold = 75%
Unbalance Threshold = 5%
AP Load Balancing: Enabled
Active AP Rebalance Threshold = 20%
Active AP Unbalance Threshold = 5%
Active AP Rebalance AP Count = 50
Active AP Rebalance Timer = 1 minutes
Cluster Info Table
------------------
Type IPv4 Address    Priority Connection-Type STATUS
---- --------------- -------- --------------- ------
self 192.168.96.53   128      N/A             CONNECTED (Member)
peer 192.168.96.52   128      L3-Connected    CONNECTED (Leader, last HBT_RSP 39ms ago, RTD = 0.000 ms)
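For anyone digging further: since both peers show up as CONNECTED but L3-Connected, the next things worth checking are probably the cluster heartbeats and the cluster VLAN probe. A rough sketch of the commands I mean (both should exist on 8.6, but double-check on your build):
(lzarubamc1) *#show lc-cluster heartbeat counters
(lzarubamc1) *#show lc-cluster vlan-probe status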
Original Message:
Sent: Oct 15, 2022 07:22 PM
From: Colin Joseph
Subject: MC 7210 poor health
Go on the command line of one controller and type "show lc-cluster group-membership" and see if it sees the other controller.
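In a healthy two-node cluster, both the self and peer entries should show CONNECTED, roughly like this (hostname and IPs below are placeholders, not from a real setup):
(controller) *#show lc-cluster group-membership
...
Type IPv4 Address    Priority Connection-Type STATUS
---- --------------- -------- --------------- ------
self 10.1.1.1        128      N/A             CONNECTED (Leader)
peer 10.1.1.2        128      L2-Connected    CONNECTED (Member, last HBT_RSP 50ms ago, RTD = 0.100 ms)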
------------------------------
Any opinions expressed here are solely my own and not necessarily that of Hewlett Packard Enterprise or Aruba Networks.
HPE Design and Deploy Guides: https://community.arubanetworks.com/support/migrated-knowledge-base?attachments=&communitykey=dcc83c62-1a3a-4dd8-94dc-92968ea6fff1&pageindex=0&pagesize=12&search=&sort=most_recent&viewtype=card
Original Message:
Sent: Oct 15, 2022 05:22 PM
From: Dirk Rzimski
Subject: MC 7210 poor health
hi gents,
I am running a two-node cluster of 7210s on 8.6.0.17 with 250 access points.
Each controller is connected with one interface to a data center switch.
But one controller shows poor health, saying: "Controller has at least one uplink with poor health".
Looking at the switch interface, everything looks fine from my point of view.
How is poor health for an uplink defined? What needs to happen on an interface for it to be rated as poor?
I just updated from 8.6.0.9 to 8.6.0.17, but nothing changed.
Looking at the Mobility Conductor dashboard, it says no clients are connected to the controller with poor health, while 177 are connected to the one with good health. Is that normal, or are clients being steered to the good controller because the other one is in poor health?
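For completeness, this is roughly how I checked the uplink from the controller side as well; the port name 0/0/0 is just a placeholder for whatever your uplink port is:
(controller) *#show port status
(controller) *#show interface gigabitethernet 0/0/0
The interface output includes error and drop counters, which is where I would expect an unhealthy uplink to show up first.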