Can anyone explain the effect of grouping managed devices in an LC cluster?
In the user guide the four controllers in a cluster are split into two groups, but I can't see any explanation of what effect this has.
I wonder whether it affects load balancing or redundancy in some way.
The parameter is an integer in the range 1-12; the value 0 means unset, for when you do not want to group the managed devices.
The following command adds the managed devices to a group profile:
(host) [md] (config)lc-cluster group-profile cluster6
(host) [md] (Classic Controller Cluster Profile "cluster6") controller 192.168.28.22 priority 128 mcast-vlan 0 vrrp-ip 0.0.0.0 vrrp-vlan 0 group 1
(host) [md] (Classic Controller Cluster Profile "cluster6") controller 192.168.28.23 priority 128 mcast-vlan 0 vrrp-ip 0.0.0.0 vrrp-vlan 0 group 1
(host) [md] (Classic Controller Cluster Profile "cluster6") controller 192.168.28.24 priority 128 mcast-vlan 0 vrrp-ip 0.0.0.0 vrrp-vlan 0 group 2
(host) [md] (Classic Controller Cluster Profile "cluster6") controller 192.168.28.26 priority 128 mcast-vlan 0 vrrp-ip 0.0.0.0 vrrp-vlan 0 group 2
It allows you to split a cluster into groups so that load balancing is done within those groups instead of across the whole cluster. If you have two controllers in one rack and two controllers in another rack, you can group them so that if the two groups of controllers stop talking to each other, they still function independently as individual clusters.
It is basically for people who want to micro-manage a cluster into two clusters. In my opinion it seems to be a bad idea except in a minority of situations. Why? Because it requires a very good understanding of clustering, and most people don't have that. It is also yet another type of behavior to keep track of (and calculate) when you are depending on many access points to be up 100% of the time.
When you see a parameter that is not explained in detail, typically it is a knob for a limited use case, like this one is.
My understanding of cluster groups is different (I may be wrong; it won't be the first time, just don't tell my wife). Let's say that you have 4 cluster members: MC1, MC2, MC3, and MC4. And let's say that MC1 and MC2 are in the West closet and MC3 and MC4 are in the East closet.
West closet: MC1, MC2
East closet: MC3, MC4
When the cluster leader assigns the AP anchor controller (AAC) and standby AP anchor controller (S-AAC), it is possible for both the AAC and the S-AAC to land on MC1 and MC2, i.e. in the same closet. The same logic applies to the user anchor controller (UAC) and standby user anchor controller (S-UAC).
If the AAC and S-AAC are both pointing to one closet and that closet goes down, you will have a problem: the AP will not be able to fail over to its S-AAC, since both the AAC and S-AAC MCs are down. Again, this would be a similar problem for the users if the UAC and S-UAC are in the same closet and both MCs are down.
By putting MC1 and MC2 in one cluster group (which is literally just assigning a group number to the two MCs in the cluster profile definition) and putting MC3 and MC4 in a different cluster group, the AAC and S-AAC are assigned to controllers in different cluster groups (again, the same concept applies to the UAC and S-UAC). In this situation, if one of the closets goes down, any AAC or UAC assignments in that closet will have their standby assignments (S-AAC or S-UAC) on a controller in the other group, i.e. the MCs in the other closet, which is still up and running.
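The idea above can be sketched in a few lines of Python. This is only an illustration of the group-aware standby placement described in this thread, not Aruba's actual election algorithm; the `Member` class, the round-robin active selection, and the `pick_active_standby` helper are all assumptions made for the example.

```python
# Sketch only: models the claim that, with cluster groups configured,
# the standby (S-AAC / S-UAC) is preferred from a DIFFERENT group
# than the active (AAC / UAC), so one closet failure cannot take
# out both. Not Aruba's real algorithm.
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    group: int  # cluster group number (0 = ungrouped)

def pick_active_standby(members, load_index):
    """Pick an active member round-robin, then prefer a standby
    from a different cluster group when one exists."""
    active = members[load_index % len(members)]
    other_group = [m for m in members
                   if m is not active and m.group != active.group]
    same_group = [m for m in members
                  if m is not active and m.group == active.group]
    standby = (other_group or same_group)[0]
    return active, standby

cluster = [Member("MC1", 1), Member("MC2", 1),   # West closet
           Member("MC3", 2), Member("MC4", 2)]   # East closet

for i in range(len(cluster)):
    active, standby = pick_active_standby(cluster, i)
    # the standby always lands in the other closet
    assert active.group != standby.group
```

With only one group (or group 0 everywhere), `other_group` is empty and the standby falls back to the same group, which mirrors the ungrouped behavior where both anchors can end up in the same closet.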
I hope this helps,
Thanks, that is exactly what I see in our cluster. The active APs and clients in group 1 have standby assignments in group 2 and vice versa.
This is my understanding of the feature as well. Make sure your APs and clients are connected to different fault domains within your datacenter.
And I'm pretty sure customers who think about fault domains will understand this configuration.
I hope I'm not hijacking this thread. I have a scenario about which I am in two minds after reading this solution. I was also wondering what the group number was used for.
Using the East/West setup with 2 MDs per side: it is desired that all East APs terminate on East MDs, and likewise all West APs on West MDs. If a site fails, e.g. East, then all East APs fail over to the West MDs; when the site recovers, the APs return to their desired termination.
I was thinking of 2 clusters and have 2 AP groups (East AP Group and West AP Group). In each AP group, I configure the LMS and Backup LMS to be the respective VRRP IP, e.g. East AP Group LMS IP = East MDs VRRP and Backup LMS IP = West MDs VRRP with pre-emption enabled.
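The design described above would look roughly like the following in the AP system profile. This is a sketch only: the profile name and IP addresses are placeholders, and the exact prompt and syntax should be verified against your ArubaOS version (here 10.1.1.10 stands for the East cluster VRRP and 10.2.1.10 for the West cluster VRRP).

(host) [md] (config) #ap system-profile East-AP-Group
(host) [md] (AP system profile "East-AP-Group") #lms-ip 10.1.1.10
(host) [md] (AP system profile "East-AP-Group") #bkup-lms-ip 10.2.1.10
(host) [md] (AP system profile "East-AP-Group") #lms-preemption

The West AP group would mirror this with the two VRRP IPs swapped, and `lms-preemption` is what moves the APs back to their preferred site once it recovers.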
Would the above work, or am I better off having a single cluster and using the group number to separate the MDs?
Thanks for your advice.
If your East and West sites are geographically separate (different RF domains), I would recommend the two-cluster design, since we cannot control which controller is the AAC for a given AP in a single-cluster design.
Leveraging the LMS and bkup-LMS as you described will work to create the redundancy for your APs in case one of the clusters fails.
As makariosm stated, if you need roaming between the groups of APs, it should be a single cluster. If you do not need roaming between them, or don't have the ability to roam between them, then there is no need for a single cluster. If you decide on separate clusters and the APs fail over from one cluster to the other, you need to make sure that the MCs in the new cluster can support the additional number of APs, along with all the licenses and VLANs those APs require.
Thanks very much, David. Understood.
Thanks for your reply. Yes, East West are geographically separated. Got it. Thanks much.