I was watching this great video series (ArubaOS 8.9) from John. In this video: https://www.youtube.com/watch?v=ziYmGBu1VWM&t=32s
I am not able to understand why we are creating three different VRRP groups for CoA. Shouldn't there be just one? Otherwise, what is the benefit? He mentioned that it's for CoA from ClearPass, but if we are using a separate VRRP instance for each VMC, with each VMC acting as the master of its own VRRP group, then how is that different from using each VMC's own IP for CoA?
Hi, if all VMCs in the cluster are up, there is no difference. But if one of the cluster members fails, that source IP would no longer exist, and CoA requests would not reach the remaining cluster members. Using a VRRP VIP instead makes sure they still do, because another member takes over as master for that VIP. /Jochem
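The failover behaviour described above can be sketched in a few lines of Python. This is a toy model only; the class and all names (`Cluster`, `vmc1`, `vip-...`) are invented for illustration and are not AOS 8 APIs:

```python
# Toy illustration of why CoA should target a VRRP VIP instead of a
# node's own IP. All names here are hypothetical; this is not AOS 8 code.

class Cluster:
    def __init__(self, nodes):
        self.up = set(nodes)                       # node IPs currently alive
        # Each node is VRRP master for its own VIP.
        self.vip_owner = {"vip-" + n: n for n in nodes}

    def fail(self, node):
        self.up.discard(node)
        # VRRP failover: a surviving node takes over the dead master's VIP.
        for vip, owner in self.vip_owner.items():
            if owner not in self.up and self.up:
                self.vip_owner[vip] = next(iter(self.up))

    def coa(self, target):
        """Return whether a CoA request sent to `target` reaches a live node."""
        if target in self.up:
            return "delivered"
        owner = self.vip_owner.get(target)
        return "delivered" if owner in self.up else "lost"

cluster = Cluster(["vmc1", "vmc2", "vmc3"])
cluster.fail("vmc1")
print(cluster.coa("vmc1"))      # "lost": the node's own IP died with the node
print(cluster.coa("vip-vmc1"))  # "delivered": the VIP failed over
```

So with the VMC's own IP, CoA delivery depends on that one box being alive; with a per-node VIP, a surviving cluster member answers on the same address.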
Morning. Maybe some of our live config will help, and maybe you know all this already. There is a bit of a leap of faith I had to take here, and it may not help with your question about the logic. It strikes me that the way Aruba likes 'VRRP' config to work with respect to CoA isn't really what we think of as traditional VRRP with routers.
I struggled to get CoA working for our guest Wi-Fi for ages because I just couldn't join the dots with this VRRP-IP business; it doesn't make sense as VRRP. For some reason, even though the controllers are all on the same VLAN, and there's a cluster VRRP IP address (.11 in our case), the recommendation was/is to add a separate IP within the same VLAN (VLAN 628 in our case) to act as the IP address CoA uses for 'VRRP'. Somehow this puts a leg (interface) of the controller onto the VLAN for CoA to use. I think VRRP-IP is the wrong name to use, and it should be called CoA-IP, but what do I know!
I'm not sure why the VRRP-IP can't be the same as the controller's own IP, and I didn't test what happens if it is set the same (or whether the controller even allows it to be set the same without throwing an error).
Here's some output from one of our clusters. Note that the starting VRRP ID of '11' is what becomes the Virtual Router ID in the CLI; it gets assigned automatically and increments for each controller added to the virtual-router pool with a VRRP-IP address specified. I don't think this confusion of language helps with understanding the logic; these are terms that should be kept quite separate.
We keep our starting IDs (Virtual Router IDs) and their possible ranges distinct across clusters, and actually align them with the final octet of the cluster's LMS IP (in this case ID 11 matches LMS 10.25.112.11).
I never did find out the significance of the Group number (3 in our case below), as functionality appears to work the same regardless of what's entered for the group number.
Mobility Controller Cluster Group
Group is used to influence the assignment of the Standby anchoring for AP and User tunnels. As an example, you've two L2 high-speed connected datacenters, six AOS 8 Mobility Controllers, and want to create a single cluster with three controllers in each DC. For HA purposes, you'd rather not have the Active and Standby connections in the same DC as that could lead to connectivity impact in some outage situations. So in the cluster configuration you set controllers in one DC to 'group 1' and the other controllers in the other DC to 'group 2'. Now any AP or User tunnels will form their Standby connections to a controller in the group other than where the Active connection is.
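As a sketch, the two-DC layout described above might look like this in the cluster profile. The IP addresses and profile name are hypothetical, and the syntax is from memory of AOS 8, so verify it against the CLI reference for your release:

```
lc-cluster group-profile "dc-cluster"
 controller 10.1.1.1 group 1
 controller 10.1.1.2 group 1
 controller 10.1.1.3 group 1
 controller 10.2.1.1 group 2
 controller 10.2.1.2 group 2
 controller 10.2.1.3 group 2
```

With this in place, an AP or user whose Active tunnel lands on a `group 1` controller should form its Standby tunnel to a `group 2` controller, and vice versa.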
VRRP is primarily used for two purposes in an AOS 8 campus cluster:
If you set the cluster configuration to use a VRRP IP, the VIP will be activated once a cluster of at least two nodes is operating. All RADIUS requests for the purpose of authenticating connecting devices will automatically use the cluster VRRP IP as the NAS-ID in the RADIUS request. This informs the RADIUS server of the proper IP address to use for any Dynamic Authorization messages. Each cluster node will be master for its associated VRRP IP, with the backup role handled by one of the other cluster nodes. This is done to ensure that at least one active node in the cluster can act on the DynAuth request if the primary node has failed.
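A sketch of that cluster setting, with each controller given its own VRRP IP on a shared VRRP VLAN. The addresses follow the spirit of the earlier post but are hypothetical, and the AOS 8 syntax is from memory, so check it on your version:

```
lc-cluster group-profile "campus-cluster"
 controller 10.25.112.1 vrrp-ip 10.25.112.21 vrrp-vlan 628
 controller 10.25.112.2 vrrp-ip 10.25.112.22 vrrp-vlan 628
 controller 10.25.112.3 vrrp-ip 10.25.112.23 vrrp-vlan 628
```

CoA from ClearPass then targets the VIP of whichever node authenticated the client; if that node fails, its backup answers on the same VIP.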
Typical IP address usage in a cluster, based on how I prefer to set them up:
© Copyright 2023 Hewlett Packard Enterprise Development LP. All Rights Reserved.