I forget where I got this from, sorry I can't reference/give credit :)
See if the below answers your questions:
- Before doing this, make sure you have backups of your controller configs just in case.
- Verify you can SSH to the controllers directly and log in with the admin account, or via RADIUS/TACACS+ if you've configured that.
- You will need to update ClearPass or whatever your TACACS+/RADIUS server is ahead of time with the new IP address for the Controllers.
- For ClearPass this is under Configuration->Network->Devices. Don't forget to add the new VRRP-IP address you plan to use under the Services->Clusters in step 12d below.
- I would also recommend having console access if possible.
- You can add both of the controllers new IPs to the Mobility Master/Conductor ahead of time. See step 1 below
- Confirm the controllers are L2-connected ahead of time, either by SSHing directly to the controllers or by SSHing to the Mobility Conductor, issuing "logon <ip address>", and then running "show lc-cluster group-membership". It should display l2-connected.
- I would plan for this to be an outage. The APs are going to end up rebooting as you pull the controllers out of the cluster, create a new cluster, and add them back in. Discussed in step 11 below.
- If changing subnets, you will also need to update your ip default-gateway. This is taken care of in step 5 below.
- Update the controller discovery mechanism used by the APs with the new IP address if you are using DNS or DHCP for discovery. Be sure to add the IP address of both controllers.
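For illustration only, the DNS side of that update could look like the zone snippet below. This assumes the APs use the default "aruba-master" lookup; the addresses are placeholders, so adjust the record name and IPs to your environment:
aruba-master    IN    A    10.1.20.10    ; new IP, controller 1 (placeholder)
aruba-master    IN    A    10.1.20.11    ; new IP, controller 2 (placeholder)
Leave the old records in place until the cutover completes, then remove them. If you use DHCP-based discovery instead, the equivalent change is updating the controller IP handed out in your vendor option for the ArubaAP vendor class.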
- In Mobility Master/Conductor, add each controller's new IP to the list of controllers, with an IPsec key. You can add all of them in advance.
- Log into the UI, go to the Mobility Master/Conductor hierarchy, (The highest one) and click Controllers->Plus sign in the bottom left->
- Leave Authentication as IPsec Key, type in new IP address, type in the IPsec Key and Retype IPsec key (this is an arbitrary key, you will match it on the controllers in step 4)->submit
- To begin this change, first remove the first controller from the cluster.
- Remove first controller from the cluster configuration at the Controller level of the hierarchy
- Click on the Controller name in the hierarchy on the left, then Configuration->Services->Clusters
- Set Cluster group-membership to None
- Submit, then Pending Changes, then Deploy Changes
- Then delete from the group subfolder level
- Click on the group folder in the hierarchy on the left, then Configuration->Services->Clusters
- Select the Cluster Name
- In the Basic window that opens below, select the controller you just removed above and then click the trash can icon to delete it
- Submit, then Pending Changes, then Deploy Changes
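For reference, the cluster removal in the two UI steps above can also be sketched from the Mobility Conductor CLI. The node path, profile name, and IP below are placeholders, and you should verify the exact syntax against your ArubaOS version:
(conductor) #cd /md/MyGroup/<controller-name>
(conductor) [<node>] #configure t
(conductor) [<node>] (config) #no lc-cluster group-membership <cluster-profile-name>
(conductor) [<node>] (config) #cd /md/MyGroup
(conductor) [MyGroup] (config) #lc-cluster group-profile <cluster-profile-name>
(conductor) [MyGroup] (Classic Controller Cluster Profile "<cluster-profile-name>") #no controller <old-controller-ip>
Committing from the CLI still requires a write memory at the end, same as the UI's Deploy Changes.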
- Second, change the controller-IP VLAN for the first controller
- SSH to mobility master/conductor
- cd <name of the controller> ("cd ?" may help identify the name of the controller)
- "config t"
- "controller-ip vlan <new vlan id>", hit y, don't write mem yet
- Third, update the masterip (Mobility Master/Conductor IP address) with the new source interface
- while still in the same CLI context as above, "masterip <mobility-conductor ip> ipsec <ipsec key> interface vlan <new vlan id>", type y, don't write mem yet
- (Optional: if you are changing the IP address to a new subnet) Change the default gateway to the new subnet
- while still in the same CLI context as above, "no ip default-gateway <old gateway address>"
- "ip default-gateway <new gateway address>" don't write mem yet
- (Optional) Change Hostname of controller if desired
- while in the same context as above, "hostname <new hostname>"
- Verify config changes before committing them
- "show configuration pending"
- "write mem" and "exit" out of config t (write mem will trigger a reboot within several seconds to a few minutes, be patient)
- Set up pings to the old and new IP addresses, and go to the console of the controller if available. Wait until the controller is back up. This will take between 5 and 10 minutes.
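Putting steps 2 through 5 together, the whole CLI session might look roughly like the sketch below. All IPs, VLAN IDs, the IPsec key, and the node path are placeholders for your own values:
(conductor) #cd /md/MyGroup/<controller-name>
(conductor) [<node>] #configure t
(conductor) [<node>] (config) #controller-ip vlan 20
(conductor) [<node>] (config) #masterip 10.1.20.5 ipsec <ipsec-key> interface vlan 20
(conductor) [<node>] (config) #no ip default-gateway 10.1.100.1
(conductor) [<node>] (config) #ip default-gateway 10.1.20.1
(conductor) [<node>] (config) #show configuration pending
(conductor) [<node>] (config) #write memory
The write memory at the end is what commits everything and triggers the reboot.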
- Verify it's using the new IP address and that the IPsec communication with the Mobility Conductor is working
- Via console, SSH, or mdconnect/logon to the new IP address of the controller, issue the following commands to verify the configuration took:
- "show switches" - it may initially show UPDATE REQUIRED. If everything is working, this should eventually transition to UPDATE SUCCESSFUL after several seconds to a few minutes; if it doesn't, jump to 10.b.iii or after step 14
- "show controller-ip" or "show run | inc controller-ip"
- "show run | inc masterip"
- on the Mobility Conductor CLI
- "exit" out of config t if you haven't already
- "show switches"
- if it's still showing the old IP and down:
- Log into the UI, go to the Mobility Master/Conductor hierarchy (the highest one) and click Controllers->new IP address->re-enter the IPsec Key and Retype IPsec Key->Submit
- Wait about a minute or two and then do the "show switches" again in the CLI. It should show up with the new IP address and the Configuration State should finally transition to UPDATE SUCCESSFUL like the others
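For illustration, a healthy result might resemble the truncated output below (the exact columns vary by ArubaOS version; the IP and name are placeholders):
(conductor) #show switches
IP Address    Name    Type    Version    Status    Configuration State
----------    ----    ----    -------    ------    -------------------
10.1.20.10    MC1     MD      8.x.x.x    up        UPDATE SUCCESSFUL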
- Once "show switches" on the Mobility Conductor shows the new IP and UPDATE SUCCESSFUL, you can move on to adding the controller to the cluster. You will likely have to create a new cluster, especially if you are using (or plan to use) VRRP in the clustering context for RADIUS communication with a NAC. This is because VRRP requires both controllers to be in the same VLAN; since you are changing VLANs, one controller will be in the new VLAN and the other in the old VLAN until both are moved.
- Create new Cluster
- In the UI, click the group sub-folder under Managed Networks where the controller resides
- Click Services; the first page that comes up will be the Clusters tab. Click the plus sign in the bottom left of the Clusters box
- Give the new cluster a name, then hit the plus sign in the bottom left of the Controllers box
- Select the new IP address, select the group, add your new VRRP-IP and the new VLAN, hit ok, submit, then click Pending Changes and Deploy Changes
- Add the new controller to the new Cluster Profile
- Click the controller under the group hierarchy on the left-hand side; it should take you straight to Services->Clusters->Cluster Profile
- Select the new Cluster Profile in the drop down menu
- Submit, Pending Changes, Deploy Changes
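Equivalently, the cluster creation and membership steps above can be sketched from the Mobility Conductor CLI: define the cluster profile at the group level, then reference it on each device node. The names, IPs, and VLAN below are placeholders; confirm the available options on the controller line for your ArubaOS version:
(conductor) [MyGroup] (config) #lc-cluster group-profile NewCluster
(conductor) [MyGroup] (Classic Controller Cluster Profile "NewCluster") #controller 10.1.20.10 vrrp-ip 10.1.20.100 vrrp-vlan 20 group 1
(conductor) [MyGroup] (Classic Controller Cluster Profile "NewCluster") #controller 10.1.20.11 vrrp-ip 10.1.20.100 vrrp-vlan 20 group 1
Then on each controller's node:
(conductor) [<node>] (config) #lc-cluster group-membership NewCluster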
- Repeat the steps above for the second controller. Don't forget to move the console connection!
If you run into issues where one of the controllers gets rolled back to its original config, but the Mobility Master/Conductor still has the new config, you can fix it on the controller from the CLI with the following:
!!!!!DON'T DO ANYTHING ELSE IN DISASTER-RECOVERY MODE. THIS IS A GOOD WAY TO BREAK MOBILITY MASTER/CONDUCTOR!!!!!!
(MC2) #disaster-recovery on
Entering disaster recovery mode
(DR-Mode) [mm] #configure t
Enter Configuration commands, one per line. End with CNTL/Z
(DR-Mode) [mm] (config) #controller-ip vlan 2
Error: interface VLAN 2 has static ip address configured at group level /mm
(DR-Mode) [mm] (config) #cd /mm/mynode
(DR-Mode) [mynode] (config) #controller-ip vlan 2
This configuration change requires a reboot. Do you want to continue[y/n]:y
Mobility Master will be rebooted on write memory.
(DR-Mode) ^[mynode] (config) #cd /mm
(DR-Mode) ^[mm] (config) #masterip 192.168.2.5 ipsec Aruba123 interface vlan 2
Change in the masterip configuration requires device to reload. Make sure the modified configuration ensures connectivity to the Master. Do you want to continue [y/n]: y
(DR-Mode) ^[mm] (config) #no ip default-gateway 192.168.100.1
(DR-Mode) ^[mm] (config) #ip default-gateway 192.168.2.1
(DR-Mode) ^[mm] (config) #exit
(DR-Mode) ^[mm] #write mem
(DR-Mode) ^[mm] #
[22:16:20]:Shutdown processing started
Because we rebooted, we didn't need to turn disaster recovery off. If you don't reboot, you can turn DR off with the command "disaster-recovery off".
Lead Mobility Engineer @WEI
ACCX 1271| ACMX 509| ACSP | ACDA | MVP Guru 2021
If my post was useful accept solution and/or give kudos
Sent: Jan 07, 2022 02:36 PM
From: Michael Naylor
Subject: changing controller IP addresses
We are in the midst of an IP re-address here on campus and I've come to the wireless controllers.
We have a setup with one MM and three MDs, using CPPM for auth and RADIUS.
Also the MDs are clustered with a VIP which we use for aruba-master.
I'm assuming that I can add another IP address to each controller ahead of time, make sure they are NADs in CPPM, and that my services will work with the new IP.
When cut-over time happens, how do I utilize those new IPs? Do I need to tell the MM, or will the MDs contact it?
Is there an HPE doc on this?
Trying to cover all my bases for a smooth transition.