Hello,
I've done a few cluster live upgrades successfully and watched the entire process. One time, during a live upgrade of a cluster pair, the first controller to be upgraded did not come back due to a hardware failure. Even though it never came back, the live upgrade continued and rebooted the other controller, taking the site down. When that controller came back up on the upgraded software, all the APs re-joined, upgraded, and rebooted again.

My question is: how would you manually upgrade a cluster the same way the live upgrade does it? Below is the manual upgrade I performed in our test environment, where I had to disable clustering on one controller. During a live upgrade I never saw clustering being disabled on either controller.
Step 1 - Check Site AP groups
Step 2 - Move all APs to Controller 1
- apmove all target-v4 controller1-IP source-v4 controller2-IP
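Before moving on, it helps to confirm that every AP has actually re-homed. The commands below assume ArubaOS 8.x CLI; verify the exact syntax on your release:

```shell
# On Controller 1 (or the Mobility Conductor), confirm every AP has
# re-homed before touching Controller 2.
show ap database long    # the "Switch IP" column should show Controller 1
show ap active           # APs actively terminating on this controller
```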
Step 3 - On Controller 2, change the AP group's LMS to point to itself
- Controller 2 -> Configuration -> AP-group -> LMS = IP of Controller 2
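For reference, the same change can be made from the CLI. In ArubaOS the LMS IP lives in the AP system profile that the ap-group references; the profile and group names below are placeholders, not from my environment:

```shell
# CLI equivalent (ArubaOS 8.x); "site-sys-profile" and "site-group"
# are hypothetical names -- substitute your own.
configure terminal
ap system-profile "site-sys-profile"
  lms-ip <controller2-ip>
  exit
show ap-group "site-group"   # confirm which AP system profile the group uses
```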
Step 4 - Disable cluster on Controller 2
- Controller 2 -> Configuration -> Services -> Clusters -> cluster group-membership = set to none
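The WebUI step above should correspond to the following CLI on Controller 2, assuming ArubaOS 8.x cluster (lc-cluster) configuration; double-check on your release:

```shell
# Detach Controller 2 from the cluster group profile (ArubaOS 8.x).
configure terminal
no lc-cluster group-membership
exit
show lc-cluster group-membership   # should now report no cluster membership
```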
Step 5 - Upgrade Controller 2
- Log in via HTTPS directly to the controller IP
- Maintenance -> Upgrade -> Local File -> browse to the new OS image -> choose the unused partition -> do not reboot controller -> Upgrade
- Once the upgrade is complete, reboot the controller manually
- Verify that the controller is on the new code
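A CLI alternative to the WebUI upgrade, for ArubaOS 8.x. The FTP server, user, and image file name are placeholders; the partition number depends on which one is currently unused on your controller:

```shell
# Load the new image onto the unused partition, then reboot from it.
show image version                                   # note which partition is in use
copy ftp: <ftp-server> <user> <ArubaOS-image-file> system: partition 1
boot system partition 1                              # boot from the freshly loaded partition
reload                                               # the manual reboot from the step above
show image version                                   # after reboot: confirm the new code
```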
Step 6 - Wait for the controller to fully sync before moving APs
Step 7 - Move APs to Controller 2 until all the APs are upgraded
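This is the same apmove syntax as Step 2 with source and target swapped, so the APs land on the freshly upgraded Controller 2 and pull the new code:

```shell
# Reverse of Step 2: drain APs from Controller 1 onto Controller 2.
apmove all target-v4 <controller2-ip> source-v4 <controller1-ip>
show ap database long    # watch the APs flip to Controller 2 and re-bootstrap
```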
Step 8 - Upgrade Controller 1 to new code (Follow steps 5-6)
Step 9 - Enable cluster on Controller 2
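Re-enabling the cluster from the CLI on Controller 2 should look like the following (ArubaOS 8.x); the cluster group profile name is a placeholder for whatever your existing profile is called:

```shell
# Re-attach Controller 2 to the existing cluster group profile.
configure terminal
lc-cluster group-membership "site-cluster"   # hypothetical profile name
exit
show lc-cluster group-membership             # confirm the membership is back
```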
The steps above worked for me, but I'm wondering whether there is a way to do this manually the same way the live upgrade process does it.