Our architecture:
Four M3 controllers (two in each of two Aruba 6000 chassis); each chassis is in a separate data center.
One pair is active as MASTER-BACKUP.
The pair of local controllers also acts as MASTER-BACKUP.
This gives us one chassis as the primary and the second chassis as our DR backup.
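For context, a master-backup pair like this is typically configured with VRRP in ArubaOS 6.x along these lines. This is a hedged sketch, not our actual config: the VRRP ID, VLAN, IP address, and priority below are all placeholders.

```
! Hypothetical ArubaOS 6.x config on the master controller.
! VRRP ID 1, VLAN 1, the virtual IP, and priority 110 are placeholders.
vrrp 1
  vlan 1
  ip address 10.0.1.10
  priority 110
  preempt
  no shutdown
```

The backup controller would carry the same virtual IP with a lower priority, so it takes over the APs if the master goes down.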
Given that architecture, here's a quick note about what I experienced when upgrading from 6.1.3.4 to 6.1.3.7 in our environment.
1) On the first attempt, I upgraded partition 0 on all four controllers and rebooted the two backups immediately.
2) Once they came back online, I rebooted the two primary controllers so the APs would fail over to the upgraded backups.
3) Once all four controllers came back online, a small subset of APs stayed offline and still had not returned after about 40 minutes.
4) I rolled back all four controllers to 6.1.3.4 and all of my APs came back.
5) I then read the "Known Issues and Limitations" section of the release notes and saw that too much traffic on the VRRP VLAN can cause exactly this.
6) On the second attempt, I set the boot partition back to partition 0 and reloaded three of the four controllers immediately. Once they came back online, I reloaded the remaining controller.
7) This time, all of my APs came back almost immediately.
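The second attempt can be sketched as an ArubaOS 6.x CLI session. This is a hedged outline, not a transcript from our controllers: the TFTP server address and image filename are placeholders.

```
! Hypothetical ArubaOS 6.x session; 10.0.0.5 and the image name are placeholders.
(controller) # copy tftp: 10.0.0.5 ArubaOS_6.1.3.7 system: partition 0
(controller) # boot system partition 0
(controller) # reload
! Run the reload on three of the four controllers at once, then wait
! for them to come back online before touching the last one.
(controller) # show ap database
! Confirm the APs have rejoined before the final step.
(controller) # reload
! Finally, reload the remaining controller.
```

Reloading three controllers together, then the last one alone, is what kept the VRRP VLAN quiet enough for the stragglers to rejoin.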
My lesson learned: given the nature of our architecture, reloading three of the four controllers and waiting for them to come back online reduced the traffic on the VRRP VLAN enough that the APs could pull their configurations when I reloaded the final controller.
Granted, this worked, but it was less than seamless and is not a recommended upgrade method during production hours.