Wireless Access


Access network design for branch, remote, outdoor, and campus locations with HPE Aruba Networking access points and mobility controllers.

Best practice design for HW MC backed up by VMC?

  • 1.  Best practice design for HW MC backed up by VMC?

    Posted Jun 02, 2019 05:46 AM

    Hi,

     

    I currently run a small lab with 2x VMMs in active/standby and a local 7010 hardware controller (servicing 4x APs). Given that I have some spare x86 compute capacity (and I like to tinker), I would like to introduce a VMC as a backup controller for the APs in case the 7010 goes down (e.g. maintenance/upgrade).

     

    What is the best-practice mechanism to achieve this? I am aware that clustering/VRRP is not supported between hardware and virtual controllers (hopefully that's on the roadmap?). I've been reading the manual but have not yet been able to reach a conclusion for this scenario (it seems two controllers of the same type is the recommended approach).

     

    For reference, I am running ArubaOS 8.5.0.0 on all devices (bleeding edge for the win).

     

    Cheers

    Clay



  • 2.  RE: Best practice design for HW MC backed up by VMC?

    EMPLOYEE
    Posted Jun 02, 2019 09:38 AM

    Have you tried pointing your access points at the VMC (or at a VRRP address shared between VMCs) using a backup LMS IP (bkup-lms-ip)?
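
    For example, a minimal sketch of what that could look like in an AP system profile on the MM (profile and group names are illustrative; the IPs stand in for your 7010 and VMC):

    ap system-profile "prod-aps"
       lms-ip 172.16.0.111
       bkup-lms-ip 172.16.0.112
    !
    ap-group "Prod"
       ap-system-profile "prod-aps"
    !

    APs in that group terminate on the lms-ip and should fail over to the bkup-lms-ip if the primary becomes unreachable.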



  • 3.  RE: Best practice design for HW MC backed up by VMC?

    Posted Jun 02, 2019 11:53 PM

    Given your available resources, and since you mentioned this is a lab network, here are some suggestions.

     

    You have a VMM pair using VRRP; that covers MM redundancy.

     

    Build 2x VMCs and configure them as a cluster, and play with clustering. It is "THE" recommended redundancy method in ArubaOS 8.
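
    A minimal sketch of the cluster side, configured from the MM (the cluster name and the second VMC's address are illustrative):

    lc-cluster group-profile "lab-cluster"
       controller 172.16.0.112
       controller 172.16.0.115
    !

    Then reference the profile from each VMC's device node with "lc-cluster group-membership lab-cluster" and verify with "show lc-cluster group-membership".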

     

    Configure the 7010 as a standalone and play with MultiZone.

     

    If you don't want to, or can't, run 2x VMCs, then you have to fall back on the older redundancy methods: VRRP between the MC and VMC, LMS/backup LMS, or HA. (You are correct that you cannot mix a VMC and a hardware MC in a cluster.)
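
    For the VRRP option, a minimal sketch (VLAN ID, virtual IP, and priority are illustrative); configure it on both controllers with different priorities and point the APs at the virtual IP as their LMS:

    vrrp 100
       vlan 100
       ip address 172.16.0.200
       priority 110
       preempt
       no shutdown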

     

    I hope this helps,

     



  • 4.  RE: Best practice design for HW MC backed up by VMC?

    Posted Jun 03, 2019 06:34 AM

    Thanks guys - I tried the backup LMS method, but for some reason the APs cannot connect to the VMC. I set up the following:

     

    "Prod" AP group - Master LMS=MC, Backup LMS=VMC

    "Lab" AP group - Master LMS=VMC, Backup LMS=MC

     

    The AP in the Lab AP group does not come up at all, and the Prod APs do not fail over if I kill the power to the MC.

     

    Checking the logs on the VMC, I see errors about IPsec. Another thread suggested enabling self-signed certificates, but that did not seem to help. Any ideas?


    Jun 3 20:27:31 isakmpd[5482]: <103103> <5482> <WARN> |ike| 172.16.0.24:4500-> IKE SA Deletion: IKE2_delSa peer:172.16.0.24:4500 id:3531498362 errcode:OK saflags:0x10000051 arflags:0x5
    Jun 3 20:27:31 isakmpd[5482]: <103103> <5482> <WARN> |ike| 172.16.0.24:4500-> IPSec SA Deletion: IPSEC_delSa SPI:24365900 OppSPI:178ef900 Dst:172.16.0.24 Src:172.16.0.112 flags:1001 dstPort:0 srcPort:0
    Jun 3 20:27:35 sapd[2628]: <311002> <WARN> |AP Attic-215-ex2200@172.16.0.24 sapd| Rebooting: Unable to set up IPSec tunnel to saved lms, Error:RC_ERROR_ISAKMP_N_CERT_ROOTCA_VERIFY_FAILED
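
    For reference, the "enable self-signed cert" suggestion from the other thread was, as far as I can tell, the low-assurance PKI knob, which I applied on the VMC's node like this:

    (ArubaMM-01) [md] (config) #crypto-local pki allow-low-assurance-devices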

     



  • 5.  RE: Best practice design for HW MC backed up by VMC?

    EMPLOYEE
    Posted Jun 03, 2019 07:11 AM

    What method did you use to add the VMC to the existing VMMs?



  • 6.  RE: Best practice design for HW MC backed up by VMC?

    Posted Jun 03, 2019 07:20 AM

    Full-setup, using a PSK; I then added the VMC's IP through the MM GUI.
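
    In other words, the pairing on the VMC boils down to this (sketch; PSK masked):

    (ArubaMC-02) [mynode] (config) #masterip 172.16.0.113 ipsec ******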

     

    Is that the info you were after?



  • 7.  RE: Best practice design for HW MC backed up by VMC?

    EMPLOYEE
    Posted Jun 03, 2019 07:24 AM

    Yes. 

     

    When you SSH into the MM and type "show switches", does the VMC appear?

     

    When the APs fail over, run "show log system 50" on the VMC to possibly get additional details.



  • 8.  RE: Best practice design for HW MC backed up by VMC?

    Posted Jun 03, 2019 07:44 AM

    Yes, all looks good from a controller point of view:

     

    (ArubaMM-01) [mynode] #show switches

    All Switches
    ------------
    IP Address    IPv6 Address  Name          Location          Type     Model       Version        Status  Configuration State  Config Sync Time (sec)  Config ID
    ----------    ------------  ----          --------          ----     -----       -------        ------  -------------------  ----------------------  ---------
    172.16.0.113 2001::1 ArubaMM-01 Building1.floor1 master ArubaMM-VA 8.5.0.0_70258 up UPDATE SUCCESSFUL 0 93
    172.16.0.111 None Aruba7010-01 Building1.floor1 MD Aruba7010 8.5.0.0_70258 up UPDATE SUCCESSFUL 0 93
    172.16.0.114 2001::1 ArubaMM-02 Building1.floor1 standby ArubaMM-VA 8.5.0.0_70258 up UPDATE SUCCESSFUL 0 93
    172.16.0.112 None ArubaMC-02 Building1.floor1 MD ArubaMC-VA 8.5.0.0_70258 up UPDATE SUCCESSFUL 10 93

    Total Switches:4
    (ArubaMM-01) [mynode] #

     

    Nothing too exciting in the log either - a lot of NTP-unreachable errors. I did set the NTP source to VLAN 100 (the same VLAN as the NTP server), but it did not seem to fix it.

     

    Jun 3 21:18:59 wms[6628]: <126005> <6628> <WARN> |wms| |ids| Interfering AP: The system classified an access point (BSSID d8:9d:67:ea:c6:1a and SSID HP-Print-1A-Officejet Pro 8600 on CHANNEL 1) as interfering. Additional Info: Detector-AP-Name:Bedroom-215; Detector-AP-MAC:40:e3:d6:a0:fb:c0; Detector-AP-Radio:2.
    Jun 3 21:19:03 ntpwrap: ntpdPollingTimer:535 Upstream servers not reachable via local interface.
    <snip>
    Jun 3 21:32:25 ntpwrap: ntpdPollingTimer:535 Upstream servers not reachable via local interface.
    Jun 3 21:32:39 wms[6628]: <126005> <6628> <WARN> |wms| |ids| Interfering AP: The system classified an access point (BSSID 34:af:2c:55:6c:20 and SSID on CHANNEL 157) as interfering. Additional Info: Detector-AP-Name:Lounge-215; Detector-AP-MAC:40:e3:d6:a0:fc:f0; Detector-AP-Radio:1.
    Jun 3 21:32:41 wms[6628]: <126005> <6628> <WARN> |wms| |ids| Interfering AP: The system classified an access point (BSSID 88:40:3b:fb:05:98 and SSID on CHANNEL 6) as interfering. Additional Info: Detector-AP-Name:Lounge-215; Detector-AP-MAC:40:e3:d6:a0:fc:e0; Detector-AP-Radio:2.
    Jun 3 21:32:56 ntpwrap: ntpdPollingTimer:535 Upstream servers not reachable via local interface.
    <snip>
    Jun 3 21:41:46 ntpwrap: ntpdPollingTimer:535 Upstream servers not reachable via local interface.
    Jun 3 21:41:54 cli[19345]: USER: admin has logged in from 172.16.0.1.
    Jun 3 21:41:54 sshd[19325]: <199801> <19325> <INFO> |sshd| Accepted password for admin from 172.16.0.1 port 13089 ssh2
    Jun 3 21:42:16 ntpwrap: ntpdPollingTimer:535 Upstream servers not reachable via local interface.
    (ArubaMM-01) [mynode] #



  • 9.  RE: Best practice design for HW MC backed up by VMC?

    Posted Jun 03, 2019 07:46 AM

    Sorry, you meant the log on the VMC - here it is:

     

    (ArubaMM-02) *[mynode] #show log all 50
    Jun 3 21:25:01 mongod.27017[6364]: Mon Jun 3 21:25:01.546 [conn96709] build index site1.radio_plan { _id: 1 }
    Jun 3 21:25:01 mongod.27017[6364]: Mon Jun 3 21:25:01.547 [conn96709] build index done. scanned 0 total records. 0 secs
    Jun 3 21:25:01 mongod.27017[6364]: Mon Jun 3 21:25:01.549 [conn96709] build index done. scanned 0 total records. 0 secs
    Jun 3 21:25:01 mongod.27017[6364]: Mon Jun 3 21:25:01.549 [conn96709] build index site1.radio_feasibility { _id: 1 }
    Jun 3 21:25:01 mongod.27017[6364]: Mon Jun 3 21:25:01.550 [conn96709] ^I backgroundIndexBuild dupsToDrop: 0
    Jun 3 21:25:01 mongod.27017[6364]: Mon Jun 3 21:25:01.550 [conn96709] build index done. scanned 6 total records. 0 secs
    Jun 3 21:25:01 mongod.27017[6364]: Mon Jun 3 21:25:01.550 [conn96709] build index site1.radio_feasibility { mac: 1 } background
    Jun 3 21:25:01 mongod.27017[6364]: Mon Jun 3 21:25:01.552 [conn96709] build index done. scanned 0 total records. 0 secs
    Jun 3 21:25:01 mongod.27017[6364]: Mon Jun 3 21:25:01.552 [conn96709] build index site1.reporting_radio { _id: 1 }
    Jun 3 21:25:01 mongod.27017[6364]: Mon Jun 3 21:25:01.553 [conn96709] ^I backgroundIndexBuild dupsToDrop: 0
    Jun 3 21:25:01 mongod.27017[6364]: Mon Jun 3 21:25:01.553 [conn96709] build index done. scanned 6 total records. 0 secs
    Jun 3 21:25:01 mongod.27017[6364]: Mon Jun 3 21:25:01.553 [conn96709] build index site1.reporting_radio { mac: 1 } background
    Jun 3 21:25:02 ntpwrap: ntpdPollingTimer:535 Upstream servers not reachable via local interface.
    <snip>
    Jun 3 21:44:43 ntpwrap: ntpdPollingTimer:535 Upstream servers not reachable via local interface.
    Jun 3 21:45:01 cli[32004]: USER: admin has logged in from 172.16.0.1.
    Jun 3 21:45:01 sshd[31944]: <199801> <31944> <INFO> |sshd| Accepted password for admin from 172.16.0.1 port 1082 ssh2
    (ArubaMM-02) *[mynode] #



  • 10.  RE: Best practice design for HW MC backed up by VMC?

    EMPLOYEE
    Posted Jun 04, 2019 12:01 PM

    Are you just using the global license pool, or did you create dedicated license pools on your MM?
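
    If you're unsure, something like the following on the MM should show what the VMC can draw licenses from (command names as I recall them; output elided):

    (ArubaMM-01) [mynode] #show license summary
    (ArubaMM-01) [mynode] #show license-pool-profile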