Wireless Access


Access network design for branch, remote, outdoor, and campus locations with HPE Aruba Networking access points and mobility controllers.

Aruba 8.3 vMM and vMD high CPU use.

  • 1.  Aruba 8.3 vMM and vMD high CPU use.

    Posted Oct 27, 2018 06:37 PM

    Is there a reason these Aruba VMs pull so much CPU even at idle?  I am trying to run a small lab but these VMs are hogs.  I have seen this on VMware, KVM, and Hyper-V.



  • 2.  RE: Aruba 8.3 vMM and vMD high CPU use.

    EMPLOYEE
    Posted Oct 28, 2018 07:49 AM

    We need more specifics about your setup: the hardware it is running on and what you have configured. I have a vMM running at 9% most of the time.



  • 3.  RE: Aruba 8.3 vMM and vMD high CPU use.

    Posted Oct 28, 2018 10:20 AM

    Sure, what do you need to know?

     

    Here are some screenshots. Still waiting on licenses so these VMs aren't doing anything...

    [Attachments: "With Aruba VMs.png" — CPU screenshots with the VMs off and with the VMs on]



  • 4.  RE: Aruba 8.3 vMM and vMD high CPU use.

    EMPLOYEE
    Posted Oct 29, 2018 03:51 AM

    From what I can interpret, the only "high" process I see is java at 92% when you are first booting in the last screenshot.  Everything else, at least from the screenshots, looks normal.



  • 5.  RE: Aruba 8.3 vMM and vMD high CPU use.

    Posted Feb 13, 2019 03:59 AM

    I'm seeing the same behaviour across different MMs and MCs, all taking ~1.2-1.5 GHz continuously. The environment is set up for lab purposes, and there are only 10 clients and 2 APs connected.

     

    Command "show processes sort-by cpu" always shows the same process taking a huge chunk of CPU:

     

    (VMC-241) #show processes sort-by cpu


    %CPU S PID PPID VSZ RSS F NI START TIME EIP CMD
    61.8 S 22238 22221 1677488 62196 4 0 Jan31 7-21:29:47 b199d46d /mswitch/bin/sos.shumway.elf -c 0xe -n 1 -b 0000:03:00.0 -- -M0xffff -s0x2 -f0x4 -p0x7 --config -D1 -d0x8 -C0x0 -t 0x800000100

     

    (MM-2-252) *[mynode] #show processes sort-by cpu


    %CPU S PID PPID VSZ RSS F NI START TIME EIP CMD
    39.1 S 5658 5641 1626792 21900 4 0 Jan31 4-23:34:24 eebbf46d /mswitch/bin/sos.opusone.elf -c 0x6 -n 1 -b 0000:03:00.0 -- -M0xffff -s0x2 -f0x4 -p0x3 --config -C0x0 -t 0x800000100

     

    Any ideas how to troubleshoot further?

     

    Running ESXi 6.5 and ArubaOS 8.4.0.0
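
    For what it's worth, the flags on that sos process (-c <coremask>, -n <memory channels>, -b <PCI address>) look like DPDK EAL options, and DPDK poll-mode drivers busy-poll the NIC, so some constant datapath CPU may be by design; the question is whether 60%+ is normal. One way to cross-check from the ESXi side whether the vCPUs are genuinely busy (a sketch; the sample counts and output path are arbitrary):

    # Capture host-side CPU stats in esxtop batch mode: 12 samples, 5 s apart
    esxtop -b -d 5 -n 12 > /tmp/esxtop-sample.csv

    # In the CSV (or in interactive esxtop's CPU view, key 'c'), compare the
    # VM's %USED against %RDY: high %USED with low %RDY means the guest really
    # is burning cycles, not just waiting on the host scheduler.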



  • 6.  RE: Aruba 8.3 vMM and vMD high CPU use.

    EMPLOYEE
    Posted Feb 21, 2019 04:19 PM

    You can open a TAC case, but I would ensure that:

     

    1. Promiscuous mode and forged transmits are enabled (a sketch of the ESXi commands follows this list)

    2. There is no vSwitch loop (i.e., no two of the VMC interfaces are on the same vSwitch/port group). 

    3. There's no loop external to the hypervisor. 
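
    For step 1 on ESXi, the security policy can also be set from the host shell. A minimal sketch, assuming a standard vSwitch; "vSwitch1" is a placeholder name, so substitute the vSwitch carrying the MM/VMC port groups:

    # Enable promiscuous mode and forged transmits on the vSwitch
    esxcli network vswitch standard policy security set \
        --vswitch-name=vSwitch1 \
        --allow-promiscuous=true \
        --allow-forged-transmits=true

    # Verify the policy took effect
    esxcli network vswitch standard policy security get --vswitch-name=vSwitch1

    # For step 2, list port groups to confirm no two controller interfaces
    # share the same vSwitch/port group
    esxcli network vswitch standard portgroup list

    Note that a port group's own security policy overrides the vSwitch-level setting, so check both if the behavior doesn't change.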

     

    Seeing the issues across MM and VMC all on the same hypervisor screams one of the above. You can go in and disable all but gig0/0/0, and ensure promiscuous mode and forged transmits are enabled everywhere the MM and VMC are. 

    But TAC can help run this to ground. I have VMCs serving my home network, and the largest CPU-utilization process is 4.6% (well within spec).



  • 7.  RE: Aruba 8.3 vMM and vMD high CPU use.

    Posted Feb 16, 2021 11:52 AM
    Do we know if this has turned out to be something more? It's now been 2 years, and running 8.5.0.11 I am seeing the exact same thing. I'm even on the phone with a TAC engineer about something different... lol, and he's looking into it.

    @jhoward, per the quick checklist you gave, the only thing that doesn't match what you outlined was shutting gi0/0/1-2 down... however, the rest falls into spec, and I am getting 63% on that process as well.

    (gtw-wlanmm01) [mynode] #show cpuload current


    top2 - 09:07:13 up 1:58, 0 users, load average: 2.44, 2.51, 2.51
    Tasks: 259 total, 2 running, 209 sleeping, 0 stopped, 1 zombie
    Cpu(s): 20.9%us, 24.1%sy, 0.1%ni, 54.7%id, 0.1%wa, 0.0%hi, 0.1%si, 0.0%st
    Mem: 5840800k total, 4859924k used, 980876k free, 55012k buffers
    Swap: 262140k total, 0k used, 262140k free, 2065032k cached

    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    5104 root 20 0 16.3g 24m 9716 S 62 0.4 74:52.84 sos.opusone.elf
    4120 root 20 0 43628 5632 3748 R 34 0.1 42:18.08 syslogd
    5087 root 20 0 39620 3684 3080 S 22 0.1 24:56.72 sbConsoled
    5402 root 20 0 0 0 0 I 7 0.0 2:04.96 kworker/u6:3


    (gtw-wlanmm01) [mynode] #show processes sort-by cpu


    %CPU S PID PPID VSZ RSS F NI START TIME EIP CMD
    63.2 S 5104 5087 17106248 24916 4 0 07:08 01:09:23 00000000 /mswitch/bin/sos.opusone.elf --legacy-mem -c 0x6 -n 1 -b 0000:02:01.0 -- -M0xffff -s0x2 -f0x4 -p0x3 --config -C0x0 -t 0x800000100
    35.6 R 4120 4116 43628 5632 4 0 07:08 00:39:07 00000000 /mswitch/bin/syslogd -x -r -n -m 0 -f /mswitch/conf/syslog.conf
    21.0 S 5087 1 39620 3684 4 0 07:08 00:23:03 00000000 /mswitch/bin/sbConsoled 0
    7.1 I 15207 2 0 0 1 0 08:56 00:00:08 00000000 [kworker/u6:1]
    6.5 I 5402 2 0 0 1 0 08:32 00:01:42 00000000 [kworker/u6:3]
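
    One thing that jumps out in that output is syslogd sitting at ~35% CPU, which makes me wonder if something is flooding the logs. A quick thing I'd check (a sketch; the line counts are arbitrary):

    (gtw-wlanmm01) [mynode] #show log system 50
    (gtw-wlanmm01) [mynode] #show log errorlog 30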



    Any further developments on this by chance?  *Asking for a friend*  ;)

    ------------------------------
    Neal Tarbox
    ------------------------------