Do we know if this ever turned out to be something more? It's been 2 years now, and running 8.5.0.11 I am seeing the exact same thing. I happen to be on the phone with a TAC engineer about something different... lol, and he's looking into it.
@jhoward, going through your quick checklist, the only thing that doesn't match what you outlined is shutting down gi0/0/1-2... the rest falls into spec, and I am getting 63% proc on this as well.
(gtw-wlanmm01) [mynode] #show cpuload current
top2 - 09:07:13 up 1:58,  0 users,  load average: 2.44, 2.51, 2.51
Tasks: 259 total,   2 running, 209 sleeping,   0 stopped,   1 zombie
Cpu(s): 20.9%us, 24.1%sy,  0.1%ni, 54.7%id,  0.1%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   5840800k total,  4859924k used,   980876k free,    55012k buffers
Swap:   262140k total,        0k used,   262140k free,  2065032k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 5104 root      20   0 16.3g  24m 9716 S   62  0.4  74:52.84 sos.opusone.elf
 4120 root      20   0 43628 5632 3748 R   34  0.1  42:18.08 syslogd
 5087 root      20   0 39620 3684 3080 S   22  0.1  24:56.72 sbConsoled
 5402 root      20   0     0    0    0 I    7  0.0   2:04.96 kworker/u6:3
(gtw-wlanmm01) [mynode] #show processes sort-by cpu
%CPU S   PID  PPID      VSZ   RSS F NI START     TIME      EIP CMD
63.2 S  5104  5087 17106248 24916 4  0 07:08 01:09:23 00000000 /mswitch/bin/sos.opusone.elf --legacy-mem -c 0x6 -n 1 -b 0000:02:01.0 -- -M0xffff -s0x2 -f0x4 -p0x3 --config -C0x0 -t 0x800000100
35.6 R  4120  4116    43628  5632 4  0 07:08 00:39:07 00000000 /mswitch/bin/syslogd -x -r -n -m 0 -f /mswitch/conf/syslog.conf
21.0 S  5087     1    39620  3684 4  0 07:08 00:23:03 00000000 /mswitch/bin/sbConsoled 0
 7.1 I 15207     2        0     0 1  0 08:56 00:00:08 00000000 [kworker/u6:1]
 6.5 I  5402     2        0     0 1  0 08:32 00:01:42 00000000 [kworker/u6:3]
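For anyone watching this over time, output like the above is easy to eyeball-check with a small script. This is just a hypothetical helper I threw together (not an Aruba tool); the column positions are assumed from the paste above:

```python
# Parse "show processes sort-by cpu"-style output and flag anything
# whose %CPU exceeds a threshold. Column layout assumed from my paste:
# %CPU S PID PPID VSZ RSS F NI START TIME EIP CMD  -> CMD is field 12.

SAMPLE = """\
%CPU S PID PPID VSZ RSS F NI START TIME EIP CMD
63.2 S 5104 5087 17106248 24916 4 0 07:08 01:09:23 00000000 /mswitch/bin/sos.opusone.elf
35.6 R 4120 4116 43628 5632 4 0 07:08 00:39:07 00000000 /mswitch/bin/syslogd
21.0 S 5087 1 39620 3684 4 0 07:08 00:23:03 00000000 /mswitch/bin/sbConsoled 0
"""

def high_cpu(text, threshold=30.0):
    """Return (cpu, command) pairs for processes above the %CPU threshold."""
    hits = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 12:
            continue
        try:
            cpu = float(fields[0])
        except ValueError:
            continue  # skip the header row
        if cpu > threshold:
            hits.append((cpu, fields[11]))  # field 12 is the command path
    return hits

if __name__ == "__main__":
    for cpu, cmd in high_cpu(SAMPLE):
        print(f"{cpu:5.1f}%  {cmd}")
```

On my box this flags sos.opusone.elf and syslogd, which lines up with what top2 shows.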
Any further developments on this by chance? *Asking for a friend* ;)
------------------------------
Neal Tarbox
------------------------------
Original Message:
Sent: Feb 21, 2019 04:19 PM
From: Jerrod Howard
Subject: Aruba 8.3 vMM and vMD high CPU use.
You can open a TAC case, but I would ensure that:
1. Promiscuous mode and forged transmits are enabled.
2. There is no vSwitch loop (i.e., you do not have any of the VMC interfaces on the same vswitch/port group).
3. There's no loop external to the hypervisor.
Seeing the issue across both the MM and the VMC, all on the same hypervisor, screams one of the above. You can go in and disable all interfaces except gig0/0/0, and ensure promiscuous mode and forged transmits are enabled everywhere the MM and VMC are.
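If you're on ESXi, items 1 and 2 can be checked from the host shell with standard esxcli commands. vSwitch0 below is just a placeholder; substitute whatever vSwitch actually carries the MM/VMC vNICs:

```shell
# Show the current security policy on the vSwitch hosting the MM/VMC vNICs
esxcli network vswitch standard policy security get -v vSwitch0

# Enable promiscuous mode and forged transmits at the vSwitch level
esxcli network vswitch standard policy security set -v vSwitch0 \
    --allow-promiscuous=true --allow-forged-transmits=true

# List vSwitches and their port groups to spot two VMC interfaces
# landing on the same port group (the vSwitch-loop case in item 2)
esxcli network vswitch standard list
```

Note that a port group can override the vSwitch-level policy, so check any port-group overrides too.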
But TAC can help run this to ground. I have VMCs serving my home network, and the largest CPU-utilization process is 4.6% (well within spec).