12-07-2011 08:29 AM
I'm currently using AirWave to monitor 95 devices, including VisualRF and RAPIDS. AirWave is running in a virtual environment, and we originally assigned it 4 GB of memory. I noticed it had become sluggish and was consuming all 4 GB. We have now assigned AirWave 6 GB of memory based on the AWMS sizing guide; however, it is now consuming all 6 GB. It is incredibly slow and some pages won't load. Has anyone else experienced this, or have any suggestions on what may be going on?
12-07-2011 08:33 AM
I had to move to a physical box, though I am monitoring several hundred devices. I did start with a VM.
How is the disk I/O for this VM? Please take a look at that. Also, I'd push it up to 8 GB of RAM if you can. Is the VM running on 15K SAS or FC disks? It really needs that I/O speed.
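If it helps, here's a quick way to check disk I/O from the AMP command line. This is just a sketch; it assumes the sysstat package is installed on the server, which it may not be by default:

    # sample extended per-device disk stats every 5 seconds, 3 samples
    iostat -dx 5 3

Watch the %util column (near 100% means the disk is saturated) and await (high values mean requests are queuing behind slow storage).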
12-07-2011 08:59 AM
How does swap usage look? AirWave and Linux try to take advantage of all of the available RAM in order to cache things in memory, so as long as you're not dipping into swap space too much, you should be fine.
Do you have VMware Tools installed?
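You can check both from the CLI. A quick sketch (the VMware Tools init-script path is an assumption and may vary by release):

    # total vs. used memory and swap, in MB
    free -m
    # per-device swap usage summary
    swapon -s
    # is the VMware Tools service running?
    /etc/init.d/vmware-tools status

If the swap "used" column stays small, the high RAM usage is mostly cache and nothing to worry about.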
12-07-2011 09:00 AM
In AirWave, go to System -> Performance.
How much swap space do you have? (total & used)
Take a look at the graphs there. Is anything else peaking?
Also, which version of AirWave are you running?
12-07-2011 09:17 AM
A couple of things:
1. On the AMP Setup -> General page, you can configure the number of monitoring processes. Each of these processes will use a decent amount of memory, so it's important not to have too many when you're bumping up against a RAM limitation. For your size of deployment, I recommend setting that to 1.
2. According to our latest (7.4) hardware sizing guide, the minimum RAM for monitoring 100 devices is 6 GB, and the recommended amount is 8 GB. Increasing RAM could help too; more is essentially always better.
04-19-2012 11:39 AM
I had this problem just today. The AirWave GUI was very slow and client information was not accurate. RAM and swap were both pegged. I called support, and they found a hung process that was holding onto over 80% of my memory. The process was Daemon::TupleScheduler. The support engineer made some adjustments and the problem seems to be solved. I'm not sure exactly what adjustments he made; it looked like just stopping and restarting the services and the database. I had rebooted the server prior to calling support, but that did not fix the issue.
Log into the CLI as root and run top -c to see if any processes are holding onto a bunch of memory.
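For example (standard Linux commands, nothing AirWave-specific):

    # interactive view with full command lines; press Shift+M to sort by memory
    top -c
    # or a one-shot list of the top memory consumers
    ps aux --sort=-%mem | head -n 10

If one process (like that TupleScheduler daemon) is holding most of the RAM, that's the one to ask support about.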