Network Management


Replacement server for Airwave

  • 1.  Replacement server for Airwave

    Posted Jun 29, 2016 01:52 PM

    Hello fellow Airheads,

     

    I have a question regarding upgrading our AirWave server.  Our current server has melted down again.  Aruba TAC has worked on this server more times than I can count.  The database tables grow too large, nightly maintenance (vacuuming) fails, backups fail, disk space runs out, and the server grinds to a halt.  If it's not that, it's VisualRF being out of sync with the APs or the server not polling our controllers consistently.
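
    For reference, the pattern each time is the historical tables bloating until the data partition fills.  When it happens, a generic PostgreSQL check like the one below shows which tables are eating the space (AirWave's backend database is PostgreSQL; the database name "airwave" is my guess, so adjust as needed):

        # list the ten largest tables in the AirWave database (run as root on the AMP; database name assumed)
        su - postgres -c "psql airwave -c \"SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) FROM pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 10;\""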

     

    Currently, it is an HP BL460c G7 with two X5670 6-core processors, 192GB of memory, and 1TB of disk space on our HP storage array connected over Fibre Channel.  I use the server for monitoring and firmware upgrades (we are controller-based).  TAC tells me the server doesn't need to be replaced, but I no longer have the patience to continually babysit it.

     

    Reading through other posts, Mr. Gin has outlined some questions which I'll answer:

    1) number of monitored infrastructure devices (switches/controllers/APs) = 3400
    2) client behavior (office [same clients every day] vs hotspot/store [new unique clients daily]) = mixed school environment, mostly the same clients every day
    3) amount of clients at peak hours = 40K concurrent, 65K unique daily
    4) number of campus/buildings/floor plans in VisualRF = 1/133/250
    5) is there plan for network growth?  = Yes, but not significant

     

    I found the AirWave Server Sizing Calculator on the Solution Exchange, and it gave me:

    Your AirWave server recommended sizing:

    * Recommended CPU PassMark score is 31300

    * Recommended memory is 147GB
    * Recommended disk IOPS capacity is 8715
    * Recommended disk space is 1791GB

    This recommendation was altered by the following conditions:

    * High clients per AP is true. All requirements doubled.
    * High VisualRF demand is true. Increased memory requirements.
    * AMON environment is true. Increased CPU and memory requirements.
    * High SNMP trap environment is true. Increased disk IOPS and CPU requirements.
    * High mobility environment is true. Increased disk IOPS and storage requirements.

     

    I prefer blade servers over rack servers, but I'll go either way.  I can meet the recommendations with a newer BL460 blade server with two 12-core processors.  For storage, I'd like to map the server to our Nimble array, which would give me ~70K IOPS, but I haven't configured CentOS for iSCSI before.  If I could, though, I could snapshot the volume for backups instead of pulling backups off to an FTP server.  However, this adds complexity, and if the better way to go is a 2U rack-mount server filled with SSD/15K drives, I'm OK with that too.

     

    We've been told we're too small to consider two servers with a master console, and that there are some major architecture changes coming to Airwave this year.

     

    So my question is: which server form factor would you choose, and what storage strategy would you employ?  Is it possible to have AirWave running responsively, with heat maps coming up quickly and search results snappy?  I've been fighting AirWave performance since 2010 and am feeling beaten down.

     

    Thanks.

    Steve



  • 2.  RE: Replacement server for Airwave

    Posted Aug 01, 2016 07:46 PM

    I too have had some performance issues over the years.

    I'm running much leaner than you are, so your mileage may vary, but I've been very happy running our server in a VMware environment: it lets me snapshot before major changes, and I can reconfigure RAM or CPU to test for bottlenecks.

     

    I'm lucky to have a skilled server team behind me, with the storage and hypervisor side pretty well nailed down.

     

    If you don't have that kind of infrastructure, I'd suggest the blade approach with CentOS iSCSI, which I've been able to set up in the lab with no issues, though I can't speak to running it in production.
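
    Roughly, what I did in the lab looked like the steps below.  Treat it as a sketch rather than a recipe: the portal IP and target IQN here are placeholders for whatever your Nimble presents, and the array side (volume, initiator group, ACLs) still has to be configured separately.

        # install the iSCSI initiator tools
        yum install -y iscsi-initiator-utils
        # discover targets on the array (portal IP is a placeholder)
        iscsiadm -m discovery -t sendtargets -p 10.0.0.50
        # log in to the discovered target (IQN is a placeholder)
        iscsiadm -m node -T iqn.2007-11.com.nimblestorage:example-vol -p 10.0.0.50 --login
        # have the session re-establish on reboot
        iscsiadm -m node -T iqn.2007-11.com.nimblestorage:example-vol -p 10.0.0.50 --op update -n node.startup -v automatic

    Once logged in, the LUN shows up as an ordinary block device (lsblk or fdisk -l will list it), and you partition, format, and mount it like local disk.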