Security


Forum to discuss Enterprise security using HPE Aruba Networking NAC solutions (ClearPass), Introspect, VIA, 360 Security Exchange, Extensions, and Policy Enforcement Firewall (PEF).

ClearPass Disk Performance

  • 1.  ClearPass Disk Performance

    Posted Dec 11, 2019 07:34 AM

    Wake up, Aruba & HPE developers, and tell us the truth!

     

    I have read a lot of topics about problems with ClearPass disk throughput and performance. There is still no sensible resolution, so I decided to do a deep analysis myself.

     

    For the tests, I oversized our ClearPass 25K VM appliance:

     

    32 cores (2x XEON E5-2660v2 - 40 cores on host)

    96 GB RAM (128GB on host)

    Local storage: 8x HPE SAS2 SSD or 8x HP 10K SAS2 HDD

    Network storage: 3PAR with 24x 15K SAS2 HDD

     

    I tested multiple scenarios on KVM and Hyper-V, with RAID 6 or RAID 10. All tests were on dedicated hardware running only the ClearPass VM.

     

    Even with SSD storage it's not possible to pass the ClearPass benchmark (show system-resources). That is what I don't understand: on this SSD storage I can achieve over 3500 MB/s sequential read throughput, but in ClearPass I have problems getting over 700 MB/s. Why is that?
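    For reference, raw sequential-read numbers like the 3500 MB/s above are normally taken with large block sizes; a minimal sketch of that kind of measurement (file path and sizes are arbitrary, shrunk here so it finishes quickly):

```shell
# Write a 64 MB test file, then read it back sequentially with 1 MB blocks.
# Note: without iflag=direct the read may be served from the page cache,
# so a freshly written (warm) file will overstate real disk throughput.
dd if=/dev/zero of=/var/tmp/read_test bs=1M count=64 conv=fdatasync 2>/var/tmp/dd_write.log
dd if=/var/tmp/read_test of=/dev/null bs=1M 2>/var/tmp/dd_read.log
tail -n1 /var/tmp/dd_read.log   # dd prints "<N> bytes ... copied, <t> s, <rate>"
rm -f /var/tmp/read_test
```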

     

    The problem is in the benchmark itself (diskio.py). It tests the disk with sequential reads/writes of 1k blocks:

     

    dd if=/dev/zero of=/tmp/io_test bs=1k count=256k conv=fdatasync

     

    But working with 1k blocks is very slow; this test performs badly even on other systems with NVMe disks. Is there any reason why the ClearPass benchmark is that strict? The ClearPass filesystem uses 4k blocks.
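    The block-size effect described above is easy to reproduce: run the same fdatasync'd write with 1 KB blocks and then with 1 MB blocks and compare the rates dd reports. A sketch (paths are arbitrary, total size shrunk to 32 MB so it finishes quickly):

```shell
# Write the same 32 MB twice: once as 32k writes of 1 KB, once as 32 writes of 1 MB.
# conv=fdatasync flushes data to stable storage before dd reports its rate, as
# diskio.py does; the 1 KB run issues ~1000x more write() syscalls for the same
# amount of data and typically reports a noticeably lower throughput.
dd if=/dev/zero of=/var/tmp/bs_1k bs=1k count=32k conv=fdatasync 2>/var/tmp/bs_1k.log
dd if=/dev/zero of=/var/tmp/bs_1M bs=1M count=32 conv=fdatasync 2>/var/tmp/bs_1M.log
tail -n1 /var/tmp/bs_1k.log   # throughput with 1 KB blocks
tail -n1 /var/tmp/bs_1M.log   # throughput with 1 MB blocks
rm -f /var/tmp/bs_1k /var/tmp/bs_1M
```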

     

    Another thing is that ClearPass does not support virtio_scsi drivers on KVM. TAC told me to use IDE because the developer team told them that ClearPass has problems with virtio_scsi. That is absolutely unacceptable; IDE is old and slow.
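    For context, on a generic KVM guest the difference between the two is just the disk bus in the libvirt domain definition; a sketch of the two variants (image path and format are placeholders, and per this thread ClearPass itself reportedly only works reliably with the IDE variant):

```xml
<!-- IDE bus: what TAC recommends for ClearPass per bug CP-33018 -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/clearpass.qcow2'/>
  <target dev='hda' bus='ide'/>
</disk>

<!-- virtio-scsi: the faster paravirtual bus, reportedly problematic with ClearPass -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/libvirt/images/clearpass.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>
```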

     

    Can you please tell me why you are building and selling it in this state?

    I usually do a lot of testing before I let my product out into the world.


  • 2.  RE: ClearPass Disk Performance

    EMPLOYEE
    Posted Dec 11, 2019 10:46 AM
    Just for clarification, are you actually having performance issues with your CPPM environment or are these comments based solely on the system test framework?

    You mentioned for one of the issues you asked TAC. Is there a TAC case for all of this?


  • 3.  RE: ClearPass Disk Performance

    Posted Dec 11, 2019 01:57 PM

    We have had some performance issues since the installation last year, but now it is escalating as our network grows. From time to time, ClearPass stops pushing data to the external server via the Generic HTTP method. Querying endpoints or sessions in the WebUI is much slower than before (from 20 seconds to minutes). Once a night we download all sessions from the previous day; it takes over 3 hours to complete (filtering about 70K sessions out of 1 million).

     

    I have already tried tuning the configuration: connection limits, cache, housekeeping, etc. Nothing helps.

     

    The last TAC case I created, half a year ago, was resolved by official bug CP-33018 (use IDE instead of SCSI).

     

    I am considering whether to create another TAC case; most of my recent cases were just a waste of time.

     



  • 4.  RE: ClearPass Disk Performance

    EMPLOYEE
    Posted Dec 12, 2019 09:18 AM

    Please open a TAC case, and share the numbers of the recent cases you had a bad experience with (in an Airheads personal message).

     

    If your deployment is sized according to the Sizing and Ordering Guide (here), and your servers are sized according to the Installing ClearPass on a VM Tech Note, there should not be performance issues.

     

    By far most of the performance-related cases come down to not following these two guides, or to issues in the actual deployed hardware not benchmarking to what was projected, for example when the VM is placed on a shared server where the remaining capacity is not what was assumed. And even then, there should be some margin before you actually see it in performance.

     

    From your detailed research, I see that you at least scaled the appliance to (and beyond) the specifications. If you have checked the scaling and sizing and believe your application is within those specifications as well, please open that TAC case and have support find out why there is still a performance issue. That might be the disk or driver, but it may be something else as well. As far as I know, a C3000V on KVM is tested for every release, also under load, so your experience does not match quality testing, and TAC can find out what the differences are.

     

    As always, if you feel at any point that TAC does not respond in a timely fashion or at the right technical level of understanding, please have the case escalated; it will be reviewed, placed with the right team, and assigned the appropriate priority.