Wake up, Aruba & HPE developers, and tell us the truth!!!
I have read a lot of topics about problems with ClearPass disk throughput & performance. There is still no sensible resolution, so I decided to do a deep analysis myself.
For the tests, I oversized our ClearPass 25K VM appliance:
32 cores (2x XEON E5-2660v2 - 40 cores on host)
96 GB RAM (128GB on host)
Local storage: 8x HPE SAS2 SSD or 8x HP 10K SAS2 HDD
Network storage: 3PAR with 24x 15K SAS2 HDD
I tested multiple scenarios on KVM and Hyper-V, with RAID 6 and RAID 10. All tests ran on dedicated hardware with only the ClearPass VM.
Even with SSD storage it's not possible to pass the ClearPass benchmark (show system-resources). That is what I don't understand: on this SSD storage I can achieve over 3500 MB/s sequential read throughput, but inside ClearPass I struggle to get over 700 MB/s. Why is that?
The problem is in the benchmark itself, diskio.py. It tests the disk with sequential reads/writes in 1k blocks:
dd if=/dev/zero of=/tmp/io_test bs=1k count=256k conv=fdatasync
but working with 1k blocks is very slow; this test performs badly even on other systems with NVMe disks. Is there any reason why the ClearPass benchmark is that strict? The ClearPass filesystem uses 4k blocks.
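To see how much the 1k block size alone costs, you can run the same kind of dd write twice, moving the same amount of data with small and with large blocks. A minimal sketch; the /tmp paths and the 16 MiB size are illustrative, not what diskio.py uses:

```shell
# Sketch: same data volume, two block sizes. With bs=1k, dd issues one
# write() syscall per KiB, so syscall overhead (not the disk) dominates.

# 16 MiB written in 1 KiB blocks -- many small syscalls, low throughput
dd if=/dev/zero of=/tmp/io_test_1k bs=1k count=16384 conv=fdatasync

# the same 16 MiB written in 1 MiB blocks -- far fewer syscalls
dd if=/dev/zero of=/tmp/io_test_1m bs=1M count=16 conv=fdatasync
```

On most systems the 1 MiB run reports several times the throughput of the 1k run on identical storage, which is consistent with fast SSDs still "failing" a 1k-block benchmark.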
Another thing is that ClearPass does not support virtio_scsi drivers on KVM. TAC told me to use IDE, because the developer team told them that ClearPass has problems with virtio_scsi. That is absolutely unacceptable; IDE is old and slow.
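For anyone who wants to check what their KVM guest is actually using: the disk bus is chosen per-disk in the libvirt domain XML. A sketch of the two variants; the image path and device names are illustrative:

```xml
<!-- What TAC recommends: legacy IDE bus -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/clearpass.qcow2'/>
  <target dev='hda' bus='ide'/>
</disk>

<!-- What a modern Linux guest would normally use: virtio-scsi -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/clearpass.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

The IDE bus emulates a legacy controller in software, while virtio-scsi is paravirtualized, which is exactly why being forced onto IDE hurts disk performance.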
Can you please tell us why you are building and selling such a bad thing?
I usually do a lot of testing before I let my product out into the world.