Monitoring, Management & Location Tracking


Articles relating to existing and legacy HPE Aruba Networking products and solutions including AirWave, Meridian Apps, ALE, Central / HPE Aruba Networking Central, and UXI / HPE Aruba Networking User Experience Insight

Tips for improving disk I/O performance and scheduling

Jul 01, 2014 05:42 AM

[NOTE: See below for a copy of this article in PDF format.]

Beginning with AirWave 7.0, support for polling of switches was expanded and we overhauled the way user data is stored. These changes have had a significant impact on disk I/O on some servers.

In AirWave 7.1 we made several improvements to increase the efficiency of disk I/O. Outside of that, there are several areas under your control where disk I/O can be improved.

CHECKING I/O PERFORMANCE

On the System > Performance page there are three graphs that relate to disk performance:

- System Disk IOPs
- System Disk Outstanding I/O Requests
- System Disk Utilization

If disk utilization is frequently approaching 100%, there are several things you can do to reduce the load on the hard drive system.
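The same counters that feed these graphs can also be sampled from the command line via the standard kernel interface /proc/diskstats (a generic sketch, not an AirWave-specific tool). Field 12 is the number of I/Os currently in flight (outstanding requests) and field 13 is the cumulative milliseconds spent doing I/O; two samples taken N seconds apart let you estimate utilization as the delta of field 13 divided by N * 1000.

```shell
# Print raw per-disk counters: completed reads (field 4), completed writes
# (field 8), I/Os in flight (field 12), and ms spent doing I/O (field 13).
awk '$3 ~ /^[sh]d[a-z]$/ {
    printf "%s: reads=%s writes=%s in_flight=%s io_ms=%s\n", $3, $4, $8, $12, $13
}' /proc/diskstats
```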

DISK DRIVE SPINDLE SPEED

The AirWave server is I/O intensive. For this reason we always require drives with 15,000 RPM spindle speeds, and on systems monitoring more than 500 devices we require multiple drives configured in a RAID-10 array. Please refer to the AWMS Hardware Sizing Guide on the Support Downloads page for details:

http://www.airwave.com/download

To see how to check the spindle speed of your server's drives follow the link below to a Knowledge Base article on the topic:
http://kb.airwave.com/?sid=50140000000aF3U

To verify the configuration of your hard drives, run the following command from the AirWave command line:

# df -h

If the total size of all partitions does not equal the combined capacity of the drives on the system then your drives may not be configured correctly.
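As a cross-check (a sketch using the standard kernel interface /proc/partitions, not an AirWave tool), you can list the whole-drive devices the kernel sees along with their raw sizes; these totals should match the combined capacity of the physical drives, while df -h shows only the mounted partitions:

```shell
# /proc/partitions lists sizes in 1K blocks; whole drives match names like
# sda or hdb, while partitions carry a trailing number (sda1, sda2, ...).
# NR > 2 skips the two header lines.
awk 'NR > 2 && $4 ~ /^[sh]d[a-z]$/ { printf "%s: %d GB\n", $4, $3 / (1024 * 1024) }' /proc/partitions
```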

INTERFACE POLLING

If the server is monitoring a large number of switches, interface polling periods can be increased. You can find these in the Routers and Switches section on the Groups > Basic page for group(s) containing routers and switches. Disk I/O may improve if you back off the polling periods. The default periods are as follows:

- Interface Up/Down Polling Period: 10 minutes
- Interface Bandwidth Polling Period: 15 minutes
- Interface Error Counter Polling Period: 30 minutes

Consider also disabling the following if they are enabled:

- Poll 802.3 error counters
- Poll Cisco interface error counters

CDP-RELATED POLLING

If your network contains no Cisco IOS APs, or if it does and you've completed discovery of these devices, you should consider disabling CDP-related polling. On the Groups > Basic page under the SNMP Polling Periods section, disable the "CDP Neighbor Data Polling Period", and under the Routers and Switches section, disable "Read CDP Table for Device Discovery".

I/O SCHEDULING

Another way you might be able to improve disk I/O performance is to change the I/O scheduling algorithm that's being used. Both the OS and the disk controller are capable of I/O scheduling. But when multiple disks are combined logically (via striping or a RAID setup), the OS cannot distinguish one physical disk in the array from another; it sees only a single logical disk. The controller, on the other hand, is still aware of the individual disks and can potentially organize I/O more efficiently.

By default the OS is set up to use the Completely Fair Queuing (CFQ) algorithm. Selecting the NOOP algorithm, which simply inserts all I/O requests into a FIFO queue (essentially disabling scheduling by the OS), defers scheduling to the disk controller.

(For more information on the various scheduling algorithms refer to the links at the bottom of this article.)

CHANGING THE I/O SCHEDULING ALGORITHM

First check to see which I/O scheduling algorithm is currently selected:

# cat /sys/block/<BLOCK_DEVICE>/queue/scheduler

Ex.

# cat /sys/block/sda/queue/scheduler

(For more information on BLOCK_DEVICE designations refer to the end of this article.)


The output will look something like this:

noop anticipatory deadline [cfq]

The algorithm in brackets [] is the one currently selected.
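If the server has several drives, a short loop over the standard sysfs paths shows the currently selected (bracketed) scheduler for every block device at once (a generic sketch, not an AirWave command):

```shell
# Print the active scheduler line for each block device that exposes one
# via sysfs; devices without a scheduler file are skipped.
for q in /sys/block/*/queue/scheduler; do
    [ -r "$q" ] || continue
    dev=${q#/sys/block/}
    printf '%s: %s\n' "${dev%%/*}" "$(cat "$q")"
done
```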


Change the algorithm to NOOP:

# echo noop > /sys/block/<BLOCK_DEVICE>/queue/scheduler

Ex.

# echo noop > /sys/block/sda/queue/scheduler

Then read the scheduler file again to confirm that NOOP is now the bracketed (selected) algorithm:

# cat /sys/block/sda/queue/scheduler
[noop] anticipatory deadline cfq

That's it.
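Note that a value written under /sys does not survive a reboot. On the CentOS/RHEL releases AirWave ran on at the time, the scheduler could also be set at boot by appending elevator=noop to the kernel line in /boot/grub/grub.conf (a sketch; the kernel version and root device placeholders below must match your system):

```
kernel /vmlinuz-<VERSION> ro root=<ROOT_DEVICE> elevator=noop
```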
-----------------------------------------
ADDITIONAL INFORMATION

For more information on I/O scheduling:

http://en.wikipedia.org/wiki/I/O_scheduling
http://en.wikipedia.org/wiki/CFQ
http://en.wikipedia.org/wiki/Category:Disk_scheduling_algorithms


The following describes the BLOCK_DEVICE designations:

sdXN
hdXN

Ex: sda1

sd : SCSI, SAS, SATA
hd : IDE

X - drive designation

For IDE (hd) drives, the letter maps to the controller position:

hda : primary master
hdb : primary slave
hdc : secondary master
hdd : secondary slave

For sd drives, the letters (sda, sdb, ...) are assigned in the order the kernel detects the drives.

N - partition number

Ex: sda1 - the first partition on drive sda

Primary partitions are generally assigned numbers 1-4.
Logical partitions are generally assigned numbers 5 and up.

sd can also be used for kernel-level emulation of SCSI devices (e.g., USB storage devices, CD-RW drives).

Under the Linux 2.6 kernel, a laptop hard drive is typically registered under the /dev/sda device file. This is just the file Linux uses to communicate with the drive; nothing else about how you use Linux changes, you simply type /dev/sda where you would previously have typed /dev/hda.



Attachment: 00003328-1.pdf (238 KB), uploaded Dec 23, 2021
