System Performance under heavy I/O load

Joseph Salisbury josephtsalisbury at gmail.com
Mon Jun 21 14:32:51 UTC 2010


On Mon, Jun 21, 2010 at 8:04 AM, Dr. Nils Jungclaus <
Nils.Jungclaus at perfact.de> wrote:

>  Hi,
>
> I am using 8.04 on several (well-equipped) servers and experience the
> following problem on all of them:
>
> When doing larger I/O jobs like backups, I always get very poor interactive
> response from the system. Interactive in this case means the response time
> of database requests, web application requests and even interactive tools
> like top. The usual setup looks like this:
>
> - postgres DB as database backend
> - apache as loadbalancer and certificate handler
> - several parallel zope instances using zeo
> - sometimes more things like vmware-server, samba, postfix
>
> When I start a backup (via network using rsync, locally to another HD using
> rsync, or to a USB-attached external drive), I get lots of processes stuck
> in uninterruptible sleep (D state) in top, the iowait percentage goes up to
> 10 to 20 percent, but the throughput (watched via iostat) is not very high,
> far below the rates I get when only a single device is in use. The load
> goes up to 20 or 30, and the system gets almost nothing done. It seems to
> me that the system is standing in its own way.
>
> I already tried the following:
>
> - using the deadline/cfq schedulers (cfq with ionice for the backup
> processes gives the best results for me, but is still far from the
> hardware's capabilities)
> - on USB devices, I tried different settings for
> /sys/block/*/device/max_sectors
>
> The hardware is a 24-core (4x6) Opteron, an Adaptec RAID controller in RAID
> 10 (delivering up to 500 MB/s read performance) and 64 GB RAM.
> Several other servers (16 and 8 cores, 32/16 GB RAM, Dell PERC 6/i RAID)
> behave similarly.
>
> Are there any hints on getting better I/O performance / better response
> times on such machines?
>
> In my opinion, the kernel should be able to schedule the resources in such
> a way that at least one of the hardware components becomes the limiting
> factor. What I see instead is a more or less idle system: high load, a high
> iowait percentage, and no throughput.
>
> Any hints welcome!
>
>     Nils
>
>

Hello Nils,

Have you tried experimenting with the readahead settings?

By default, Linux reads ahead 256 sectors (128 KB) on sequential reads. In a
very sequential environment (like backups), increasing this value can improve
read performance.

You can set the read-ahead on an sd device using the "blockdev" command.
This tells the kernel to read X sectors ahead. This is only valuable for
sequential I/O workloads and can cause performance problems with heavily
random I/O, so check the performance of your other workloads after making
changes.


Syntax:

blockdev --setra X <device name>

e.g.

# blockdev --setra 4096 /dev/sda


(Note: 4096 is just an example value; you will have to do some testing to
determine the optimal value for your system.) The OS will then read ahead X
sectors, and sequential throughput may be higher.
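One rough way to test, as a sketch (run as root; /dev/sda, the readahead
values and the read size below are only examples, and remember to restore
your original setting afterwards): time a large sequential read at a few
different readahead values, dropping the page cache between runs.

  #!/bin/sh
  # Compare sequential read throughput at several readahead values.
  for ra in 256 1024 4096 8192; do
      blockdev --setra $ra /dev/sda
      sync
      echo 3 > /proc/sys/vm/drop_caches   # drop the page cache between runs
      echo "readahead $ra sectors:"
      # dd prints the measured throughput on its last stderr line
      dd if=/dev/sda of=/dev/null bs=1M count=4096 2>&1 | tail -n 1
  done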

To check the existing read-ahead setting, use:

# blockdev --getra <device name>

Also, have you looked at the vmstat statistics in addition to iostat?  You
may want to compare the size of your I/Os between the workloads.  Maybe you
are performing much smaller I/Os when this problem happens?
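Something like the following would show that (iostat -x comes from the
sysstat package; the 5-second interval is just an example):

  # avgrq-sz is the average request size (in sectors), avgqu-sz the queue
  # depth, await the time requests wait; compare these columns with and
  # without the backup running.
  iostat -x 5

  # In vmstat, watch "b" (processes blocked on I/O) and "wa" (iowait).
  vmstat 5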


Hope this helps,


Joe