Exactly, Henrik. I also tried all of these options...
Not using the entire disk because of the journal/inodes, noatime, changing
the elevators... but none of them gave a good result. They only brought the
performance from unbearable to bearable, still far from good.
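(For anyone who wants to repeat the elevator tests: on recent 2.6 kernels
the scheduler can be switched per device at runtime through sysfs. The
device name below is only an example; point it at your cache disk.)

    # List the available elevators; the active one is shown in brackets.
    cat /sys/block/sda/queue/scheduler
    # Switch this device to the noop elevator, no reboot needed.
    echo noop > /sys/block/sda/queue/scheduler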
In my research on the net, I've read that this kind of flush is meant to
'prevent' disk fragmentation: the filesystem assembles a large contiguous
block in memory and then writes that stripe to the disk.
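(If that theory is right, the 2.6 VM lets you shrink those write-out
batches via /proc/sys/vm. I have not benchmarked these exact values; they
are only a sketch of the knobs involved.)

    # Start background writeback once 5% of memory is dirty, and block
    # writers at 10%, instead of letting one huge flush accumulate.
    echo 5    > /proc/sys/vm/dirty_background_ratio
    echo 10   > /proc/sys/vm/dirty_ratio
    # Treat dirty pages as expired after 10 seconds (value in centiseconds).
    echo 1000 > /proc/sys/vm/dirty_expire_centisecs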
Monitoring with iostat (under any elevator: cfq, noop, anticipatory,
etc.), on Reiser4 or EXT3, I've noticed that a disk write is very rare. I
get a lot of read, read, read, read, read and then the hated flush.
Sometimes it's gone within a second; other times it's a looooong flush.
It's amazing how things changed when I tried XFS. Now I have CONCURRENT
r/w.
Previously, I would never get this iostat line:
Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
rd/c0d8           2.99        11.97        27.93         48        112
The disk was EITHER reading or writing; never a r/w op.
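(To watch for the same pattern yourself, a plain per-second device report
from sysstat is enough; something like:)

    # Print device stats every second; the first report is the average
    # since boot, the following ones cover each one-second interval.
    iostat -d 1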
An interesting thing to add to your knowledge base :-)
Best regards,
Rodrigo.
----- Original Message -----
From: "Henrik Nordstrom" <henrik@henriknordstrom.net>
To: "Rodrigo A B Freire" <zazgyn@terra.com.br>
Cc: <squid-users@squid-cache.org>
Sent: Wednesday, March 29, 2006 7:11 PM
Subject: Re: [squid-users] WARNING - Queue congestion
From what I have understood of ext3, the above happens if the journal
gets full.. Having the fs mounted with noatime helps somewhat in
reducing this, as there are far fewer metadata updates.
It is possible that the journal mode data=writeback could help this as
well, especially if combined with suitable elevator tunings..
Also, commit=1 would probably help in making the system run smoother
under load, and should also reduce the demand on the journal size..
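(Putting those together, a sketch of the matching /etc/fstab entry; the
device and mount point below are placeholders for the actual cache disk:)

    # ext3 cache_dir: no atime updates, journal metadata only (writeback),
    # and commit every 1 second instead of the default 5.
    /dev/sdb1  /var/spool/squid  ext3  noatime,data=writeback,commit=1  0 0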
Regards
Henrik