On 23/07/11 00:09, Ragheb Rustom wrote:
> Dear Markus and Amos,
>
> I have made the changes you proposed. I dropped the max-size on the COSS partition to 100KB, so the COSS cache_dir line now reads as follows:
>
> cache_dir coss /cache3/coss1 110000 max-size=102400 max-stripe-waste=32768 block-size=8192 membufs=100
> cache_dir aufs /cache1 115000 16 256 min-size=102401
> cache_dir aufs /cache2 115000 16 256 min-size=102401
> cache_dir aufs /cache4/cache1 240000 16 256 min-size=102401
>
> After doing this I have noticed the following warning every now and then (usually every 1 - 2 hours) in the cache.log file:
>
> squidaio_queue_request: WARNING - Queue congestion
>
> What I also noticed using iostat is that the big HDD with the AUFS dir is handling a lot of write requests while the other two HDDs with AUFS dirs rarely have disk writes. Is this normal behavior? Since I have 3 AUFS cache_dirs, shouldn't Squid's disk read and write access be spread somewhat evenly across the three AUFS partitions? Do you think I should go for a higher max-size on the COSS partition to relieve the extra IO on the big AUFS cache_dir?
>
The default selection algorithm picks the directory with the most
available space, so the first ~130GB of unique cacheable objects would
all land on cache4.
http://www.squid-cache.org/Doc/config/store_dir_select_algorithm/

You can set that to "round-robin" to level the writes more evenly over
the AUFS disks. It won't be perfectly even balancing due to differences
in object size and a few other factors.
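As a sketch of what that change might look like in squid.conf (the
directive name is from the documentation linked above; the cache_dir
lines are simply the ones from your own config):

```
# squid.conf (sketch): spread writes across the AUFS dirs round-robin
# instead of always picking the dir with the most free space.
store_dir_select_algorithm round-robin

cache_dir coss /cache3/coss1 110000 max-size=102400 max-stripe-waste=32768 block-size=8192 membufs=100
cache_dir aufs /cache1 115000 16 256 min-size=102401
cache_dir aufs /cache2 115000 16 256 min-size=102401
cache_dir aufs /cache4/cache1 240000 16 256 min-size=102401
```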
"Queue congestion" is likely a result of everything big going to cache4
initially.
http://wiki.squid-cache.org/KnowledgeBase/QueueCongestion
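If the warning keeps appearing after the writes are balanced, one knob
the wiki page discusses is the size of the async-I/O thread pool, which
for AUFS is set at build time. A rough sketch (the --with-aufs-threads
configure option is real; the value 64 and the storeio list here are
just illustrations, not a recommendation for your box):

```
# Rebuild Squid with a larger AUFS thread pool; the default thread
# count depends on the Squid version and build.
./configure --enable-storeio=aufs,coss --with-aufs-threads=64
make && make install
```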
Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9

Received on Sat Jul 23 2011 - 04:25:00 MDT