Hi,
our Squid system (according to our Munin graphs) is suffering rather badly
from high iowait.  I'm also seeing warnings of disk I/O overloading.
I'm interested in understanding how this disk load scales.  I know more
disks (we only have a single cache disk just now) would be a big help.  One
question I have is whether (and how) the disk load scales with the size of
the cache.
I'll present a ludicrously simplistic description of how disk load might
scale (purely as a starting point) and see whether people can point out
where I'm wrong.
The job a single disk running a cache must do in some time step might be:
   disk_work = (write_cached_data) + (cache_replacement_policy) + (read_hits)
where (x is some constant overhead factor):
   (write_cached_data) =~ x * (amount_downloaded)
   (cache_replacement_policy) = (remove_expired_data) + (LRU,LFUDA,...)
   (read_hits) =~ byte_hit_rate
   (LRU,LFUDA,...) =~ amount of space needed =~ x * (amount_downloaded)
   (remove_expired_data) =~ (amount_downloaded) over previous time steps
so
  disk_work = f(amount_downloaded, byte_hit_rate, cache_replacement_policy)
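To make that concrete, here's the same thing as a toy Python function.  All
the names and the constant factor x are just my own inventions for
illustration; it's only the arithmetic above, not anything Squid actually
computes:

   # Toy model: disk work per time step for a single cache disk.
   # All quantities are in bytes per time step.
   def disk_work(amount_downloaded, byte_hit_rate, x=1.0):
       # Writing newly cached objects scales with what we download.
       write_cached_data = x * amount_downloaded
       # Replacement work: expiring old objects (tracks recent downloads)
       # plus LRU/LFUDA evictions to free the space new objects need.
       remove_expired_data = amount_downloaded
       evictions = x * amount_downloaded
       cache_replacement_policy = remove_expired_data + evictions
       # Reading hits back off the disk scales with the byte hit rate.
       read_hits = byte_hit_rate
       return write_cached_data + cache_replacement_policy + read_hits

Note that the cache size never appears as an input at all.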
To me this speculative analysis suggests that the load on the disk is a
function of the byte_hit_rate and the amount being downloaded, but not of
the absolute cache size.
So, decreasing the cache_dir size might lower the disk load, but only
insofar as it lowers the byte_hit_rate (and possibly the seek time on the
disk, I guess).
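Plugging made-up numbers into the toy function above shows the same thing:
the only term a smaller cache_dir changes is the hit-rate one:

   # Hypothetical numbers (MB per time step): same download volume,
   # lower byte hit rate, as a smaller cache_dir might produce.
   print(disk_work(amount_downloaded=100, byte_hit_rate=30))  # bigger cache
   print(disk_work(amount_downloaded=100, byte_hit_rate=20))  # smaller cache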
Is there something wrong with this?
Gavin