Hiya,
How will the new patch work with regard to squid re-using disk files?
It would seem to me that regardless of how the disk files were allocated
ORIGINALLY over the directory set, after about 7 days of operation once the
disk space has completely filled, the random creation and deletion of files
through normal cache activity would bring things back to the same base case.
What I'm saying is that this patch will win you lots up until the cache
fills. After that it will still win you because of the principle of locality:
commonly accessed files will have been pulled down first and so will initially
reside in the first set of directories.
However, after prolonged creation and deletion of disk files, as objects
start to expire, get pushed out and get re-fetched, disk performance would
slowly degrade back to the base level of performance originally seen.
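To make the directory-locality point concrete, here's a rough sketch of
how a swap file number might map onto a cache directory. The constants and
the mapping are purely illustrative, not squid's actual path code: with an
ever-increasing counter, consecutive file numbers (and hence the first,
hottest objects pulled down) share a handful of directories; once numbers
start coming back in effectively random order, new objects get scattered
over the whole directory set.

    /* Illustrative only -- not squid's actual path code.  Assume one
     * cache tree with L1 first-level and L2 second-level directories
     * and a made-up fileno -> directory mapping. */
    #include <stdio.h>

    #define L1 16
    #define L2 256

    static void swap_path(int fileno, char *buf, size_t len)
    {
        /* consecutive file numbers share a second-level directory
         * until L2 of them have been allocated */
        int d1 = (fileno / L2 / L2) % L1;
        int d2 = (fileno / L2) % L2;
        snprintf(buf, len, "/cache/%02X/%02X/%08X", d1, d2, fileno);
    }

    int main(void)
    {
        char path[64];
        /* Sequential allocation: the first (hottest) objects all land
         * in the same few directories, which the OS keeps cached. */
        swap_path(5, path, sizeof(path));      puts(path);
        swap_path(6, path, sizeof(path));      puts(path);
        /* After churn, re-used numbers come back in random order and
         * new objects are spread over all L1 * L2 directories. */
        swap_path(123456, path, sizeof(path)); puts(path);
        return 0;
    }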
Please let us know how your disk accesses look two weeks AFTER the
disk has filled up again following your blowing it away. I'm willing to bet
that disk performance will gradually degrade, though I am prepared to admit
that I may be wrong.
Note that my argument applies to the later versions of squid that
keep a stack of recently expired disk objects and re-use those same disk
files by writing over the top of the old ones. If you are still using the
ever-increasing swap file number allocation scheme, then your patch
WILL provide a sustained performance win.
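Just to spell out the distinction I mean, here's a toy sketch of the two
allocation policies (the data structures are made up for illustration, not
squid's actual code): a stack of recently freed file numbers that get
written over, versus an ever-increasing counter.

    /* Toy sketch of the two allocation policies -- not squid's code. */
    #include <stdio.h>

    #define STACK_SIZE 1024

    static int free_stack[STACK_SIZE]; /* recently expired file numbers */
    static int free_top = 0;
    static int next_fileno = 0;        /* ever-increasing counter */

    static void release(int fn)
    {
        if (free_top < STACK_SIZE)
            free_stack[free_top++] = fn;
    }

    /* Later squids: pop a recently expired number and overwrite that
     * disk file.  The pops come back in effectively random order, so
     * over time new objects are scattered across the directory set. */
    static int alloc_reuse(void)
    {
        return free_top > 0 ? free_stack[--free_top] : next_fileno++;
    }

    /* Older scheme: always hand out a fresh number, so consecutive
     * allocations stay clustered in consecutive directories. */
    static int alloc_sequential(void)
    {
        return next_fileno++;
    }

    int main(void)
    {
        printf("%d %d\n", alloc_sequential(), alloc_sequential()); /* 0 1 */
        release(0);
        printf("%d\n", alloc_reuse());  /* re-uses file number 0 */
        return 0;
    }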
It would seem to me that even better disk performance could be had
by analysing the frequency of hits against various objects and their
corresponding sizes, and then providing some form of dynamic disk
reorganisation that keeps objects with similar access patterns together.
This would mean better caching by the OS of the frequently accessed directories.
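A very rough sketch of what I have in mind (all names and structures here
are made up for illustration): periodically sort objects by observed hit
count and hand the hottest ones consecutive file numbers, so they end up
sharing a small, well-cached run of directories. The expensive part, of
course, is physically moving each object to its new location.

    /* Illustrative sketch only -- made-up structures, not squid's. */
    #include <stdlib.h>

    struct obj {
        int fileno;   /* current on-disk location  */
        int hits;     /* observed reference count  */
        int size;     /* object size in bytes      */
    };

    static int by_hits(const void *a, const void *b)
    {
        const struct obj *x = a;
        const struct obj *y = b;
        return y->hits - x->hits;   /* most-hit objects first */
    }

    /* Reassign consecutive file numbers in hit order; the caller would
     * still have to copy each object to its new disk location. */
    static void reorganise(struct obj *objs, int n)
    {
        qsort(objs, n, sizeof(*objs), by_hits);
        for (int i = 0; i < n; i++)
            objs[i].fileno = i;
    }

    int main(void)
    {
        struct obj objs[3] = { {10, 2, 0}, {11, 9, 0}, {12, 5, 0} };
        reorganise(objs, 3);   /* hottest object now gets fileno 0 */
        return 0;
    }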
Stew.