A very good question. Not sure there is a simple answer.
To get a firm answer you need to do some statistical analysis to find
out whether there are bytes which could have been cached by Squid but
were not, then identify those bytes and figure out why they were not
cache hits.
Please note that if you have a couple of users doing a lot of
downloading of large files, these will very quickly push down the byte
hit ratio without affecting the request hit ratio much (a few requests,
accounting for very many bytes, with a very low probability of cache
hits).
As a first analysis, exclude all requests larger than 1 MB from your
logs, and then recalculate the hit ratios. You should then see a
significant increase in byte hit ratio, I think.
Then analyze the objects you excluded to see if there are any missed
opportunities for cache hits.
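As a rough sketch of that recalculation, something like the following could be run over access.log. This is not an official Squid tool, just an illustration: it assumes the default native access.log format (timestamp, elapsed, client, result-code/status, bytes, method, URL, ...) and treats any result code containing "HIT" as a cache hit.

```python
# Sketch: recompute request and byte hit ratios from a Squid access.log,
# excluding objects larger than 1 MB, as suggested above.
# Assumes the default native log format, where field 4 (0-based 3) is the
# result code (e.g. TCP_HIT/200) and field 5 (0-based 4) is the byte count.

SIZE_LIMIT = 1024 * 1024  # 1 MB cutoff for "large" objects

def hit_ratios(lines, size_limit=SIZE_LIMIT):
    """Return (request_hit_ratio, byte_hit_ratio) in percent,
    counting only requests at or below size_limit."""
    reqs = hits = bytes_total = bytes_hit = 0
    for line in lines:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip malformed or truncated lines
        result_code = fields[3]      # e.g. TCP_HIT/200, TCP_MISS/200
        size = int(fields[4])        # bytes delivered to the client
        if size > size_limit:
            continue                 # exclude large transfers
        reqs += 1
        bytes_total += size
        if "HIT" in result_code:
            hits += 1
            bytes_hit += size
    if reqs == 0 or bytes_total == 0:
        return 0.0, 0.0
    return 100.0 * hits / reqs, 100.0 * bytes_hit / bytes_total

if __name__ == "__main__":
    with open("/var/log/squid/access.log") as f:
        req_ratio, byte_ratio = hit_ratios(f)
    print("request hit ratio: %.1f%%" % req_ratio)
    print("byte hit ratio:    %.1f%%" % byte_ratio)
```

Running it once with the size filter and once with `size_limit` set very high shows how much the large transfers alone are dragging the byte hit ratio down.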
Regards
Henrik Nordström
Squid Hacker
khiz code wrote:
>
> Hi all
> I know that this problem has been discussed at length on the list, but still:
> I am achieving hit ratios in the range 35 - 60%, but the byte hit ratio tends to
> be between 5 and 15%.
> Which areas should I look at for a potential problem?
> I am using the default refresh patterns so far, and the LRU replacement policy.
> maximum_object_size is 80 MB!!
> cache_dir size is 12 GB
Received on Sat Oct 27 2001 - 04:13:44 MDT
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 17:03:10 MST