Matus UHLAR - fantomas wrote:
> On 13.10 13:45, Brennon Church wrote:
>
>>Has anyone here come across problems when using squid to cache larger
>>files? I know about the upper limit on the size of objects when using
>>Squid 2.x, and I have the maximum_object_size set to 1048576KB (1G), so
>>that shouldn't be a concern. I'm coming across two problems:
>
>
> maximum size with squid 2.5 is 2GB-1B. However, I'm not sure how you can
> set such a value, except by using 2147483647 (bytes) or 2097151 KB.
maximum_object_size is already set to 1048576KB (1GB), within Squid's
limits. As far as I know, that directive is the only way to raise the limit.
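For reference, a minimal squid.conf fragment for the setting being discussed (the 1 GB value is from this thread; the 2 GB-1 ceiling is the Squid 2.5 hard cap mentioned above):

```
# Allow cached objects up to 1 GB.
# Squid 2.5's hard ceiling is 2 GB - 1 byte (2147483647 bytes, ~2097151 KB).
maximum_object_size 1048576 KB
```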
>
>
>>2) After those times where a larger file (again, 600M or so) succeeds,
>>the object is there, and I'm able to download the file again from the
>>cache rather than directly from the site. Shortly afterwards, however,
>>the object is overwritten by something else. I am using the ufs cache
>>type, and it's been given 10 Gigs of space, plenty for the tests I'm
>>running. I've also upped the sub directories to 256 and 256, so there
>>should be plenty of object "placeholders" available. In fact, when I
>>look in the cache directories only the first few subdirectories within
>>the first 00 directory are being used.
>
>
> It's probably because your cache is filled up and the file has 'expired'.
Nope. The cache has plenty of space: it's set to allow up to 10 gigs,
I'm the only one using it, and usage is nowhere near that.
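The cache_dir line implied by the setup described above would look something like this (the path is illustrative, not taken from the thread):

```
# ufs store: 10240 MB total, 256 first-level and 256 second-level subdirectories
cache_dir ufs /var/spool/squid 10240 256 256
```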
>
> You probably should increase your cache size, not number of files it can
> store. And using heap LFUDA replacement policy should help you too.
>
I've tried using the standard lru and heap LFUDA. Both have the same
problem.
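For completeness, the two replacement policies tried would be selected like this in squid.conf (one line at a time, not both):

```
# Default least-recently-used policy
cache_replacement_policy lru

# Or heap-based LFUDA, which tends to keep frequently requested
# (including large) objects in the cache longer
cache_replacement_policy heap LFUDA
```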
--Brennon
Received on Wed Oct 13 2004 - 16:38:26 MDT
This archive was generated by hypermail pre-2.1.9 : Mon Nov 01 2004 - 12:00:02 MST