On 10/04/2013 17:22, Mr Dash Four wrote:
>
>> As an addition to my previous comment, please read this comment in
>> squid.conf (3.1.6 FWIW):
>>
>> # TAG: store_avg_object_size (kbytes)
>> # Average object size, used to estimate number of objects your
>> # cache can hold.
> I don't see how this helps fix the problem I described in my initial
> post.
I cited that parameter just to confirm that the number of objects held
by the cache at any given moment depends on their size.
I didn't want to suggest that it could solve your problem, but I didn't
make my intention clear. Besides, I think that number is used only to
estimate the number of on-disk cached objects, so perhaps we could
forget about it for the sake of this discussion.
My fault.
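(For what it's worth, as far as I understand that estimate is just the
cache_dir size divided by the average object size: e.g. a 10 GB
cache_dir with a 13 KB average object size works out to roughly
10485760 / 13, i.e. about 800,000 objects.)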
>
> In other words, if squid memory management were working correctly, it
> would use the store_avg_object_size and/or
> maximum_object_size_in_memory parameters to determine the number of
> objects it can store in memory, *and*, at the very least, periodically
> check whether the objects placed in memory exceed the threshold
> indicated by the cache_mem parameter. If they do, obviously, no
> additional objects should be placed in memory and squid should start
> "demoting" existing objects in RAM, putting them back into the disk
> storage cache until memory usage falls below the cache_mem threshold.
>
As I wrote before, I think squid is _designed_ to do just that, but
memory leaks in certain versions make it appear as though it doesn't.
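Just for reference, the knobs we're talking about would look roughly
like this in squid.conf (the values below are only placeholders to
illustrate, not recommendations for your setup):

    cache_mem 256 MB                      # target size of the in-memory object cache
    maximum_object_size_in_memory 512 KB  # bigger objects go straight to disk
    memory_replacement_policy lru         # how objects get evicted from cache_mem
    cache_dir ufs /var/spool/squid 10240 16 256   # 10 GB on-disk cache
    store_avg_object_size 13 KB           # only used to estimate object counts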
> That, it seems, does not happen and squid is trying to squeeze as much
> out of my available ram as possible.
It's unclear to me, though, how you can be sure that the memory
consumption growth is caused by squid ignoring the cache_mem threshold
rather than by, e.g., missing free() calls on some of its internal data
structures.
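One way to tell the two apart, for example, could be to ask the cache
manager how much memory squid itself accounts for and compare that
against the process size reported by ps/top, roughly along these lines
(assuming squid listens on localhost:3128 and squidclient is
installed):

    # what squid thinks it is using for the memory cache and other pools
    squidclient -h localhost -p 3128 mgr:info | grep -i mem
    squidclient -h localhost -p 3128 mgr:mem | head
    # actual resident size of the squid process
    ps -o rss,vsz,cmd -C squid

If the cache manager figures stay close to cache_mem while the process
RSS keeps climbing, a leak (or heap fragmentation) looks more likely
than squid simply ignoring cache_mem.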
--
Marcello Romani