On Wed, 11 Sep 1996, Nelson Posse Lago wrote:
> Well, I expect to grow :-). I could use Spinner (now known as Roxen)
> as a proxy cache, but squid is *sooooo* superior... besides, squid has
> the ability to talk to parents and neighbors, and this is important here
> in Brazil, where international links are somewhat overloaded and the Internet
> is growing *very* fast, faster than the infrastructure grows.
I operated the CERN 3.0 cache through a Squid parent for about 4 months.
It's OK for a small cache, but gets slow as the cache size builds up. Of
course it doesn't allow neighbours, but parenting is so much faster for me
anyway (as long as my parent is willing). I run over a 28.8K link and the
ICP pings can take a loooong time. So I just get the parent to fetch the
document and skip ICP pings.
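For what it's worth, the sort of squid.conf lines I mean look roughly like
this (the hostname is made up, and the directive/option names are from the
1.x config I'm running, so check the comments in your own squid.conf):

```
# Hypothetical parent cache; 3128 = HTTP port, 3130 = ICP port.
# "no-query" stops squid sending ICP queries to this parent (no ICP
# pings over the slow link), and "default" makes it the parent of
# last resort for anything squid can't fetch directly.
cache_host parent.example.net parent 3128 3130 no-query default
```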
> I sure could do that. But I'm not having performance problems (at
> least not yet), I was just wondering if there's a memory leak around. The
> first few days, squid was taking a few meg of RAM (around 2-4, don't
> remember). Then I checked back, it was taking 10Mb and stayed that way
> for a few days. Now it's taking 16Mb. I don't see why. If every cached
> object takes 80 bytes of VM and I currently have 45Mb on disk, that's
> about 3000 objects (at about 15K per object). This would yield a
> ridiculous 240Kb of memory + 1Mb of VM for hot objects + code size. Not to
> mention that the cache size didn't grow after the first few days. I think
> it should fit in 5Mb. Am I wrong? (maybe it's time to kill the guy...)
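Your arithmetic looks right to me. As a quick back-of-the-envelope check
(using only the figures from your mail):

```python
# Rough estimate of squid's per-object metadata memory,
# using the figures quoted above.
disk_cache_bytes = 45 * 1024 * 1024   # 45 Mb of objects on disk
avg_object_bytes = 15 * 1024          # ~15K per object
metadata_per_object = 80              # ~80 bytes of VM per object

objects = disk_cache_bytes // avg_object_bytes
metadata_bytes = objects * metadata_per_object

print(objects)                 # -> 3072 objects
print(metadata_bytes // 1024)  # -> 240 (Kb of metadata)
```

So ~240Kb of metadata plus the 1Mb hot-object cache plus code is nowhere
near 16Mb, which is why I suspect pooling rather than a leak (see below).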
I have found exactly the same thing as you. When I start squid it takes
3.4 Meg. After a few days it climbs to 8 megs. I don't think the problem
is a memory leak as the cachemgr.cgi program reports that 8 Meg is
accounted for. I have 0 documents in the VM hot object cache. It just
seems that squid maintains a pool of resources that it claims as needed
(when it's busy maybe) and doesn't give back when it's finished. Hopefully
it will continue to reuse this pool and it won't grow beyond a size
determined by how busy it gets (or how many objects it has).
I would record the output of cachemgr.cgi (Memory Usage) at the different
sizes you see and try to locate which pools are growing. That may give a
hint as to why it grows in stages. I haven't done this myself yet, as I
don't think it's a problem.
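If you want to automate it, something like this crontab entry would do
(the URL and log path are made up; point it at wherever your cachemgr.cgi
is actually installed, and note the query parameters it expects may differ
between squid versions):

```
# Hypothetical hourly snapshot of cachemgr.cgi output, timestamped.
# Adjust the host/path to your own cachemgr.cgi installation.
0 * * * * (date && lynx -dump 'http://cache.example.com/cgi-bin/cachemgr.cgi') >> /var/log/squid-mem.log
```

Diffing successive snapshots should show which pools account for the jumps.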
Cheers.
-- +--------------------------------------------------------------+
. | John Saunders - John.Saunders@scitec.com.au (Work) |
,--_|\ | - john@nlc.net.au (Home) |
/ Oz \ | - http://www.nlc.net.au/~john/ |
\_,--\_/ | SCITEC LIMITED - Phone +61 2 9428 9563 - Fax +61 2 9428 9933 |
v | "Alcatraz triathalon: dig, swim, run" |
+--------------------------------------------------------------+
Received on Wed Sep 11 1996 - 00:18:57 MDT
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:32:59 MST