Robert,
I was asking because one of our squid-accelerated cache servers has 4GB of
RAM in it, and we saw a scenario in which it looks like squid restarted
suspiciously close to 2048MB in process size.
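(As a sanity check on that number: 2048MB is exactly 2^31 bytes, so if any
size accounting inside the process were held in a signed 32-bit integer,
that is precisely where it would top out. A trivial illustration of the
arithmetic, not anything taken from the squid source:

    /* illustration only: the ceiling of a signed 32-bit byte count */
    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* INT_MAX is 2^31 - 1, i.e. just under 2048MB */
        printf("%d bytes = %d MB\n", INT_MAX, INT_MAX / (1024 * 1024));
        return 0;
    }

which prints 2147483647 bytes = 2047 MB.)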
The main reason we don't suspect this to be the actual cause is that all 3
of the cache servers appear to have restarted at about the same time, so
we suspected a DoS attempt instead.
Since we noticed this suspicious restart on the biggest cache server first,
I thought I'd find out for sure whether there could be a problem there. We've
set our cache_mem back to ~550MB, so we don't expect to exceed 2GB
again...we're just leaving ~2GB of RAM unused by doing so.
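For reference, the change amounts to something like the following in
squid.conf (the 550MB figure is just where we happened to settle for our
hardware, not a recommendation):

    # keep the in-memory object cache well under the ~2GB process ceiling
    cache_mem 550 MB

The rest of the process size (cache index, in-transit objects, etc.) comes
on top of cache_mem, which is why we left this much headroom.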
These are running on 32-bit Intel processors, and we are running Linux.
So, just to get it from the horse's mouth, so to speak: we should not exceed 2GB
process sizes with squid at this point in time?
Thanks,
Andrew
-----Original Message-----
From: Robert Collins [mailto:robertc@squid-cache.org]
Sent: Wednesday, November 20, 2002 6:50 PM
To: Andrew Sawyers
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Memory Limitations?
On Wed, 2002-11-20 at 05:26, Andrew Sawyers wrote:
> I can't seem to find if there are any known issues with Squid using more
> than 2GB of RAM. Are there any known issues in squid with the process
> using over 2GB of RAM?
Are you encountering this scenario, or is it a hypothetical question?
Also, are you using a 32 or 64-bit address-space machine?
There are no *design* issues in squid for 2GB+ memory, but the physical
architecture does limit us. We could look at some of the Intel 32-bit
segment+offset concepts if Linux supports them (you use Linux/Intel,
right?). I haven't looked into Linux support for that at this point.
However, that would be quite some effort, and if this is hypothetical,
it's best left for a rainy day :}.
Rob