I just figured out that the 'cache is running out of filedescriptors'
warning can be avoided by tuning pconn_timeout, read_timeout,
request_timeout and quick_abort.
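For reference, this is the sort of squid.conf tuning I mean. The values below are only illustrative (pick numbers to suit your own load, and check the defaults shipped with your version):

```
# squid.conf fragment -- illustrative values, not recommendations
pconn_timeout 30 seconds    # close idle persistent connections sooner
read_timeout 5 minutes      # give up earlier on stalled server reads
request_timeout 1 minute    # drop clients that never finish sending a request
quick_abort_min 0 KB        # stop fetching as soon as the client goes away
quick_abort_max 0 KB
quick_abort_pct 95
```

Shortening the timeouts and disabling quick_abort continuation means aborted connections give their filedescriptors back sooner.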
Henrik's message answered my question about this cache.log message:
> 1999/09/24 15:26:00| sslReadClient: FD 21: read failure: (104) Connection
> reset by peer
>This generally means that the client aborted the SSL connection while
>Squid was sending data to it.
Since I am running a benchmark, the number of transactions/min is very
important to me. When the client aborts the SSL connection while Squid is
sending data to it, does that still count as a successful transaction? By
'successful', I mean the user got the page he asked for.
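One way I could check this myself is to tally the result codes in Squid's native access.log after a benchmark run. This is just a sketch, assuming the native log format (time, elapsed, client, code/status, bytes, ...) and that an aborted transfer shows up with a non-200 status; the sample lines below are made up:

```python
# Sketch: count Squid native access.log entries by HTTP status
# to estimate how many benchmark transactions completed.
# Assumed field layout: time elapsed client code/status bytes method URL ...
from collections import Counter

def tally(log_lines):
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) < 4:
            continue
        code_status = fields[3]               # e.g. "TCP_MISS/200"
        counts[code_status.split('/')[-1]] += 1
    return counts

# Hypothetical sample lines for illustration only
sample = [
    "947794980.123 250 10.0.0.1 TCP_MISS/200 4512 GET http://example.com/ - DIRECT/example.com text/html",
    "947794981.456 120 10.0.0.2 TCP_HIT/200 4512 GET http://example.com/ - NONE/- text/html",
    "947794982.789  80 10.0.0.3 TCP_MISS/000 0 GET http://example.com/ - DIRECT/example.com -",
]

print(tally(sample))
```

Comparing the count of 200s against the total number of requests the load generator sent would show how many transactions actually delivered the full page.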
Thanks a lot,
Jenny
> -----Original Message-----
> From: Zhang, Jenny (jennyz)
> Sent: Thursday, January 13, 2000 12:03 PM
> To: 'squid-users@ircache.net'
> Subject: max-age and running out of filedescriptors
>
> Hi,
>
> I am using Squid v2.2STABLE4 on RedHat Linux. I have two questions:
>
> 1. In my web server reply, I have the HTTP header 'Cache-Control:
> max-age=30'. None of the objects is cached as long as max-age <= 60
> seconds. If I increase max-age to 61, it works fine. The system time is
> in sync. Does this mean Squid cannot cache an object for less than one
> minute? Or do I need to try another cache-control header?
>
> 2. I have 500 users sending requests to Squid. After 10 minutes of
> running, I get the messages 'WARNING! Your cache is running out of
> filedescriptors' and 'sslReadClient: FD 686: read failure: (104)
> Connection reset by peer', and users get a 'can not connect to server'
> error. What are the possible reasons for that?
>
> Thanks in advance.
>
> Jenny Zhang
Received on Thu Jan 13 2000 - 18:06:58 MST
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:50:22 MST