I am running Squid 2.2STABLE2 on a Solaris 2.4 box. It is the parent
to a Squid at another site, connected over a slow VPN link. The link
passes only TCP, so the Squids do not do any ICP. When the far site
requests a certain CGI script, the script output is passed without
problems from a local server to the local Squid to the remote Squid to
the browser. However, that script references a JavaScript file with
<script src="/js/commonf.js" language="JavaScript">. The request for
that file goes from the browser to the remote Squid, but the local
Squid drops the ball. The request does arrive: I can see it on a
sniffer, and, if I run with debug_options set to "50,5 33,5", I see
these messages in the log, just as with a properly handled request:
1999/05/21 17:10:56| httpAccept: FD 14: accepted
1999/05/21 17:10:56| comm_add_close_handler: FD 14, handler=28870, data=4c31d8
1999/05/21 17:10:56| commSetTimeout: FD 14 timeout 20
1999/05/21 17:10:56| commSetSelect: FD 14 type 1
1999/05/21 17:10:56| comm_accept: FD 4: (11) Resource temporarily unavailable
But I do NOT see the following, which is supposed to happen next:
1999/05/21 17:10:56| comm_poll: 1 FDs ready
1999/05/21 17:10:56| clientReadRequest: FD 14: reading request...
Instead, after request_timeout elapses, I get:
1999/05/21 17:11:16| checkTimeouts: FD 14: Call timeout handler
1999/05/21 17:11:16| requestTimeout: FD 14: lifetime is expired.
and then, a minute or so later, the browser gets a "lifetime expired"
message.
Increasing request_timeout does not help. Any ideas?
thanks,
Steve Gaarder Network and Systems Administrator
gaarder@cmold.com C-MOLD, Ithaca, N.Y., USA
Received on Mon May 24 1999 - 15:10:03 MDT