: Tip: fwdReforwardableStatus() I think is the function which implements
: the behaviour you're seeing. That and fwdCheckRetry.
My C "Fu" isn't strong enough for me to feel confident that I would even
know what to look for if I started digging into the code ... I mainly just
wanted to clarify that:
a) this is expected behavior
b) there isn't a(n existing) config option available to change this behavior
: You could set the HTTP Gateway timeout to return 0 so the request
: isn't forwarded and see if that works, or the n_tries check in fwdCheckRetry().
I'm not sure I understand ... are you saying there is a squid option
to set an explicit gateway timeout value? (such that origin requests which
take longer then X cause squid to return a 504 to the client) ... This
would ideal -- the only reason I was even experimenting with read_timeout
was because I haven't found any documentation of anything like this. (but
since the servers I'm dealing with don't write anything until the entire
response is ready I figured I could make do with the read_timeout)
: I could easily make the 10 retry count a configurable parameter.
That might be prudent. It seems like strange behavior to have hardcoded
in squid.
: The feature, IIRC, was to work around transient network issues which
: would bring up error pages in a traditional forward-proxying setup.
But in situations like that, wouldn't the normal behavior of a long
read_timeout (I believe the default is 15 minutes) be sufficient?
: Hm, what about "retry_on_error" ? Does that do anything in an accelerator
: setup?
It might do something, but I'm not sure what :) ... even when i set it
explicitly to "off" squid still retries when the read_timeout is exceeded.
Perhaps I'm approaching things the wrong way -- I set out with some
specific goals in mind, did some experimenting with various options to try
and reach that goal, and then asked questions when i encountered behavior I
couldn't explain. Let me back up and describe my goals, and perhaps
someone can offer some insight into the appropriate way to achieve
them....
I'm the middle man between origin servers which respond to every request
by dynamicly generating (relatively small) responses; and clients that
make GET requests to these servers but are only willing to wait around
for a short amount of time (on the order of 100s of milliseconds) to get
the responses before they abort the connection. The clients would rather
get no response (or an error) then wait around for a "long" time -- the
servers meanwhile would rather the clients got stale responses then no
responses (or error responses). My goal, using squid as an accelerator,
is to maximize the "satisfaction" of both the clients and the servers.
In the event that a request is not in the cache at all, and an origin
server takes too long to send a response, using the "quick_abort 0" option
in squid does exactly what I hoped it would: squid continues to wait
around for the response so that it is available in the cache for future
requests.
In the event that stale content is already in the cache, and the origin
server is "down" and won't accept any connections, squid does what I'd
hoped it would: returns the stale content even though it can't be
validated (albeit, without a proper warning, see bug#2119)
The problem I'm running into is figuring out a way to get the analogous
behavior when the origin server is "up" but taking "too long" to respond
to the validation requests. Ideally (in my mind) squid would have a
"force_stale_response_after XX milliseconds" option, such that if squid
has a stale response available in the cache, it will return immediately
once XX milliseconds have elapsed since the client connected. Any in
progress validation requests would still be completed/cached if they met
the conditions of the "quick_abort" option just as if the client had
aborted the connection without receiving any response.
Is there a way to get behavior like this (or close to it) from squid?
"read_timeout" was the only option I could find that seemed to relate to
how long squid would wait for an origin server once connected -- but it
has the retry problems previously discussed. Even if it didn't retry, and
returned the stale content as soon as the read_timeout was exceeded,
I'm guessing it wouldn't wait for the "fresh" response from the origin
server to cache it for future requests.
FWIW: The "refresh_stale_hit" option seemed like a promising mechanism for
ensuring that when concurrent requests come in, all but one would get
a stale response while waiting for a fresh response to be cached (which
could help minimize the number of clients that "give up" while waiting
for a fresh response) -- but it doesn't seem to work as advertised (see
bug#2126).
-Hoss
Received on Mon Nov 26 2007 - 15:48:10 MST
This archive was generated by hypermail pre-2.1.9 : Sat Dec 01 2007 - 12:00:02 MST