On 05/24/2013 09:09 AM, Amos Jeffries wrote:
> To clarify what we are up to:
Executive summary: I am sorry, but I think you are moving in the wrong
direction with this. You are adding new state, and the corresponding
management code, that is either unnecessary or not specific to
pipelining. The unnecessary parts are especially worrisome because the
surrounding code is already full of complex state, and complexity has a
tendency toward exponential growth (do you recall us struggling with
flags.readMore management?).
I hope the detailed comments below convince you to remove most of the
new code/state. A couple of useful changes can be polished and added,
but they are technically outside the scope of this patch. I will
identify them as well.
> @@ -1991,6 +1992,12 @@
> static ClientSocketContext *
> parseHttpRequestAbort(ConnStateData * csd, const char *uri)
> {
> + // if the parser detected header frame problems and cannot even parse it
> + // we cannot pipeline anything following the possibly invalid frame end.
> + csd->pipelineDisable(uri);
As you know, parseHttpRequestAbort() is called on HTTP framing and
similar errors that prevent us from reading more requests even if
pipelining is disabled or not happening. Thus,
ConnStateData::clientParseRequests() should stop looping after
parseHttpRequestAbort() is called, regardless of pipelining.
If ConnStateData::clientParseRequests() stops looping without your
patch, the above changes are not necessary. If it continues to loop
without your patch, the bug is elsewhere and should be fixed elsewhere
(making the above changes unnecessary -- they would just mask the bug in
some cases). I believe we are dealing with the latter case, as discussed
below.
> +
> + // XXX: why is this not using quitAfterError()?
> +
I can answer that. If you look at trunk r11880.1.31 that added
quitAfterError(), you will see that it was added where we were setting
flags.readMore to false. This is how I identified places where
quitAfterError() may have been necessary. parseHttpRequestAbort() does
not set flags.readMore to false, but I think it should.
In summary, there is a bug here. To fix it, parseHttpRequestAbort()
should call quitAfterError() to set flags.readMore to false and, hence,
stop the ConnStateData::clientParseRequests() loop. This fix is not
related to pipelining, but once the bug is fixed, the above pipelining
patch changes become unnecessary.
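To make that concrete, here is a minimal sketch of the fix I have in
mind (untested; it simply mirrors how r11880.1.31 uses quitAfterError()
elsewhere, and the elided lines are the existing function body):

    static ClientSocketContext *
    parseHttpRequestAbort(ConnStateData *csd, const char *uri)
    {
        // A framing error means we cannot trust anything that follows on
        // this connection, so stop reading further requests from it.
        // quitAfterError() clears flags.readMore, the guard that keeps the
        // ConnStateData::clientParseRequests() loop going.
        csd->quitAfterError(NULL); // no parsed HttpRequest exists here

        // ... the existing error-context construction continues unchanged ...
    }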
> @@ -2302,6 +2309,10 @@
> /* Set method_p */
> *method_p = HttpRequestMethod(&hp->buf[hp->req.m_start], &hp->buf[hp->req.m_end]+1);
>
> + /* CONNECT requests block pipelining if allowed. */
> + if (*method_p == Http::METHOD_CONNECT)
> + csd->pipelineBlock("CONNECT request");
> +
clientProcessRequest(), which is called after the above
parseHttpRequest() code, already stops the
ConnStateData::clientParseRequests() loop on CONNECT requests by setting
mayUseConnection to true and flags.readMore to false. The above change
is not needed (and is also somewhat misplaced, because it is about
interpreting the parsed request, not about parsing it).
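For reference, the existing CONNECT handling is roughly of this shape
(paraphrased from memory rather than quoted from trunk):

    // inside clientProcessRequest(), after the request has been interpreted
    if (request->method == Http::METHOD_CONNECT) {
        // the connection becomes dedicated to the tunnel
        context->mayUseConnection(true);
        conn->flags.readMore = false; // stops the clientParseRequests() loop
    }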
> @@ -2567,6 +2578,7 @@
> if (request)
> request->flags.proxyKeepalive = false;
> flags.readMore = false;
> + pipelineDisable("will close after error");
> debugs(33,4, HERE << "Will close after error: " << clientConnection);
> }
The above change is not necessary because setting flags.readMore to
false already stops the ConnStateData::clientParseRequests() loop.
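For context, the guard of that loop is roughly the following
(simplified; the exact member names may differ in current trunk):

    // ConnStateData::clientParseRequests(), simplified: keep parsing only
    // while unused bytes are buffered, no body reader owns the buffer, and
    // nothing has cleared flags.readMore.
    while (in.notYetUsed > 0 && !bodyPipe && flags.readMore) {
        // ... parse one request and hand it to clientProcessRequest() ...
    }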
> @@ -2747,6 +2759,10 @@
> goto finish;
> }
>
> + // if client disabled persistence we should stop pipelining more requests.
> + if (!request->persistent())
> + conn->pipelineDisable("client will close");
> +
This seems useful! The existing code calls clientSetKeepaliveFlag(), so
we could use that flag to break the loop after clientProcessRequest()
returns, but it may be better to add a dedicated ConnStateData member
for this.
Perhaps we are already doing something like that elsewhere, but I could
not find a corresponding code path.
Again, this is not limited to pipelining because we should not read new
requests after client Connection:close even if the client does not
pipeline. If we change the code here, it will be a useful optimization
in non-pipelining contexts.
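A rough sketch of what I mean, using the flag that
clientSetKeepaliveFlag() already computes; this is illustrative only,
not a tested patch, and the member spellings are from memory:

    // in ConnStateData::clientParseRequests(), after clientProcessRequest()
    // has returned for the freshly parsed request
    ClientSocketContext::Pointer context = getCurrentContext();
    if (context != NULL && context->http && context->http->request) {
        // clientSetKeepaliveFlag() has already recorded Connection: close
        if (!context->http->request->flags.proxyKeepalive)
            flags.readMore = false; // stop reading/parsing more requests
    }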
> @@ -2887,6 +2903,7 @@
> // consume header early so that body pipe gets just the body
> connNoteUseOfBuffer(conn, http->req_sz);
> notedUseOfBuffer = true;
> + conn->pipelineBlock("request payload");
The !bodyPipe guard of the clientParseRequests() loop already takes care
of blocking pipelining while reading the current request body. The
bodyPipe member is set right before the above code. This change is not
needed.
> @@ -2933,6 +2950,7 @@
> if (request != NULL && request->flags.resetTcp && Comm::IsConnOpen(conn->clientConnection)) {
> debugs(33, 3, HERE << "Sending TCP RST on " << conn->clientConnection);
> conn->flags.readMore = false;
> + conn->pipelineDisable("Sending TCP RST");
> comm_reset_close(conn->clientConnection);
> }
> }
The flags.readMore guard of the clientParseRequests() loop already
terminates the loop, and the closed connection will not allow it to
resume later, I hope. This change should not be needed.
There are other [minor] problems with the patch, but let's focus on the
big picture before polishing details.
Thank you,
Alex.
> On 22/05/2013 5:00 a.m., Alex Rousskov wrote:
>> On 05/20/2013 10:59 PM, Amos Jeffries wrote:
> <snip>
>>> I've added a loop to scan for Connection:close (and
>>> Proxy-Connection:close), and CONNECT method as blockers on further
>>> pipeline reads. There may be other conditions we find later.
>> The new loop is problematic, on several levels:
>>
>> 1) We should either
>>
>> * assume (or ensure) that when we are called a connection is
>> persistent
>>
>> or
>>
>> * not assume that when getCurrentContext() is NULL the connection is
>> persistent.
>>
>> I recommend the first approach because it is simpler, does not add new
>> state, and avoids code duplication. This is the approach used before
>> your changes AFAICT.
>
> We are HTTP/1.1 now. We can and do assume the connection is persistent
> unless some signal is available that explicitly means it is closed.
> Before and after the loop was added AFAICT.
Squid's HTTP version is completely irrelevant to this discussion. The
key here is whether connOkToAddRequest() is only called when the
connection is persistent. I think we should assume (or ensure) that it is.
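In code terms, I would rather see a documented (and asserted)
precondition than new per-connection state, along these lines (sketch
only; it assumes connOkToAddRequest() keeps roughly the shape it has in
the patch):

    static bool
    connOkToAddRequest(ConnStateData *conn)
    {
        // Precondition: this is only asked while the connection is
        // persistent and still being read from; enforce that rather than
        // re-deriving it from getCurrentContext() inside the function.
        Must(conn->flags.readMore);

        // ... existing pipeline depth checks follow ...
    }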
>> However, if you disagree, then the
>> "getCurrentContext() is NULL implies that the connection is persistent"
>> assumption above should be removed and the next two items will also
>> apply:
>>
>>
>> 2) We should not implement "connection is persistent after this request"
>> check inside the loop. It should be either a method of
>> ClientSocketContext() or a ConnStateData member, depending on whether
>> you remove the assumption discussed above.
>
> There is a ! operator on that test you may have overlooked. This is the
> check for "Connection:close is present --> no more prefetching.". In all
> other cases we are assuming persistence. That may or may not be right,
> but note that this function is only called when there is *already* a
> series of bytes in the read buffer and we are simply deciding whether to
> parse it now (assuming it is a new request header) or wait to find out
> if it actually is a header block.
>
> NP: in the attached patch, after the loop removal, the same persistence
> check is run right after the HttpHeaders are all parsed. It then updates
> the ConnStateData pipeline state to disable any future requests if
> non-persistence is seen.
>
> On the whole I am thinking there are a lot more things we should be
> blocking on which are not yet tested for, and real-world traffic
> exposure will require further additions.
> While moving the block/disable/no-change checks into the
> clientProcessRequest() function I have added more tests: this time,
> payload existence temporarily blocks the pipeline as well, just like
> CONNECT, and all sorts of "framing" errors in the early parser now
> disable pipelining entirely to avoid potential request-smuggling
> situations.
>
> Yes, I agree that some of these events are possibly not necessary to
> disable or block. However, pipelining is itself a traffic optimization
> which is largely unused today anyway. So by allowing >0 queued requests
> for any portion of a connection's lifetime we are already gaining.
> Disabling or pausing it at the first hint of trouble seems to be the
> right (conservative) way to go until we have more experience and testing
> to prove that enabling it is fine.
>
>
>>
>>> + // XXX: check that pipeline queue length is no more than M seconds
>>> + //      long already.
>>> + // For long-delay queues we need to push back at the client via
>>> + //      leaving things in TCP buffers.
>>> + // By time M all queued objects are already at risk of getting
>>> + //      client-timeout errors.
>> I do not think we should do this, at least not by default, so this is
>> not an XXX. I would remove that comment, but if you want to keep it,
>> please rephrase it as an optional feature. It would be rather difficult
>> for Squid to measure the time those requests spent in TCP buffers and
>> also predict whether the client still wants those requests to go ahead.
>> Some clients would not submit all requests at once and will not timeout
>> submitted requests as long as they keep receiving response(s), for
>> example.
>
> FYI: Using time+count for queue length instead of only count is one of
> the main design models that have come out of the bufferbloat project as
> a major source of efficiency and high performance in queue management.
>
> As to implementation ... each queued request should have a timestamp in
> its state data somewhere for the time of first read and the time the
> headers were completely read. I am thinking we can compare one of those
> against current_timestamp for the last queued request and know how long
> it has been waiting. That timestamp is just missing or unknown to me
> right now, and it is an optimization anyway so it's not implemented yet.
>
> I have altered the XXX to a TODO.
>
> Amos
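For the record, the time-based cap discussed above might look something
like the sketch below; every identifier in it is hypothetical (no such
members exist in the patch or in trunk), so treat it as an illustration
of the idea rather than a proposal:

    // Hypothetical: stop parsing more pipelined requests when the newest
    // queued request has already been waiting close to the client timeout.
    // queuedAt would be recorded when that request's headers finished
    // parsing.
    static bool
    pipelineQueueTooOld(const ClientSocketContext *newest, const time_t maxSeconds)
    {
        return (squid_curtime - newest->queuedAt) > maxSeconds;
    }

    // Callers would simply leave the remaining bytes in the TCP buffers,
    // pushing back on the client, and resume parsing later.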