On 08/02/2013 04:30 AM, Amos Jeffries wrote:
> This patch lays the groundwork for HTTP/2.0 support in Squid:
>
> * registers PRI method for detection of the HTTP/2.0 "magic" connection
> header when expecting HTTP/1.1 traffic
>
> * extend the request-line HTTP versions accepted by the parser to
> include "HTTP/2.0".
>
> * reject 2.0 traffic received by reverse- or forward-proxy ports and log
> "error:http-2.0-not-supported" in access.log.
>
> * reject "HTTP/2.0" version label or "PRI" method in any use other than
> a complete HTTP/2.0 magic connection header.
>
> * change delivered error page to the "Method Not Allowed" response,
> indicating the PRI magic "method" is rejected by this HTTP/1.1 proxy.
>
> * intercepted HTTP/2.0 traffic is relayed to ORIGINAL_DST or a
> cache_peer transparently.
>
> Note that HTTP/2.0 traffic relaying can be prevented by defining an ACL
> for method PRI and limited by http_access controls in the same way as
> method CONNECT. cache_peer_access selection can also make use of the PRI
> method in ACLs to control tunneling to peers.
>
> Also, fix bug 3371 (CONNECT payload data sent in the first packet being
> dropped by Squid) as a necessary step towards relaying the HTTP/2.0
> frames which are expected to be pushed in by the client immediately
> after the HTTP/2.0 magic header.
>
> Also, add URL parsing and generation support for "*" URL in
> request-lines of PRI, OPTIONS and TRACE methods. This is only updating
> the URL internal code for OPTIONS and TRACE; the remainder of the support
> necessary to cope with that URL special case according to HTTP/1.1
> requirements is still missing and outside the scope of this patch.
> + /* deny "PRI * HTTP/2.0" via non-intercepted ports */
> + bool isHttp20Magic = (*method_p == Http::METHOD_PRI &&
> + *http_ver == Http::ProtocolVersion(2,0) &&
> + memcmp(&hp->buf[hp->hdr_end+1], "SM\r\n\r\n", 6) == 0);
> +
I very much doubt that peeking at the 6 bytes starting at
hp->buf[hp->hdr_end+1] is safe without checking the buffer boundary
first. It is most likely conceptually wrong even when it does not read
beyond the allocated buffer.
Perhaps more importantly, the way you parse PRI requests may violate
draft-ietf-httpbis-http2-04, which seems to require that those requests
start with a specific 24-octet magic, not with a general HTTP1 header
followed by the 6-octet magic. Why not parse the HTTP2 connection
header correctly, without involving the apparently inappropriate HTTP1
request parser at all?
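For illustration only, a bounds-checked detection of the complete
preface could look roughly like this (the constant names are mine;
hp->buf and hp->bufsiz are assumed to hold the raw client bytes and
their length):

    static const char Http2ConnectionPreface[] = "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n";
    static const size_t Http2ConnectionPrefaceLen = sizeof(Http2ConnectionPreface) - 1; // 24 octets

    // compare only once enough bytes have accumulated; otherwise keep reading
    const bool isHttp20Magic = hp->bufsiz >= static_cast<int>(Http2ConnectionPrefaceLen) &&
                               memcmp(hp->buf, Http2ConnectionPreface, Http2ConnectionPrefaceLen) == 0;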
> + memmove(const_cast<char*>(&hp->buf[hp->hdr_end+1]), &hp->buf[hp->hdr_end + 6 +1], hp->bufsiz - hp->hdr_end - 6);
> + hp->bufsiz -= 6;
Similar "access outside of allocated buffer" concerns here, but now we
are not only reading but writing. And (hp->bufsiz - hp->hdr_end - 6) may
be negative.
> + memcmp(&hp->buf[hp->hdr_end+1], "SM\r\n\r\n", 6) == 0);
> + memmove(const_cast<char*>(&hp->buf[hp->hdr_end+1]), &hp->buf[hp->hdr_end + 6 +1], hp->bufsiz - hp->hdr_end - 6);
> + hp->bufsiz -= 6;
Please name the magic integer 6 since it is used more than once. It may
be a good idea to name the magic string as well, but that is optional if
it is used only once.
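For example (the names below are only illustrative):

    static const char Http2MagicTail[] = "SM\r\n\r\n";
    static const size_t Http2MagicTailLen = sizeof(Http2MagicTail) - 1; // the 6 octets in question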
> + memmove(const_cast<char*>(&hp->buf[hp->hdr_end+1]), &hp->buf[hp->hdr_end + 6 +1], hp->bufsiz - hp->hdr_end - 6);
> + hp->bufsiz -= 6;
> +
> + // we only support tunneling intercepted HTTP/2.0 traffic for now
> + if (csd->port && !csd->port->flags.isIntercepted()) {
> + hp->request_parse_status = Http::scMethodNotAllowed;
> + return parseHttpRequestAbort(csd, "error:http-2.0-not-supported");
> + }
It is probably better to bail _before_ we manipulate the unsupported
message with memmove().
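That is, roughly:

    // check the port type first; only then strip the magic from the parser buffer
    if (csd->port && !csd->port->flags.isIntercepted()) {
        hp->request_parse_status = Http::scMethodNotAllowed;
        return parseHttpRequestAbort(csd, "error:http-2.0-not-supported");
    }
    // ... the memmove()/bufsiz adjustments follow here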
> + /* deny "PRI * HTTP/2.0" via non-intercepted ports */
> + bool isHttp20Magic = (*method_p == Http::METHOD_PRI &&
> + *http_ver == Http::ProtocolVersion(2,0) &&
> + memcmp(&hp->buf[hp->hdr_end+1], "SM\r\n\r\n", 6) == 0);
> + if (isHttp20Magic) {
> + } else if (*method_p == Http::METHOD_PRI || *http_ver == Http::ProtocolVersion(2,0)) {
> + }
> +
Why prohibit PRI HTTP/1.1 requests? That method was not reserved and,
hence, ought to be a valid extension method for HTTP1. I suggest
restructuring this:
    if (*http_ver == Http::ProtocolVersion(2,0)) {
        if (magic) {
            // handle the complete HTTP/2.0 connection preface
        } else {
            // reject; no other HTTP/2.0 use is supported for now
        }
    }
> + } else if (*method_p == Http::METHOD_PRI || *http_ver == Http::ProtocolVersion(2,0)) {
> + // other uses of PRI method or HTTP/2.0 version moniker are not supported
> + hp->request_parse_status = Http::scMethodNotAllowed;
> + return parseHttpRequestAbort(csd, "error:method-not-allowed");
> + }
The error message seems wrong in case of a PRI method -- we do support it.
> - prepareTransparentURL(csd, http, url, req_hdr);
> + if (isHttp20Magic)
> + http->uri = xstrdup(url);
> + else
> + prepareTransparentURL(csd, http, url, req_hdr);
Can this logic go inside prepareTransparentURL()? There you can check
for the 2.0 protocol version (we reject all non-isHttp20Magic HTTP2
requests).
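A rough sketch of that idea, using the current call site's parameters
plus an explicitly passed version (the extra parameter is my invention,
not part of the patch):

    static void
    prepareTransparentURL(ConnStateData *csd, ClientHttpRequest *http, char *url,
                          const char *req_hdr, const Http::ProtocolVersion &ver)
    {
        if (ver == Http::ProtocolVersion(2,0)) {
            // only the magic preface form gets this far; keep its "*" URL as-is
            http->uri = xstrdup(url);
            return;
        }
        // ... the existing Host-header-based URL reconstruction continues here
    }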
> + // Let tunneling code be fully responsible for CONNECT and PRI requests
> + if (http->request->method == Http::METHOD_CONNECT || http->request->method == Http::METHOD_PRI) {
and
> - if (request->method == Http::METHOD_CONNECT && !redirect.status) {
> + if (!redirect.status && (request->method == Http::METHOD_CONNECT || request->method == Http::METHOD_PRI)) {
Since HTTP1 PRI requests are valid but should not be tunneled, the new
logic here should be based on the version being 2.0 rather than on the
method being PRI, I guess.
More changes may be needed to get this right, depending on how we parse
and validate HTTP2 requests.
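Something along these lines, perhaps (whether the parsed request
exposes its version as request->http_ver is my assumption):

    if (!redirect.status && (request->method == Http::METHOD_CONNECT ||
                             request->http_ver == Http::ProtocolVersion(2,0))) {
        // tunnel, as the patch already does for CONNECT and PRI
    }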
> +
> + // consume header early so that tunnel gets just the body
> + connNoteUseOfBuffer(conn, http->req_sz);
> + notedUseOfBuffer = true;
> }
Can we do the above for all non-FTP requests and remove
notedUseOfBuffer? It looks like all non-FTP paths that reach this point
call connNoteUseOfBuffer() now.
> + // We just need to ensure the bytes from ConnStateData are in client.buf already
> + memcpy(tunnelState->client.buf, tunnelState->http->getConn()->in.buf, tunnelState->http->getConn()->in.notYetUsed);
Two problems here:
1) This is very unsafe. in.notYetUsed can be almost as big as the
admin-configurable client_request_buffer_max_size, while
tunnelState->client.buf is just SQUID_TCP_SO_RCVBUF. While nothing bad
should happen by default because tunnelState->http->getConn()->in.buf
starts small, a "proper" sequence of requests before CONNECT can grow it.
To properly fix bug 3371, you will either need to enlarge (if needed)
tunnelState->client.buf dynamically before copying (easy) or drain
tunnelState->http->getConn()->in.buf slowly (lower RAM usage).
2) At memcpy() time, tunnelState->client.buf may already contain data
from the server. Note the tunnelState->copy() call above.
You probably need to move the new memcpy-related code before the
tunnelState->server.len check, but I am not sure that would be
correct/sufficient. I am concerned about what would happen when both
the server has data to send to the client and the client has data to
send to the server. I am not 100% sure the tunnel.cc code can handle that.
Also, please store tunnelState->http->getConn()->in in a local reference
variable to avoid repeating that expression many times in
tunnelStartShoveling().
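A sketch of the local-reference part, with a crude guard standing in
for the real enlarge-or-drain fix (the nested type name
ConnStateData::In is my guess):

    const ConnStateData::In &in = tunnelState->http->getConn()->in;
    Must(in.notYetUsed <= sizeof(tunnelState->client.buf)); // TODO: grow client.buf or drain in.buf instead
    memcpy(tunnelState->client.buf, in.buf, in.notYetUsed);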
> /// Called when we are done writing CONNECT request to a peer.
> static void
> tunnelConnectReqWriteDone(const Comm::ConnectionPointer &conn, char *buf, size_t size, comm_err_t flag, int xerrno, void *data)
> {
> TunnelStateData *tunnelState = (TunnelStateData *)data;
> debugs(26, 3, conn << ", flag=" << flag);
> - assert(tunnelState->waitingForConnectRequest());
Why was this assertion removed while the description of the function
remained unchanged? They go together.
> + // send full HTTP/2.0 magic connection header to server
> + mb.Printf("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n");
Please use mb.append, not Printf(). If my parsing suggestion above is
implemented, this will be a named constant.
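For example, reusing the named preface constant suggested above (the
name is mine):

    mb.append(Http2ConnectionPreface, Http2ConnectionPrefaceLen);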
> + debugs(11, 2, "Tunnel server REQUEST: " << tunnelState->server.conn << ":\n----------\n" <<
> + StringArea(mb.content(), mb.contentSize()) << "\n----------");
Please capitalize server as "Server", for consistency with other similar
messages:
> ./http.cc: debugs(11, 2, "HTTP Server REPLY:\n---------\n" << readBuf->
> ./http.cc: debugs(11, 2, "HTTP Server REQUEST:\n---------\n" << mb.buf << "
> ./client_side.cc: debugs(11, 2, "HTTP Client REPLY:\n---------\n" << mb->bu
> ./client_side.cc: debugs(11, 2, "HTTP Client REQUEST:\n---------\n" << (hp-
Needless to say, somebody should add a function to format all these, but
that is outside your patch scope.
> // we have to eat the connect response from the peer (so that the client
> // does not see it) and only then start shoveling data to the client
...
> -
> - assert(tunnelState->waitingForConnectExchange());
Why was this assertion removed? It seems to go together with the comment
above it (which was not adjusted).
> + if (request->urlpath == "*")
> + return (request->canonical = xstrdup(request->urlpath.termedBuf()));
> + // else fall through to default
Please set urlbuf and break instead of duplicating this pretty important
canonical=xstrdup() code and returning immediately.
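Roughly, assuming this sits inside the existing method switch and uses
the urlbuf/MAX_URL pattern from the rest of that function:

    if (request->urlpath == "*") {
        snprintf(urlbuf, MAX_URL, "*");
        break; // the shared canonical = xstrdup(urlbuf) below does the rest
    }
    // else fall through to the default URL assembly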
> + memcpy(buf, "*", 2);
Please use "*\0" above to avoid doubts that the termination character is
always there and copying it is intentional.
> + case Http::METHOD_OPTIONS:
> + case Http::METHOD_TRACE:
> + case Http::METHOD_PRI:
> + if (request->urlpath == "*") {
> + memcpy(buf, "*", 2);
> + return buf;
> + }
Here again, I would recommend breaking instead of returning so that the
post-formatting code can do its job (even though it currently does
nothing for * URLs).
> + // we support HTTP/2.0 magic PRI requests
> + if (r->method == Http::METHOD_PRI)
> + return (r->urlpath == "*");
> +
This will block HTTP1 PRI requests, which we should continue to
support, and will allow HTTP2 non-PRI requests, which we do not want to
support for now.
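Perhaps something closer to this (again assuming the version is
available as r->http_ver):

    // accept only the magic form for HTTP/2.0; leave HTTP/1.x PRI requests alone
    if (r->http_ver == Http::ProtocolVersion(2,0))
        return (r->method == Http::METHOD_PRI && r->urlpath == "*");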
* If you have not checked that the new code works with intercepted SSL
traffic forwarded to an origin server when ssl_bump none matches, please
do. That condition exercises the tunnel path you are changing using a
fake CONNECT request.
* If you have not checked that the new code works with CONNECT traffic
forwarded to an SSL peer, please do. That condition exercises the tunnel
path you are changing when the CONNECT response from the SSL peer is
forwarded to the client. Please test with a simple CONNECT request and a
CONNECT request that has some data attached after it (bug 3371).
* Will everything work correctly if an HTTP2 PRI request is sent over a
bumped connection?
> + debugs(26, 9, "\n----------\n" << StringArea(from.buf, from.len) << "\n----------");
> + debugs(26, 9, "Tunnel PUSH Payload: \n" << StringArea(tunnelState->http->getConn()->in.buf, tunnelState->http->getConn()->in.notYetUsed) << "\n----------");
Please remove these (and any other payload dumping debugs if I missed
them). Today, ALL,9 logs are essential for triage in many cases and they
are huge enough without payload dumps. If we start dumping payload like
that, they will become unusable in many cases.
We may have agreed that level-9 debugging is for data dumping. However,
the current code uses that level for other things _and_ any current
dumping is limited to rare "interesting" or "important" cases. This
keeps ALL,9 usable. The above dumps violate that principle.
Also, the correct way to debug payload is the Raw() API from Debug.h,
not StringArea.
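Where a dump really is warranted in a rare, interesting case, something
like this keeps it at the data-dumping level:

    debugs(26, 9, "tunnel payload: " << Raw("from.buf", from.buf, from.len));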
> if (cbdataReferenceValid(tunnelState)) {
> +
> if (!tunnelState->server.len)
Please remove non-changes.
Thank you,
Alex.