We recently started seeing squid drop connections from certain clients
making POST requests. One of our developers tracked it down to the
following:
On 15 Oct 1998, Tom May wrote:
> It is a squid bug. Or Linux is being overly pedantic. My browser is
> setting Content-Length to 1980 bytes, but writing 1982 bytes. It
> probably adds a CRLF to the end. Squid is only reading 1980 bytes.
> Since 2 bytes are left unread on the connection, Linux sends a reset
> when squid closes the socket, to indicate to the browser that not all
> of the data was read by squid. This behaviour is new. From tcp_close()
> in /usr/src/linux/net/ipv4/tcp.c:
>
> /* As outlined in draft-ietf-tcpimpl-prob-03.txt, section
> * 3.10, we send a RST here because data was lost. To
> * witness the awful effects of the old behavior of always
> * doing a FIN, run an older 2.1.x kernel or 2.0.x, start
> * a bulk GET in an FTP client, suspend the process, wait
> * for the client to advertise a zero window, then kill -9
> * the FTP client, wheee... Note: timeout is always zero
> * in such a case.
> */
It seems to happen only when the request body is split across
multiple packets.
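In case anyone wants to see the kernel behaviour outside of squid, here
is a rough standalone test program (nothing to do with the actual squid
sources, and the payload sizes are just made up to mirror the
1980-vs-1982 case): the parent reads fewer bytes than the child wrote,
then close()s with data still sitting in the receive buffer, and the
child's next read() should fail with ECONNRESET.

/*
 * Hypothetical test, not Squid code.  Error checking is mostly omitted
 * for brevity.  The parent plays squid, the child plays the browser.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa;
    socklen_t len = sizeof(sa);

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    sa.sin_port = htons(0);                 /* let the kernel pick a port */
    bind(lfd, (struct sockaddr *) &sa, sizeof(sa));
    listen(lfd, 1);
    getsockname(lfd, (struct sockaddr *) &sa, &len);

    if (fork() == 0) {                      /* child: the "browser" */
        int cfd = socket(AF_INET, SOCK_STREAM, 0);
        char c;
        connect(cfd, (struct sockaddr *) &sa, sizeof(sa));
        write(cfd, "0123456789012345678901\r\n", 24);  /* 22 bytes + CRLF */
        sleep(2);                           /* give the parent time to close */
        if (read(cfd, &c, 1) < 0)
            perror("read after close");     /* expect ECONNRESET */
        exit(0);
    }

    {
        int afd = accept(lfd, NULL, NULL);  /* parent: the "squid" side */
        char buf[22];
        read(afd, buf, sizeof(buf));        /* read only the declared length */
        sleep(1);                           /* make sure the CRLF has arrived */
        close(afd);                         /* 2 bytes unread -> kernel sends RST */
    }
    sleep(3);
    return 0;
}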
Should we just read a couple of extra bytes in read_post_request, or
does a more Right Thing come to mind?
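If we go that route, I imagine something roughly like this, called once
the declared Content-Length has been consumed -- the function name and
the use of MSG_DONTWAIT are purely illustrative, not the actual
read_post_request() internals:

/*
 * Illustrative sketch only, not real squid code.  After the declared
 * Content-Length has been read, do one non-blocking recv() so a stray
 * trailing CRLF does not sit unread in the socket buffer and provoke a
 * RST when squid close()s the descriptor.
 */
#include <sys/socket.h>

static void
drain_trailing_crlf(int fd)
{
    char junk[2];

    /* MSG_DONTWAIT: don't stall if the client sent exactly
     * Content-Length bytes and there is nothing left to read. */
    (void) recv(fd, junk, sizeof(junk), MSG_DONTWAIT);
}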
--
Paul Phillips      | Love is a wild snowmobile ride across a frozen lake that
Everyman           | hits a patch of glare ice and flips, pinning you beneath
<paulp@go2net.com> | it.  At night, the ice weasels come.  -- Matt Groening
+1 206 447 1595    |--------* http://www.go2net.com/people/paulp/ *--------