On Sun, Feb 25, 2007, Francisco Gimeno wrote:
> Hello again,
>
> as I have seen with oprofile (thanks to Adrian), the most time-consuming
> function is headersEnd. I did a little debugging on it and found that
> it's called a lot of times.
Yeah, it's horrible.
Try comparing squid-2-HEAD's CPU time use to Squid-2.6's - you'll find
headersEnd() isn't the most popular call now.
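For the curious: the reason a headersEnd()-style scan tends to dominate the
profile isn't the scan itself, it's the calling pattern - every time a read()
adds a few more bytes, the whole buffer accumulated so far gets rescanned
from offset 0 looking for the blank line that ends the headers. Roughly this
shape (a simplified sketch only, not the actual Squid code):

    #include <stddef.h>
    #include <string.h>

    /* Return the offset just past the header terminator (CRLFCRLF or
     * LFLF), or 0 if the headers aren't complete yet. */
    static size_t
    headers_end(const char *buf, size_t len)
    {
        size_t i;
        for (i = 0; i + 1 < len; i++) {
            if (i + 3 < len && memcmp(buf + i, "\r\n\r\n", 4) == 0)
                return i + 4;
            if (buf[i] == '\n' && buf[i + 1] == '\n')
                return i + 2;
        }
        return 0;
    }

    /*
     * The expensive part is the loop around it: after *every* read()
     * the full buffer is handed to the scan again, so a request that
     * dribbles in over many small reads costs O(n^2) scanning overall.
     *
     *     while (headers_end(inbuf, inbuf_len) == 0)
     *         inbuf_len += read(fd, inbuf + inbuf_len, space_left);
     */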
Henrik and I have written a mostly-complete replacement HTTP parser which
is incremental in just the fashion you've described - I keep a pointer
to the beginning of the unparsed bit and attempt to swallow entire header
entries before updating that pointer. It means that if the request is split
over >1 read() I don't have to re-parse the whole thing each time. It's in
my private CVS tree and not public but, heh, I'll make it public tonight or
tomorrow.
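The core of it is just remembering how far you've parsed and only ever
consuming complete header lines, along these lines (a sketch with made-up
names, not the actual parser code):

    #include <stddef.h>
    #include <string.h>

    struct hdr_parser {
        const char *buf;    /* accumulated request bytes */
        size_t len;         /* how many bytes are in buf */
        size_t parsed;      /* offset of the first unparsed byte */
        int done;           /* saw the blank line ending the headers */
    };

    /* Consume as many complete "Name: value\r\n" lines as are available,
     * advancing 'parsed'.  Bytes before 'parsed' are never looked at
     * again, even if the request arrives over many read() calls. */
    static void
    hdr_parse_more(struct hdr_parser *hp)
    {
        while (!hp->done && hp->parsed < hp->len) {
            const char *line = hp->buf + hp->parsed;
            const char *nl = memchr(line, '\n', hp->len - hp->parsed);
            size_t linelen;
            if (nl == NULL)
                break;              /* partial line - wait for more data */
            linelen = (size_t) (nl - line) + 1;
            if (linelen == 1 || (linelen == 2 && line[0] == '\r'))
                hp->done = 1;       /* blank line: end of headers */
            /* else: hand line[0..linelen) to the header-entry handler */
            hp->parsed += linelen;
        }
    }

Each new read() just appends to buf, bumps len and calls the parse step
again; the saved offset is the only state carried between calls.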
(It also forms the testbed for the replacement string/buffer code, which
passes buffer references around rather than doing lots of string copies
and allocations everywhere. Ugh.)
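Something like this, to give the flavour (again a sketch with invented
names, not the code itself): a refcounted backing buffer plus cheap
(offset, length) views into it, so pulling out, say, a single header value
is pointer arithmetic and a refcount bump rather than another malloc and
copy.

    #include <stdlib.h>

    struct refbuf {             /* refcounted backing storage */
        char *data;
        size_t size;
        int refcount;
    };

    struct strref {             /* a lightweight view into a refbuf */
        struct refbuf *buf;
        size_t offset;
        size_t length;
    };

    /* "Substring" without copying: share the backing buffer. */
    static struct strref
    strref_substr(struct strref s, size_t off, size_t len)
    {
        struct strref sub;
        sub.buf = s.buf;
        sub.buf->refcount++;
        sub.offset = s.offset + off;
        sub.length = len;
        return sub;
    }

    static void
    strref_release(struct strref *s)
    {
        if (--s->buf->refcount == 0) {
            free(s->buf->data);
            free(s->buf);
        }
        s->buf = NULL;
    }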
You're on the right track :) It's just that I've already started along that
path. Help me test the storework branch and iron out the bugs I've
introduced. storework is currently -just- an HTTP proxy (not an FTP proxy!)
and it's not caching anything, so I wouldn't suggest putting it into
production quite yet. But the sooner we can iron out the bugs in storework,
the sooner I can jump into the client-side codebase and replace all of the
HTTP parser and connection code with something a little saner.
So please help. :)
Adrian