On 7 Sep 2000, at 19:04, Adrian Chadd <adrian@creative.net.au> wrote:
> On Thu, Sep 07, 2000, Andres Kroonmaa wrote:
> > On 6 Sep 2000, at 21:58, Adrian Chadd <adrian@creative.net.au> wrote:
> >
> > > On Mon, Sep 04, 2000, Andres Kroonmaa wrote:
> > > >
> > > > --enable-time-hack option is broken and mostly useless.
> > >
> > > Hrm. That makes sense. How about you throw together a patch and if we're
> > > all in rough agreement (i.e. no one says anything bad) I'll look at committing
> > > it.
> >
> > Maybe we should make the ALARM_UPDATES_TIME define the default. Maybe we could
> > enclose ALARM_UPDATES_TIME within #if USE_ASYNC_IO in comm*.c.
> > But I really don't see any point in the current implementation in comm*.c.
> > To make an ideal one would take some effort. Is it worth it? gettimeofday()
> > should be a fast call these days. It shouldn't even call into the kernel.
>
> Hrm. What would use the current time? Would it be enough to update it once
> at the beginning of the select loop? Or could the fact that the select loop
> might take more than one second to execute throw things off?
access.log logs millisecond times. To measure time accurately, we need to read
the current time exactly at the right measurement moment, e.g. when we accept
the HTTP connection and when we close the client socket, the difference being
the time to service the request. Anything in between can take a while: disk
read/write ops can take several milliseconds, disk open/close even more.
If we stamp the time only at the beginning of the poll loop and then service
20-100 sockets in a row, the last sockets get quite inaccurate times.
This is the problem to solve. Currently it is solved in a very straightforward
way: we simply call getCurrentTime() at every comm_check_incoming_poll_handlers(),
which polls the incoming FDs with a timeout of 0.
And comm_check_incoming_poll_handlers() is called at least 3 times for every
single FD operation (3 for the read, then 3 for the write, then 3 more for
good measure, in s2.3 at least). Basically it's called 3 times between every
pair of *_handler() calls. It is somewhat reduced by the commCheck*Incoming()
macros, so it isn't that bad really.
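To make the shape of this concrete, here is a very rough sketch of what the
current approach amounts to (simplified, not the actual comm_poll.c code; the
incoming array names are made up, and getCurrentTime()'s signature is assumed):

  #include <poll.h>
  #include <time.h>

  extern time_t getCurrentTime(void);  /* Squid's clock-refresh helper; signature assumed */

  /* Every "check the incoming sockets" pass re-stamps the clock and does a
   * zero-timeout poll(), and this pass runs several times around each
   * ordinary FD handler. */
  static void
  check_incoming_sketch(struct pollfd *incoming, unsigned int n_incoming)
  {
      getCurrentTime();                          /* re-read the wall clock */
      if (poll(incoming, n_incoming, 0) > 0) {   /* timeout 0: just a peek */
          /* ... dispatch handlers for any ready incoming FDs ... */
      }
  }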
But when you look at a trace of a running Squid, it makes you feel that calling
getCurrentTime() is the most important thing Squid ever does.
At the time I submitted the initial patch, Solaris truss -c (count system calls)
hinted that a call to gettimeofday() was extremely expensive, taking up to 1-2 ms.
I thought it was related to the system reading the realtime clock hardware
and the associated user/kernel/user task switch.
Now I know that I was mistaken. gettimeofday() is an extremely fast call; it
only reads data from an area updated on every system timer tick (10 ms).
It appears that truss intercepts this call by some means, and it was the
overhead of truss that I saw. But of course at that time I was using a
much older OS, and on some systems this could really be a kernel call.
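For anyone who wants to check the real per-call cost on their own box, a tight
loop is enough; this is just a throwaway test program, not Squid code:

  #include <stdio.h>
  #include <sys/time.h>

  int
  main(void)
  {
      struct timeval start, end, tv;
      double usec;
      int i;
      const int n = 1000000;

      gettimeofday(&start, NULL);
      for (i = 0; i < n; i++)
          gettimeofday(&tv, NULL);      /* the call we are measuring */
      gettimeofday(&end, NULL);

      usec = (end.tv_sec - start.tv_sec) * 1e6 + (end.tv_usec - start.tv_usec);
      printf("%.3f usec per gettimeofday() call\n", usec / n);
      return 0;
  }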
Perhaps we should call getCurrentTime() only when poll() returns ready FDs,
and not before we call poll().
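Something along these lines, i.e. the clock gets re-read once per batch of
ready FDs instead of once per check (just a sketch; the names here are
illustrative, not the real comm loop):

  #include <poll.h>
  #include <sys/time.h>

  static struct timeval current_time;   /* cached "now", as Squid keeps one */

  static void
  update_current_time(void)             /* stand-in for getCurrentTime() */
  {
      gettimeofday(&current_time, NULL);
  }

  static void
  event_loop_sketch(struct pollfd *pfds, unsigned int nfds, int msec_timeout)
  {
      for (;;) {
          int nready = poll(pfds, nfds, msec_timeout);

          if (nready > 0) {
              /* only now does anything look at the clock, so only now
               * do we pay for re-reading it */
              update_current_time();
              /* ... walk pfds[] and call the read/write handlers ... */
          }
          /* on timeout or EINTR there is nothing to time-stamp */
      }
  }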
Anyway, I think we want some sort of high-resolution timer inside Squid.
It is quite important to know how fast requests are serviced, and how this
service time changes over the day and across workloads. These are useful statistics.
I am quite worried about Squid's CPU usage. My P3-800 box is doing about
100-150 reqs/sec and its CPU is heavily loaded. I can't find the source of
the CPU burn; gprof and other profiling tools are of little help, as they
all have only 10 ms precision. I wonder if it makes sense to design a macro
that could measure any function with very small overhead and very high
precision. Solaris has gethrtime(), which returns nanosecond time with CPU
clock-tick precision by consulting an on-CPU counter. Most popular platforms
have something similar. It would be very useful to have cachemgr return a
stats table showing where most of the CPU is burned. Then we could understand
much better what needs more optimisation.
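A very rough sketch of the kind of macro I have in mind, using gethrtime()
(all names and the bucket structure are made up for illustration; a portable
version would need a per-platform cycle counter):

  #include <sys/time.h>   /* hrtime_t, gethrtime() on Solaris */

  typedef struct {
      const char *name;       /* what is being measured */
      hrtime_t total_ns;      /* accumulated time spent inside it */
      unsigned long calls;    /* how many times it was entered */
  } prof_bucket_t;

  /* Wrap any call and charge its wall-clock cost to a bucket. */
  #define PROF_CALL(bucket, call) do {                    \
          hrtime_t prof_start_ = gethrtime();             \
          call;                                           \
          (bucket).total_ns += gethrtime() - prof_start_; \
          (bucket).calls++;                               \
      } while (0)

  /* usage (hypothetical):
   *   static prof_bucket_t store_read_prof = { "storeRead", 0, 0 };
   *   PROF_CALL(store_read_prof, storeRead(fd, buf, len, off, cb, data));
   * cachemgr could then dump name / calls / total_ns / ns-per-call for each
   * bucket to show where the CPU goes.
   */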
------------------------------------
Andres Kroonmaa <andre@online.ee>
Delfi Online
Tel: 6501 731, Fax: 6501 708
Pärnu mnt. 158, Tallinn,
11317 Estonia