> Well, to detect that a thread is idle on a mutex, the scheduler has to do
> some work; things just don't happen by themselves ;) Even if the overhead
> is negligible, I still don't see the point in keeping idle threads around.
A mutex as it's currently implemented on modern-day systems has a
list of tasks attached to it that are blocked on that mutex. These tasks
don't live on the regular run queue, so there is no searching. When the mutex
is released, all the tasks on the mutex sleep queue are awoken, and they all
race again to catch the mutex. This MAY seem bad, but if the time quantum
is larger than the critical section (it usually would be), then each task
will have grabbed the mutex, done the job, and released it before the
scheduler selects the next task on the mutex wait queue to run.
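To make that concrete, here's a rough pthreads sketch (names invented for
illustration, not Squid code): a contended pthread_mutex_lock() puts the
caller on the mutex's sleep queue, off the run queue, until the holder
unlocks.

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int shared_counter = 0;

    /* A contended locker sleeps on the mutex's wait queue, not the
     * regular run queue; pthread_mutex_unlock() wakes the waiters. */
    static void *worker(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);      /* block here if contended   */
        shared_counter++;               /* short critical section    */
        pthread_mutex_unlock(&lock);    /* waiters race for the lock */
        return NULL;
    }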
The overhead is VERY small. Keeping idle tasks costs the kernel next
to nothing apart from accounting-space overhead (i.e. almost nothing).
Keeping idle threads around allows them to be started quickly, because you
don't need to go to the trouble of creating a new thread. It's much faster
to wake an idle thread than to create a new one. Besides, creating a new
thread causes the parent to do more work, whereas the idle thread just wakes
up when the scheduler's ready, which gives better context utilisation.
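For illustration, a minimal sketch of such an idle worker parked on a
condition variable (job_cv, job_pending and the rest are hypothetical names,
not Squid code); restarting it is just a pthread_cond_signal() from the
parent, far cheaper than a pthread_create() per request.

    #include <pthread.h>

    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t job_cv = PTHREAD_COND_INITIALIZER;
    static int job_pending = 0;

    /* Idle worker: sleeps until the parent signals a new job. */
    static void *pool_worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&pool_lock);
            while (!job_pending)
                pthread_cond_wait(&job_cv, &pool_lock);
            job_pending = 0;
            pthread_mutex_unlock(&pool_lock);
            /* ... service the request here ... */
        }
        return NULL;
    }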
>
> > > Don't be sure. Even "Count++" is not atomic and can trash a lot if
> > > not surrounded by mutexes.
> >
> > It doesn't matter if it's not atomic, as long as the write is
> > atomic. There's only one writer so when doing
> > x = var
> > ++x
> > var = x <----- atomic
> >
> > only the last line needs to be atomic, and I don't know of any
> > machines that allow a context switch in between writing the bytes of a
> > word variable. :)
>
> var=3
>
> CPU1 CPU2
> x = var (3) ...
> x++ (4) x=var(3)
> var = x (4) x++ (4)
> ... var = x (4)
>
> You've got var wrong. The same applies on 1 CPU with a thread switch
> after x++.
> Of course, as you insist on a single writer, this is avoided.
Andres, we are all perfectly well versed in race-condition theory.
You are missing the point here: we are not talking about a multiple
writer/reader model. We are talking about a SINGLE writer model, whereby
the readers can just poll a value AND THEY DON'T CARE if they miss it the
first time, 'cos they'll catch up next time 'round.
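To pin the model down, a rough C sketch (names invented; this assumes, as
argued above, that an aligned word-sized store is atomic on the target
machine): one writer stores, readers poll and may see a stale value but
never a torn one.

    /* SINGLE writer, many polling readers. */
    static volatile unsigned long completed = 0;    /* aligned word */

    void writer_bump(void)          /* called from the ONE writer */
    {
        unsigned long x = completed;
        x++;
        completed = x;              /* only this store must be atomic */
    }

    unsigned long reader_poll(void) /* any number of readers */
    {
        return completed;           /* maybe stale, caught up next round */
    }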
I implemented the asyncio stuff here using exactly the same model.
It fits for squid. I know.
> OK, I missed that. But it's quite hard to set up and would add other
> kinds of overhead, IMO. Also, this doesn't scale well on a Cray ;)
> The reason I dislike a single writer is that it is simplest and most
> straightforward to code threads from accept() to the last close() as
> independent and all equal, all searching and updating metadata as needed,
> without the need for a mama-thread... or even worse, a mama-process...
> How would you arrange a single writer for the index database, when you
> can add an object to it only after a successful connect is made, and thus
> the service thread is already long running? Would you pass the object
> along to mama via IPC only to make a linked-list update? How would you
> lock a mem object? Unlock it later? You'd need to keep track of each and
> every thread in mama, instead of fire-and-forget...
Yes. The parent thread knows about all sub-threads. BUT, it is still
fire-and-forget MOSTLY. The parent thread creates the context, it fires off
the sub-thread/process, and is notified of completion, at which point it
attaches the memory object. The memory objects that are "in-transit" are
tracked by the parent thread and can be piggybacked onto by readers quite
happily. They just read the state of the single writer. The memory object
doesn't exist as part of the regular index until it's complete. It doesn't
need to. It just needs to be a shared piece of memory that only one writer
will write to, and which many readers can read from.
Objects only come in at a rate of 100/sec absolute max. Written
well, if multiple children finish at once, the parent thread just attaches
the multiple objects in a single time-slice quantum, still retaining low
context overhead.
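Roughly, in C (the structures here are invented to illustrate the flow, not
actual Squid code): sub-threads push finished objects onto a completion list,
fire-and-forget, and the parent links everything pending into the index in
one pass.

    #include <pthread.h>
    #include <stddef.h>

    struct mem_obj {
        struct mem_obj *next;
        /* ... state written only by the owning sub-thread ... */
    };

    static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct mem_obj *done_list = NULL;    /* finished, unattached */
    static struct mem_obj *index_head = NULL;   /* the regular index    */

    /* Sub-thread: fire-and-forget completion notice to the parent. */
    void subthread_done(struct mem_obj *m)
    {
        pthread_mutex_lock(&done_lock);
        m->next = done_list;
        done_list = m;
        pthread_mutex_unlock(&done_lock);
    }

    /* Parent: attach all finished objects in a single pass. */
    void parent_attach_all(void)
    {
        struct mem_obj *m, *next;

        pthread_mutex_lock(&done_lock);
        m = done_list;
        done_list = NULL;
        pthread_mutex_unlock(&done_lock);

        while (m != NULL) {
            next = m->next;
            m->next = index_head;   /* only now does the object join */
            index_head = m;         /* the regular index             */
            m = next;
        }
    }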
I fail to see why you're arguing so hard against this. Understand
the model please, before attacking it.
> Somebody could test, how OS would behave when having about 2000
> processes doing shared mem locks and trying to do something useful.
> You see, when you have very many processes, the kernel has to do much
> more work in the scheduler to find the next runnable process. Context
> switches/sec are not the only killer...
Not true. Modern OSes have prioritised run queues. The first process
on the highest-priority run queue is run; when its time quantum is up, its
new priority is calculated, and it is then placed at the end of that run
queue. The OS then runs the first process on the highest-priority run queue,
and so on. Finding a runnable process takes close to constant time,
regardless of the number of processes involved. The real issue with having
too many processes is getting through them all fast enough to keep a decent
interactive response time.
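A simplified sketch of what such a scheduler does (not any particular
kernel's code): picking the next task scans a fixed number of priority
levels, so the cost is bounded by NPRIO, not by how many processes exist.

    #define NPRIO 32                    /* fixed count of priority levels */

    struct task { struct task *next; /* ... */ };

    static struct task *runq_head[NPRIO];
    static struct task *runq_tail[NPRIO];

    /* First task on the highest non-empty priority queue runs next. */
    struct task *pick_next(void)
    {
        int p;
        for (p = NPRIO - 1; p >= 0; p--) {
            if (runq_head[p]) {
                struct task *t = runq_head[p];
                runq_head[p] = t->next;
                if (!runq_head[p])
                    runq_tail[p] = NULL;
                return t;
            }
        }
        return NULL;                    /* nothing runnable: idle */
    }

    /* Quantum expired: recalculate priority p, requeue at the tail. */
    void requeue(struct task *t, int p)
    {
        t->next = NULL;
        if (runq_tail[p])
            runq_tail[p]->next = t;
        else
            runq_head[p] = t;
        runq_tail[p] = t;
    }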
> > > > Mixed process/thread model
> > > most complex.
> >
> > Note that as far as I can see, there is very little code difference
> > between the process / thread / kernel-thread implementations. If you
> > code for the hardest (process), then the rest will be trivial in the
> > extreme.
>
> I'd say the process/thread models are quite different, and it is not
> trivial to switch unless written with threads in mind.
> Well, actually I see, and what I meant with this mixed model is that when
> writing the threaded model, for Linux you could define the processes to be
> threads. Then the rest is very similar.
I was thinking that the process model is written first. Each process
is simply a self-contained instance of a thread, just a bit heavier. If you
want to have threads, the code is simply written to have support for running
multiple threads within a process. It is NOT hard to do this once you solve
the locking/multi-processing issue for processes.
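Something like this (a hypothetical wrapper, just to show the shape): keep
the per-request code identical and hide the spawn behind one function, so
the same source runs as processes or as threads.

    #include <pthread.h>
    #include <unistd.h>

    /* Identical either way: one request, accept() through close(). */
    static void *service_request(void *arg)
    {
        (void)arg;
        /* ... handle the request ... */
        return NULL;
    }

    void spawn_worker(void *arg)
    {
    #ifdef USE_THREADS
        pthread_t tid;                  /* light: shares the address space */
        pthread_create(&tid, NULL, service_request, arg);
        pthread_detach(tid);
    #else
        if (fork() == 0) {              /* heavier: self-contained copy */
            service_request(arg);
            _exit(0);
        }
    #endif
    }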
> BTW, there's one model not mentioned:
> Mixed process/select() model
> I'm afraid this would be the most complex to get right, unless each
> process is made totally independent of the others, the processes
> only share common knowledge about disk contents, and each process
> handles its private cache directory.
I thought we were trying to move away from using select() at all.
This model was ignored because it contains the very element we don't want.
Stew.