On Fri, 2003-01-03 at 00:10, Andres Kroonmaa wrote:
>  If I recall right, cond_wait is boolean, so if many threads are
>  blocked on the cond, cond_signal does not cause a thread switch, and
>  another cond_signal is then sent, only one thread would eventually be
>  unblocked.
Nope. cond_signal unblocks at least one of the waiting threads, possibly more.
http://www.opengroup.org/onlinepubs/007908799/xsh/pthread_cond_signal.html
I have the POSIX spec around here somewhere and, from memory, it says the same thing.
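The practical consequence for the waiter (just a sketch, names invented, not the aufs code): since a single cond_signal can end up waking more than one thread, each waiter has to re-test its predicate in a loop after cond_wait returns, rather than assuming one signal means exactly one unit of work:

#include <pthread.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int pending = 0;         /* queued-but-not-taken work items */

static void *
worker(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&mutex);
        /* cond_signal may have woken several of us, so re-check the
         * predicate every time cond_wait returns */
        while (pending == 0)
            pthread_cond_wait(&cond, &mutex);
        pending--;
        pthread_mutex_unlock(&mutex);
        /* ... do one unit of work ... */
    }
}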
> I assume that the mutex is put into a to-be-locked state upon the
>  first cond_signal (owner unspecified, but one of the threads on the
>  wait), and a second attempt to signal would cause an actual thread
>  switch to consume the first signal (because the mutex is "locked").
Huh? The state should be (slightly compressed):
thread          S               W1                         W2
                active          B (waiting for signal)     B (waiting for signal)
                cond_signal()   B (in pthread_mutex_lock)  B (waiting for signal)
                cond_signal()   B (in pthread_mutex_lock)  B (in pthread_mutex_lock)
                mutex_unlock()  active                     B (in pthread_mutex_lock)
                active          mutex_unlock()             active
                active          active                     active
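In call terms (mutex, cond and ready are invented names for the example) that sequence is:

/* signalling thread S (holds the mutex while signalling, as above) */
pthread_mutex_lock(&mutex);
/* ... queue two units of work, ready = 2 ... */
pthread_cond_signal(&cond);   /* W1: waiting for signal -> blocked in mutex_lock */
pthread_cond_signal(&cond);   /* W2: waiting for signal -> blocked in mutex_lock */
pthread_mutex_unlock(&mutex); /* W1 acquires the mutex and runs */

/* each waiter, W1 and W2 */
pthread_mutex_lock(&mutex);
while (ready == 0)
    pthread_cond_wait(&cond, &mutex);
ready--;
pthread_mutex_unlock(&mutex); /* lets the other waiter out of mutex_lock */

Neither signal forces a context switch; the waiters just queue up on the mutex and drain one at a time as each holder unlocks it.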
>  Basically, we have 2 problems to solve. 1) We need a reliable,
>  lowest-overhead kickstart of the aufs IO threads at the end of a
>  comm_poll run.
Ok, what's the high overhead in the current model? I'll check the code in
detail over the weekend.
>  Poll can return immediately without running the scheduler if there
>  are FDs ready. Forcibly blocking in poll would cause a lost systick
>  for network IO. Therefore I think we need some other way to get the
>  io-threads running before going into poll. We only need to make sure
>  the io-threads have grabbed their job and are on the CPU queue.
Ah. Well, what about:
pthread_mutex_lock(&queuemutex);
for (i = 0; i < count; i++) {
    schedule_io_request();
    pthread_cond_signal(&queuecond);
}
pthread_mutex_unlock(&queuemutex);
/* bounce through the mutex so the woken threads get a look-in */
pthread_mutex_lock(&queuemutex);
pthread_mutex_unlock(&queuemutex);
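For completeness, the worker side of that queue would just be the usual wait loop on the same queuemutex/queuecond pair (dequeue_io_request(), do_blocking_io() and struct io_request are invented names; it's only a sketch):

static void *
io_thread(void *arg)
{
    for (;;) {
        struct io_request *req;

        pthread_mutex_lock(&queuemutex);
        while ((req = dequeue_io_request()) == NULL)
            pthread_cond_wait(&queuecond, &queuemutex);
        pthread_mutex_unlock(&queuemutex);

        do_blocking_io(req);    /* the actual open()/read()/write() */
    }
}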
Now, if mutex_unlock does *not* immediately transfer the mutex to the
waiting threads (which it could do without context switching to the
recipient), mutex_lock might grab the mutex again immediately. I'd
really hope that no OS did that :].
If one does, then:
pthread_mutex_lock(&queuemutex);
pthread_mutex_lock(&signalmutex);
for (i = 0; i < count; i++) {
    schedule_io_request();
    pthread_cond_signal(&queuecond);
}
pthread_mutex_unlock(&queuemutex);
/* signalmutex is still held here, so no worker can signal before we're waiting */
pthread_cond_wait(&signalcond, &signalmutex);
pthread_mutex_unlock(&signalmutex);
And in the dequeue routine:
...
    extract request from the pthread boundary queue
    decrement the queue length counter
    if (queue length counter == 0) {
        pthread_mutex_lock(&signalmutex);
        pthread_cond_signal(&signalcond);
        pthread_mutex_unlock(&signalmutex);
    }
    pthread_mutex_unlock(&queuemutex);
This would block the main thread until all the kicked worker threads had
extracted their jobs. It's a bit more overhead than is optimal, though,
particularly for the busy case where Squid performs OK today.
>  2) We need semi-reliable, lowest-latency notification of aio
>  completion while poll is blocking. The latter is probably the more
>  important of the two. Could a pipe FD do the trick? A signal would,
>  but at high loads it would cause a lot of overhead.
I like the pipe idea.
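Something along these lines, perhaps (all names invented, just a sketch of the shape): the read end of a pipe sits in the main poll() set, a worker writes one byte when a request completes, and the main thread drains the pipe and then reaps the completed requests.

#include <unistd.h>
#include <fcntl.h>

static int done_pipe[2];            /* [0] read end (polled), [1] write end */

static void
done_pipe_init(void)
{
    pipe(done_pipe);
    /* non-blocking, so a full pipe never stalls a worker or the drain */
    fcntl(done_pipe[0], F_SETFL, O_NONBLOCK);
    fcntl(done_pipe[1], F_SETFL, O_NONBLOCK);
}

/* worker thread, after completing a request */
static void
notify_completion(void)
{
    char c = 0;
    write(done_pipe[1], &c, 1);     /* wakes poll() in the main thread */
}

/* main thread, when poll() says done_pipe[0] is readable */
static void
drain_completions(void)
{
    char buf[256];
    while (read(done_pipe[0], buf, sizeof(buf)) > 0)
        ;
    /* now walk the completed-request list and fire the callbacks */
}

One poll() wakeup can reap many completions, so the per-request cost should stay small even at high load.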
Rob