"Andres Kroonmaa" <andre@ml.ee> writes:
> Ideally, imho, there should be multiple processes running with a shared
> index, each serving multiple concurrent sessions. If configured as
> zillions-of-single-threaded-processes it might work well on OSes that
> have no thread support or better per-process memory usage, while
> OSes that love threads more than processes can run as
> zillions-of-threads-in-a-few-processes.
Yup, agreed 100%.
> > The context switching shouldn't be too bad given that the vast majority
> > of the time, the context switch should take place when a process is
> > sleeping, so it's already switched context into the kernel anyway.
>
> IPC? Shared memory locks? These all add process switches, although this
> might be a small overhead.
The IPC should be very minimal, all via shared memory. The locking
model I see is something like...

    while (test_and_set(&var))
        select(0, NULL, NULL, NULL, {0, 11000});   /* sleep 11ms */
    blah;                                          /* the short critical section */
    reset(&var);
The point being that because we'd normally hold the lock for such a
short time, the sleep would run VERY rarely.
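A minimal, compilable sketch of that lock, assuming the GCC
__sync_lock_test_and_set()/__sync_lock_release() builtins stand in for
test_and_set()/reset() (in the multi-process case var would of course
live in the shared-memory segment, not in static storage):

    #include <sys/select.h>

    static volatile int var = 0;                 /* 0 = unlocked, 1 = locked */

    static void shared_lock(void)
    {
        /* Atomically set the flag; a non-zero return means it was held. */
        while (__sync_lock_test_and_set(&var, 1)) {
            struct timeval tv = { 0, 11000 };    /* back off for 11ms */
            select(0, NULL, NULL, NULL, &tv);
        }
    }

    static void shared_unlock(void)
    {
        __sync_lock_release(&var);               /* reset the flag */
    }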
> > I guess I'm not convinced. Kernel threads (at least in sunos and
> > linux) are very close to the process weight anyway. I suspect this may
> > be a religious issue. :)
>
> Well, depends. Thread switch of kernel threads is comparable, but not
> thread creation/exit. Threads are much faster here.
Either way, you don't want to be creating and destroying threads all
the time. You want to start a bunch, and let them sit in a pool when
they've finished.
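As a rough illustration (not Squid code; the job/worker names are made
up), a pool like that with POSIX threads could be as simple as:

    #include <pthread.h>
    #include <stdlib.h>

    struct job {
        struct job *next;
        void (*fn)(void *);
        void *arg;
    };

    #define NWORKERS 16

    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  qwait = PTHREAD_COND_INITIALIZER;
    static struct job *job_queue = NULL;

    static void *worker_main(void *unused)
    {
        for (;;) {
            struct job *j;
            pthread_mutex_lock(&qlock);
            while (job_queue == NULL)
                pthread_cond_wait(&qwait, &qlock);   /* sit idle in the pool */
            j = job_queue;
            job_queue = j->next;
            pthread_mutex_unlock(&qlock);
            j->fn(j->arg);                           /* do the actual work */
            free(j);
        }
        return NULL;
    }

    /* Started once; the workers are never destroyed. */
    void pool_start(void)
    {
        int i;
        for (i = 0; i < NWORKERS; i++) {
            pthread_t tid;
            pthread_create(&tid, NULL, worker_main, NULL);
            pthread_detach(tid);
        }
    }

Queueing a job is then just linking it onto job_queue under qlock and
signalling qwait.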
> A very nice feature for client reads, when hits are serviced, is to mmap
> the disk file to memory and then simply issue a full write to the client
> from this memory, in one go of the object's size. No kernel-userspace
> buffer copies. The kernel does the file I/O and the socket writes.
Yes, this is pretty neat, particularly when the OS notices what you're
doing.
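Sketched out, the hit path being described looks roughly like this
(hypothetical send_hit(); error handling trimmed, and a real version
would have to loop on short writes to a non-blocking socket):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Write a whole cached object to the client straight from the mapped
     * swap file -- no read() into a user-space buffer first. */
    int send_hit(int client_fd, const char *swapfile)
    {
        struct stat sb;
        void *obj;
        ssize_t n;
        int fd = open(swapfile, O_RDONLY);

        if (fd < 0)
            return -1;
        if (fstat(fd, &sb) < 0) {
            close(fd);
            return -1;
        }
        obj = mmap(NULL, sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
        close(fd);                               /* the mapping stays valid */
        if (obj == MAP_FAILED)
            return -1;

        n = write(client_fd, obj, sb.st_size);   /* one go, object size */
        munmap(obj, sb.st_size);
        return n == sb.st_size ? 0 : -1;
    }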
> Not only a little, you'd need to lock every shared structure that can
> change, for both writing AND reading. The assumption that a thread
> (process) switch will NOT occur while we read a multi-item struct is
> unsafe, and unless this struct is 100% read-only, the possibility of
> data corruption is present. Consider hardware interrupts.
This is a key question. Can you do without the read locking? If you
make sure you always update things in order, you should be able to do
it without large atomic updates, and without locking.
I.e. if hash chains are searched front to back, you update the forward
pointers before the back pointers. etc.
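A sketch of what that ordering buys you on a singly-linked hash chain
(types hypothetical; note that on SMP hardware you'd also want a write
barrier between the two stores so the compiler/CPU can't reorder them):

    struct entry {
        struct entry *next;
        const char *key;
    };

    /* Readers walk the bucket front to back with no lock, so the new
     * node's forward pointer is set BEFORE the node is published. A
     * reader either sees the old head (and misses the new entry) or the
     * new head with a valid next pointer -- never a broken chain. */
    void chain_insert(struct entry **bucket, struct entry *e)
    {
        e->next = *bucket;      /* 1: forward pointer first */
        *bucket = e;            /* 2: now make it visible   */
    }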
> > I really don't see the locking as a very serious issue.
>
> Lock misses are cheap, blocks on locks may not be. Lock bottlenecks
> become an issue, thus the more locks, the more concurrency and
> efficiency.
Implicit rule: you can't hold a lock and do a slow operation. :)
Implicit rule: locks are per object where possible. Think lots of
cheap locks.
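In that spirit, a per-object lock is just a cheap flag embedded in each
object (sketch only; a hypothetical store_entry using the same
test-and-set idea as above):

    #include <time.h>

    struct store_entry {
        volatile int lock;                  /* one cheap lock per object */
        time_t lastref;
        /* ... the object's other fields ... */
    };

    static void entry_touch(struct store_entry *e)
    {
        while (__sync_lock_test_and_set(&e->lock, 1))
            ;                               /* held for a few instructions only */
        e->lastref = time(NULL);            /* a short, fast update */
        __sync_lock_release(&e->lock);
    }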
> > > > Possibility of instant startup. i.e. mmap() the index, and
> > > > start using it immediately.
>
> Slow. The first miss wants to sweep the whole mmapped area, so you see a
> spike of page-ins, during which everything else squid does is stalled.
Only one process blocks, the others stay running. And you could fire
off a separate process to just read the file into memory.
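Such a warm-up helper might be nothing more than this sketch
(hypothetical warm_index(); it forks a throwaway child that touches one
byte per page of the mapping, so the page-ins happen there instead of
on someone's first miss):

    #include <stddef.h>
    #include <unistd.h>

    void warm_index(const char *map, size_t len)
    {
        long pagesize = sysconf(_SC_PAGESIZE);
        volatile char sink = 0;
        size_t off;

        if (fork() != 0)
            return;                         /* parent (and error case) carry on */
        for (off = 0; off < len; off += pagesize)
            sink += map[off];               /* fault the page in */
        (void)sink;
        _exit(0);
    }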
> > > Threads avoid this completely. All memory is automatically shared
> > > if you so wish.
> >
> > Yes and no. You do get advantages from true private resources. I'll
> > grant that threads do avoid some of the complexities, but they do so
> > by trading off on the safety issues.
>
> What do you mean by safety tradeoffs?
Resource leaks are mostly going to be per process, and a bug in
operations on private data isn't going to stuff something else's
private data.
Michael.