I set about implementing what I thought was a solution to the annoying delay
pools problem, only to decide it wasn't really a solution after all.
The problem:
when we scan file descriptors, we go from first to last and read any
that are ready. With delay pools, this means the lowest-numbered file
descriptors always 'win' the traffic.
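Roughly the behaviour I mean, as a sketch rather than the real comm_select
code (the function and pool bookkeeping here are just illustrative):

    #include <sys/select.h>
    #include <unistd.h>

    /* Read every ready fd, charging each read against a shared pool. */
    static void
    read_ready_fds(fd_set *readfds, int maxfd, int *pool_bytes_left)
    {
        char buf[4096];
        int fd, want, got;

        for (fd = 0; fd <= maxfd; fd++) {   /* always scanned first-to-last */
            if (!FD_ISSET(fd, readfds) || *pool_bytes_left <= 0)
                continue;
            want = *pool_bytes_left < (int) sizeof(buf)
                ? *pool_bytes_left : (int) sizeof(buf);
            got = read(fd, buf, want);
            if (got > 0)
                *pool_bytes_left -= got;    /* low fds drain the pool first */
        }
    }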
Many options present themselves, but none of them is a good solution:
* move the starting point by 1 every read cycle (see the sketch after
this list)
--> if all the file descriptors for one client are clumped together
(e.g. Netscape made 4 connections at once), then we still favour
the first in the clump and starve the rest.
* read the least recently read file descriptor first
--> but what if the last read of the most recently read file descriptor
was only a 1-byte read? Pushing it to the back anyway is not very fair.
--> we will end up scanning file descriptors in alternating sequences,
and starving the middle ones.
* for each delay pool keep an attacher count and allocate size/count bytes
--> need to actually allocate size_at_start_of_commselect/count bytes.
--> need to copy entire delay pool array each commselect loop.
--> need 3 times the storage for delay pool arrays.
* use a complex varying set of scanning orders
--> may actually work.
--> but what set to use?
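As a concrete example, here is a sketch of the first option above (rotating
the starting point); the names are illustrative, not the real code. It shows
why a clump of consecutive descriptors still favours its first member:

    #include <sys/select.h>

    static int scan_offset = 0;         /* advanced by one each read cycle */

    /* Scan the fds starting from a different offset each cycle. */
    static void
    read_ready_fds_rotating(fd_set *readfds, int nfds, void (*do_read)(int fd))
    {
        int i, fd;

        for (i = 0; i < nfds; i++) {
            fd = (scan_offset + i) % nfds;  /* rotate the starting point */
            if (FD_ISSET(fd, readfds))
                do_read(fd);            /* a clump of consecutive fds is
                                         * still read front-to-back */
        }
        scan_offset = (scan_offset + 1) % nfds;
    }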
I think the last of those options might end up being the only workable one,
but I hope for something better. It doesn't seem an easy solution to me:
to be truly fair we'd want to work through all permutations of the open
file descriptors, not scan them in any fixed order.
This leads to a new option:
* for each delayed file descriptor ready for reading, assign it a random()
value. qsort() the file descriptors by these values and read them in
that order.
This appears to me to be the best, cheapest and fairest option (and close
to the work I'd already done when I was going to sort by last
read time... :-).
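A minimal sketch of that random-ordering idea, assuming the ready delayed
file descriptors have already been collected into an array (the struct and
function names below are made up for illustration, not anything in Squid):

    #include <stdlib.h>

    struct ranked_fd {
        int fd;
        long rank;                      /* random() value for this cycle */
    };

    static int
    by_rank(const void *a, const void *b)
    {
        const struct ranked_fd *x = a;
        const struct ranked_fd *y = b;
        return (x->rank > y->rank) - (x->rank < y->rank);
    }

    /* Tag each ready fd with random(), qsort() by the tag, read in that
     * order. */
    static void
    read_in_random_order(int *ready_fds, int n, void (*do_read)(int fd))
    {
        struct ranked_fd *order;
        int i;

        order = malloc(n * sizeof(*order));
        if (order == NULL)
            return;
        for (i = 0; i < n; i++) {
            order[i].fd = ready_fds[i];
            order[i].rank = random();
        }
        qsort(order, n, sizeof(*order), by_rank);
        for (i = 0; i < n; i++)
            do_read(order[i].fd);       /* a fresh fair ordering every cycle */
        free(order);
    }

The qsort() is O(n log n) over the ready descriptors each cycle, which
looks cheap next to the select() work we're already doing.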
Comments, please...
David.