Amos Jeffries <squid3_at_treenet.co.nz> writes:
> On 24/01/2013 1:50 p.m., Alex Rousskov wrote:
[event queue]
>> Is storing 60'000 events instead of 100 acceptable? I kind of doubt it
>> is... :-(
>
> I agree.
>
> Particularly since there are ~10 other _types_ of event sharing the
> queue, with varying timeout lengths on each type. This means that the
> add/remove operations they are doing on their own (currently an
> add-and-remove model) will get proportionally more costly as the
> queue size increases.
> NP: converting all the users to add-and-forget will consume much more
> memory, AND will break the assumption that "generally the add is to
> the end of the queue", which is actually not true now anyway.
> Generally the add is to position N-6 of the current queue, in front of
> MemPools garbage collection, disk swap.state cleanup, DelayPools
> garbage collection, store Digest rebuild, store Digest fetch, and peer
> ICP ping. Maybe in front of a large set of IdlePool timeouts as well.
>
> The different timer lengths on each event type above mean we cannot
> easily convert the list type to dlink_list. Not without at least a
> full analysis of the queue usage, and possibly an implementation of
> both front and back push ops, switched by a check for which end is
> best to inject from (more cycles spent doing the check etc.).
As a somewhat more general remark to that: there are two obvious
alternative implementations, namely
- use a timing wheel, i.e., a 'circular' hash table of lists where a
new event is linked onto the list at slot (head + interval) %
wheel_size. That is the traditional kernel implementation of this
facility (as far as I remember from reading about it; I have never
used it myself). A rough sketch follows after this list.
- use a differently implemented priority queue. The usual 'obvious
choice' would be a binary heap stored in an array.
Personally, I have long since stopped using linked lists for 'timers'
because implementing the second alternative instead is fairly
simple[*]. It also needs less memory than any kind of 'linked' tree
structure (one pointer per entry at most, versus the two or more links
a tree node needs).
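For comparison, a rough sketch of the array-backed binary heap I mean
(again hypothetical names and a stand-in payload, not Squid code):
push appends at the end and sifts the new entry up, popMin moves the
last entry into the root and sifts it down, so both operations are
O(log n) and the only per-entry storage is the array slot.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Array-backed min-heap keyed on expiry time (illustrative sketch).
    // Children of the entry at index i live at 2*i+1 and 2*i+2.
    struct TimerHeap {
        struct Entry {
            double when;   // absolute expiry time
            int id;        // stand-in for the real event payload
        };
        std::vector<Entry> heap;

        void push(const Entry &e) {
            heap.push_back(e);
            siftUp(heap.size() - 1);
        }

        // Remove and return the earliest entry; caller must check
        // that the heap is not empty first.
        Entry popMin() {
            Entry top = heap.front();
            heap.front() = heap.back();
            heap.pop_back();
            if (!heap.empty())
                siftDown(0);
            return top;
        }

        void siftUp(size_t i) {
            while (i > 0) {
                size_t parent = (i - 1) / 2;
                if (heap[parent].when <= heap[i].when)
                    break;
                std::swap(heap[parent], heap[i]);
                i = parent;
            }
        }

        void siftDown(size_t i) {
            for (;;) {
                size_t smallest = i;
                size_t l = 2 * i + 1, r = 2 * i + 2;
                if (l < heap.size() && heap[l].when < heap[smallest].when)
                    smallest = l;
                if (r < heap.size() && heap[r].when < heap[smallest].when)
                    smallest = r;
                if (smallest == i)
                    break;
                std::swap(heap[i], heap[smallest]);
                i = smallest;
            }
        }
    };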
[*] For some weird reason, Wikipedia doesn't (or didn't use to) know
how deletion from such a data structure works, but that's not really a
problem (I admit that I found a suggestion for a working algorithm
without much effort when I first needed one, before ever being forced
to design one entirely on my own).
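For completeness, the deletion the footnote refers to looks roughly
like this, assuming the TimerHeap sketch above (again just
illustrative): overwrite the entry to be removed with the last one,
shrink the array, then restore the heap property by sifting the
replacement whichever way is needed.

    // Remove the entry at index i from a TimerHeap as sketched above
    // (hypothetical helper, not Squid code).
    void removeAt(TimerHeap &h, size_t i) {
        h.heap[i] = h.heap.back();
        h.heap.pop_back();
        if (i >= h.heap.size())
            return;                 // we removed the last entry
        if (i > 0 && h.heap[i].when < h.heap[(i - 1) / 2].when)
            h.siftUp(i);            // replacement is earlier than its parent
        else
            h.siftDown(i);          // otherwise push it down if needed
    }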