On Thu, 2003-06-26 at 08:41, Henrik Nordstrom wrote:
> What is a viable approach is to add a second database for this purpose
> in parallel to Squid, keeping track of the URLs in the cache. This
> database can be built automatically by tracking the store.log log
> file and feeding the data into a database of choice. For tracking the
> store.log file the Perl File::Tail module is very suitable, but some
> database design is probably needed to get a database which can be
> searched in interesting ways.
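(For the log-tailing idea above, something like the following rough sketch would do it - Python and SQLite are used purely as an illustration here, the store.log field positions and the SWAPOUT/RELEASE action tags are assumptions about the log format, and the Perl File::Tail module Henrik mentions would handle the follow-the-file part more robustly:

import sqlite3
import time

DB_PATH = "cache-index.db"               # illustrative name
STORE_LOG = "/var/log/squid/store.log"   # illustrative path

def open_db(path):
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS cached (url TEXT PRIMARY KEY)")
    return db

def follow(fh):
    """Poor man's File::Tail: yield lines as they are appended."""
    fh.seek(0, 2)                        # start at the current end of file
    while True:
        line = fh.readline()
        if not line:
            time.sleep(1.0)
            continue
        yield line

def main():
    db = open_db(DB_PATH)
    with open(STORE_LOG) as fh:
        for line in follow(fh):
            fields = line.split()
            if len(fields) < 3:
                continue
            # Assumed layout: second field is the action tag, last field the URL.
            action, url = fields[1], fields[-1]
            if action == "SWAPOUT":      # object written to the cache
                db.execute("INSERT OR REPLACE INTO cached (url) VALUES (?)", (url,))
            elif action == "RELEASE":    # object removed from the cache
                db.execute("DELETE FROM cached WHERE url = ?", (url,))
            db.commit()

if __name__ == "__main__":
    main()

That keeps a searchable table of what is currently on disk without touching Squid itself.)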
Another viable approach is to build (or find and reuse) a non-blocking,
pageable trie - non-blocking probably via the Squid I/O logic, or via pipes
to another process - and insert and remove URLs there as Squid does its
work. That would allow a fixed memory footprint (say 4 KB per page; 1000
pages would probably be enough for good locality of reference) without
slowing Squid appreciably. It's definitely not a 3.0 feature though - we
already have many big changes, and getting the current migrated code stable
will be enough on its own.
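To make the trie part a bit more concrete, here is a toy in-memory sketch in Python, purely illustrative - the hard part, paging fixed-size node pages through the Squid I/O layer or a helper process, is only hinted at in the comments:

class TrieNode:
    __slots__ = ("children", "terminal")
    def __init__(self):
        # char -> TrieNode; a paged version would store page/offset ids instead
        self.children = {}
        self.terminal = False

class UrlTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, url):
        node = self.root
        for ch in url:
            node = node.children.setdefault(ch, TrieNode())
        node.terminal = True

    def remove(self, url):
        # Walk down remembering the path so empty nodes can be pruned.
        path = [(None, self.root)]
        node = self.root
        for ch in url:
            nxt = node.children.get(ch)
            if nxt is None:
                return False
            path.append((ch, nxt))
            node = nxt
        if not node.terminal:
            return False
        node.terminal = False
        # Prune nodes that no longer lead anywhere.
        for (ch, child), (_, parent) in zip(reversed(path), reversed(path[:-1])):
            if child.terminal or child.children:
                break
            del parent.children[ch]
        return True

    def contains(self, url):
        node = self.root
        for ch in url:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.terminal

if __name__ == "__main__":
    t = UrlTrie()
    t.insert("http://example.com/a")
    t.insert("http://example.com/b")
    print(t.contains("http://example.com/a"))   # True
    t.remove("http://example.com/a")
    print(t.contains("http://example.com/a"))   # False

The attraction over the external-database approach is that lookups stay inside the process and the memory bound is explicit; the cost is writing and debugging the paging code.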
Rob
--
GPG key available at: <http://members.aardvark.net.au/lifeless/keys.txt>.