On Tue, 20 Oct 1998, Niall Doherty wrote:
> > A peerDigestValidate is queued right after the first access to the
> > corresponding peer. The queuing delay depends on the number of peers that
> > are already queued for validation (so we do not fetch all the digests at the
> > same time)...
>
> So you "validate" digests you have received from peers ? What does
> validate mean in this respect ? What happens if an entry is not "valid"
> according to our refresh rules ? Do you have a bit for each entry in
> the peers' digest that says whether the entry is valid or not ?
The peerDigestValidate function decides whether it is time to fetch a fresh
digest from a peer. Currently, the decision is based solely on the expiration
info in the peer digest's headers.
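
For illustration, the check and the queue-spreading delay could look like
this (a simplified sketch; the struct, the field names, and the 60-second
base delay are mine, not the actual code):

    #include <time.h>

    /* Sketch only: PeerDigest here is a stand-in, not Squid's struct. */
    typedef struct {
        time_t expires;  /* expiration time from the digest reply headers */
    } PeerDigest;

    /* A digest needs re-fetching once its own headers say it expired. */
    static int
    peerDigestStale(const PeerDigest *pd, time_t now)
    {
        return now >= pd->expires;
    }

    /* Spread fetches out: each already-queued validation pushes this
     * one further into the future, so we do not fetch all the digests
     * at the same time. The 60-second base delay is illustrative. */
    static time_t
    peerDigestFetchDelay(int queued_count)
    {
        return (time_t)queued_count * 60;
    }
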
> > Store (local) digest is always built from scratch on startup after
> > store_rebuild completes. The in-memory copy is always identical to the
> > disk-resident copy except for a [short] rebuild period.
>
> It is built after the swap index is read from disk and held in RAM ?
Yes, I think so.
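
Conceptually, the rebuild is a single pass over the in-memory index, setting
Bloom-filter bits for every digestable key. A self-contained sketch (the
filter size, the hash, and all names here are mine; Squid's real code
differs in detail):

    #include <string.h>
    #include <stddef.h>

    /* Toy Bloom-filter digest; size, hash, and 4-bits-per-key are
     * illustrative choices, not Squid's actual parameters. */
    #define DIGEST_BITS (1u << 20)

    typedef struct {
        unsigned char mask[DIGEST_BITS / 8];
    } CacheDigest;

    static void
    cacheDigestClear(CacheDigest *cd)
    {
        memset(cd->mask, 0, sizeof(cd->mask));  /* start from scratch */
    }

    static void
    cacheDigestAdd(CacheDigest *cd, const unsigned char *key, size_t len)
    {
        unsigned long h = 5381;
        size_t i;
        int k;
        for (i = 0; i < len; i++)               /* toy hash of the key */
            h = h * 33u + key[i];
        for (k = 0; k < 4; k++) {               /* set 4 bits per key */
            unsigned long bit = (h >> (k * 5)) % DIGEST_BITS;
            cd->mask[bit / 8] |= (unsigned char)(1u << (bit % 8));
        }
    }

    /* Rebuilding the local digest is then one pass over the in-memory
     * index, adding each (here, fixed 16-byte) object key. */
    static void
    storeDigestRebuild(CacheDigest *cd,
                       const unsigned char (*keys)[16], size_t n)
    {
        size_t i;
        cacheDigestClear(cd);
        for (i = 0; i < n; i++)
            cacheDigestAdd(cd, keys[i], 16);
    }
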
> Ok - but what I was getting at was that stopping/starting squid and not
> leaving it running for too long in between (i.e. a couple of minutes)
> would have caused almost identical [local] digests to be created ?
Yes. If "identical" == "same objects are considered for digestion".
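
To be concrete, "considered for digestion" boils down to a per-object
freshness test against the refresh rules. A rough sketch under my
assumptions (the struct fields and the exact rule ordering are
illustrative, mirroring the refresh_pattern quoted further down):

    #include <time.h>

    /* Sketch: is this object fresh enough to digest? Not Squid's code. */
    typedef struct {
        time_t expires;     /* server-supplied expiry, 0 if absent */
        time_t lastmod;     /* Last-Modified, 0 if absent */
        time_t timestamp;   /* when the object entered the cache */
    } StoreEntry;

    /* Mirrors a "refresh_pattern . 0 20% 4320" rule. */
    static int
    entryIsFresh(const StoreEntry *e, time_t now)
    {
        time_t age = now - e->timestamp;
        if (e->expires)
            return now < e->expires;        /* explicit expiry wins */
        if (age > 4320 * 60)
            return 0;                       /* past max: always stale */
        if (e->lastmod)                     /* 20% lm-factor test */
            return age < (e->timestamp - e->lastmod) / 5;
        return 0;                           /* min is 0: stale */
    }
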
> This way I can change the refresh_patterns and when the digests are
> created I can "sensibly" compare the results... ?
Yes.
> What refresh_pattern do you have set for your caches, BTW ?
refresh_pattern . 0 20% 4320
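
For reference, the fields are regex, min, percent, max, with min and max in
minutes, so that line reads:

    #  pattern  min  lm-factor  max   (min/max in minutes)
    refresh_pattern . 0 20% 4320
    #
    #  "."   matches every URL
    #  0     minimum age: 0 minutes
    #  20%   fresh while age < 20% of the object's Last-Modified age
    #  4320  maximum age: 4320 minutes, i.e. 3 days
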
> Have any other people (commercial or otherwise) expressed interest in
> doing work on Cache Digests or are you guys alone ?
There is interest out there; we are not alone. We do need to put an Internet
Draft out to speed up the global Web digestion, though.
Alex.