Alex Rousskov wrote:
> I bet that the benefits from digests justify a few loops.
Perhaps. But not with the current implementation, where Squid
falls back on ICP when the digest lookup is a miss. If we are to use
digests then we should use them to their full extent, not only as a way
to get even faster hits.
> Besides, to make digests work better, we need a lot of people to
> actually use them.
True. But if people are to use them, then I think that at least the
things that were marked MUST in both your digest paper and our to-do
list for Squid 1.2 should be implemented first.
a) False hit recovery. If I am not mistaken Duane was set to implement
this, and did a good job on the needed prerequisites in forward.c
(retrying a failed request) and Cache-Control: only-if-cached. But it is
apparently not used the way it should be, or else loops should not even
occur unless explicitly configured (a parent <-> parent relation).
b) ICP elimination by using digests. ICP should not be sent to a peer
for which we have a valid digest. If it is a sibling then a digest miss
is a miss. If it is a parent then some measure other than ICP MISS is
required to select which parent to use on a full miss. Have an option to
enable ICP queries on misses if you like, but it should definitely not
be the default when a digest is available (see the configuration sketch
below).
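To make this concrete, here is a minimal configuration sketch of what I
have in mind. The peer names are made up, and the closest existing knob
I know of is no-query, which simply disables ICP toward that peer (shown
here in cache_peer syntax):

  # sibling: rely on its digest, never send it ICP queries
  cache_peer sibling.example.net sibling 3128 3130 no-query
  # parent: no ICP either; selected on full misses by some other means
  cache_peer parent.example.net parent 3128 3130 no-query default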
The goal of digests was (is?) to minimize both network load and
latency. Falling back on ICP on misses accomplishes neither of these
goals, as misses are a fairly large percentage of the total traffic.
Regarding false hit recovery:
We have
* Code to restart failed requests
* Cache-Control: only-if-cached to stop us from triggering a sibling to
fetch the object.
* miss_access, which can deny requests that would result in misses.
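For reference, a minimal sketch of how only-if-cached is meant to
protect a sibling: the sibling answers from its cache if it can, and per
HTTP/1.1 replies 504 rather than fetching if it cannot (the URL is just
an example):

  GET http://www.example.com/index.html HTTP/1.1
  Host: www.example.com
  Cache-Control: only-if-cached

  HTTP/1.1 504 Gateway Timeout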
We don't have
* A list of peer and origin host addresses that a request should be
tried against from forward.c. Only a single IP address is given to
forward.c today. It should get a complete list of peer addresses and
origin site addresses and cycle through them on each retry (roughly as
in the sketch after these lists).
* Detection and restart of requests that failed due to miss_access,
only-if-cached or other cases where we do get a reply but should retry
the request at another peer and/or origin server address.
We also don't have
* Restarts of timed-out connect attempts. With proper restarts as given
above we could use quite an aggressive timeout policy on the first try,
much like modern browsers do (with the exception of Netscape on Unix).
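To illustrate the last two points, here is a rough, self-contained
sketch (not Squid code; the addresses and the timeout value are made up)
of cycling through a candidate address list with a short per-attempt
connect timeout:

  /*
   * Hypothetical sketch: walk a list of candidate addresses -- peers
   * first, then origin server addresses -- trying a non-blocking
   * connect() with a short per-attempt timeout, and move on to the
   * next candidate on failure or timeout.
   */
  #include <stdio.h>
  #include <string.h>
  #include <errno.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <sys/select.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>

  /* Try one address with a timeout in seconds; return the connected
   * socket, or -1 on failure. */
  static int
  try_connect(const char *ip, unsigned short port, int timeout_sec)
  {
      struct sockaddr_in sa;
      fd_set wfds;
      struct timeval tv;
      int fd, err = 0;
      socklen_t len = sizeof(err);

      fd = socket(AF_INET, SOCK_STREAM, 0);
      if (fd < 0)
          return -1;
      fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

      memset(&sa, 0, sizeof(sa));
      sa.sin_family = AF_INET;
      sa.sin_port = htons(port);
      if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1) {
          close(fd);
          return -1;
      }
      if (connect(fd, (struct sockaddr *) &sa, sizeof(sa)) == 0)
          return fd;              /* connected immediately */
      if (errno != EINPROGRESS) {
          close(fd);
          return -1;
      }
      FD_ZERO(&wfds);
      FD_SET(fd, &wfds);
      tv.tv_sec = timeout_sec;
      tv.tv_usec = 0;
      if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0 ||
          getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0 || err) {
          close(fd);              /* timed out or connect failed */
          return -1;
      }
      return fd;
  }

  int
  main(void)
  {
      /* Assumed candidate list: peer addresses, then origin addresses. */
      static const char *candidates[] = {
          "192.0.2.10",           /* peer (hypothetical) */
          "192.0.2.11",           /* peer (hypothetical) */
          "198.51.100.80",        /* origin server (hypothetical) */
      };
      size_t i;

      for (i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
          /* Aggressive timeout on the first attempts; later retries
           * could back off. */
          int fd = try_connect(candidates[i], 80, 5);
          if (fd >= 0) {
              printf("connected to %s\n", candidates[i]);
              close(fd);
              return 0;
          }
          printf("failed %s, trying next candidate\n", candidates[i]);
      }
      return 1;
  }

A real implementation would of course drive this from the existing peer
selection and DNS results rather than a static list.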
I do believe that digests are the way to go, but the current use of
digests in Squid is far from optimal, verging on being of no benefit
at all.
> You did not suggest to turn FTP support off
> when it was buggy. :-/
I have suggested that people disable FTP in various ways as a workaround
for different problems, but I usually also provide a fix at the same time.
/Henrik