Stephen Baxter wrote:
> I was thinking of using ICP to make sure. This will greatly reduce false
> hits to the point where the most common reason they would happen is from a
> squid mesh having non-common refresh patterns.
You are missing the point: false hits should not be a problem, they just
need to be dealt with properly.
The 1.2 situation looks like this (not sure if we are 100% there yet):
* Digests to get a good estimate of where the object is
* Persistent HTTP connections between peers to eliminate TCP startup
overhead
* HTTP false hit recognition. Automatically fall back on the next
possible HIT peer or act as if it was a MISS. The client never sees that
it was a false hit (unless they inspect the headers). A rough sketch of
this fallback is below the list.
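Roughly what the fallback looks like (a hypothetical sketch only, not the
actual Squid code; the peer struct, the fetch_hit() stub and the peer names
are made up for illustration):

/* Hypothetical sketch of false-hit fallback, not the actual Squid code. */
#include <stdio.h>
#include <stdbool.h>

struct peer {
    const char *name;
    /* Returns true if the peer actually served a fresh copy of `url`.
     * In the real proxy this is an HTTP request over a persistent
     * connection, and a "false hit" is detected from the reply. */
    bool (*fetch_hit)(const char *url);
};

/* Try each digest-predicted peer in turn; fall back to the origin
 * server (as on a MISS) if every predicted hit turns out to be false. */
static void forward_request(const char *url, struct peer *peers, int npeers)
{
    for (int i = 0; i < npeers; i++) {
        if (peers[i].fetch_hit(url)) {
            printf("served %s from peer %s\n", url, peers[i].name);
            return;  /* real hit: the client never learns other peers failed */
        }
        /* false hit: quietly try the next candidate */
    }
    printf("all predicted hits were false; fetching %s from origin\n", url);
}

/* Stub peers for illustration only. */
static bool always_miss(const char *url) { (void)url; return false; }
static bool always_hit(const char *url)  { (void)url; return true; }

int main(void)
{
    struct peer peers[] = {
        { "peer-a.example", always_miss },  /* digest said HIT, object is gone */
        { "peer-b.example", always_hit },
    };
    forward_request("http://www.example.com/", peers, 2);
    return 0;
}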
Persistent HTTP queries are close to ICP queries in both latency and
network load. There is no big cost difference unless you have to query a
lot of servers, which means that using only HTTP is a win in most
situations, as the extra ICP query is eliminated.
> I see the problem with ICP as being its sheer volume, and because of this
> it does not scale awfully well; on one of our squids we have:
Yes. This is why digests are implemented ;-). Digests have a fixed
network load regardless of the traffic volume.
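Conceptually the digest is just a fixed-size bit array (Bloom-filter
style) sized for the cache, so the cost of shipping it to a peer depends
only on how big you make it, not on how many requests the mesh handles.
A minimal sketch of the idea follows; the sizes, hash function and names
are made up here, and the real digest format differs in the details:

/* Bloom-filter style digest sketch, not Squid's actual digest format.
 * The digest is a fixed-size bit array, so exchanging it costs the same
 * no matter how much traffic flows.  Lookups can give false positives
 * (false hits) but never false negatives for objects that were added. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define DIGEST_BITS (1 << 20)   /* fixed size: 128 KB on the wire */
#define NUM_HASHES  4

static uint8_t digest[DIGEST_BITS / 8];

/* Simple salted FNV-1a hash; illustrative only. */
static uint32_t hash_url(const char *url, uint32_t salt)
{
    uint32_t h = 2166136261u ^ salt;
    while (*url) {
        h ^= (uint8_t)*url++;
        h *= 16777619u;
    }
    return h % DIGEST_BITS;
}

static void digest_add(const char *url)
{
    for (uint32_t i = 0; i < NUM_HASHES; i++) {
        uint32_t bit = hash_url(url, i);
        digest[bit / 8] |= (uint8_t)(1u << (bit % 8));
    }
}

/* "Probably cached by the peer" -- may be a false hit, never a false miss. */
static bool digest_lookup(const char *url)
{
    for (uint32_t i = 0; i < NUM_HASHES; i++) {
        uint32_t bit = hash_url(url, i);
        if (!(digest[bit / 8] & (1u << (bit % 8))))
            return false;
    }
    return true;
}

int main(void)
{
    digest_add("http://www.example.com/index.html");
    printf("predicted hit: %d\n", digest_lookup("http://www.example.com/index.html"));
    printf("predicted hit: %d\n", digest_lookup("http://www.example.com/missing"));
    return 0;
}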
> We are approaching our work from the point of view that the LAN (same-ISP
> squid peering and Internet Exchange) is always fast with little or no
> cost, while WAN (peering between ISPs in other regions or between Internet
> Exchanges) is typically fast but not all that cheap to use!
Yet another reason to use digests for WAN peerings. A one-time
investment in some extra memory is most likely cheaper than WAN
traffic charges.
---
Henrik Nordström
Sparetime Squid Hacker