The Netscape way of load balancing would be a problem. However, the
only problem I can see with the "correct" way of doing things is
making sure that the host name Squid stores with the object really
is a host name, rather than an IP address. Of course, that would
require additional DNS lookups at various places.
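Roughly what I have in mind, as a sketch only (this is not the actual
Squid store code, and the helper name and sample URL are made up):
fold the case of the host part and key the object on host plus path,
so every rotated A record for the same name maps to one cache entry.

/*
 * Sketch: derive the cache key from the host name, not the IP.
 * make_cache_key() and the sample URL are hypothetical; real code
 * would hash this into the object store instead of printing it.
 */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

#define KEY_SZ 512

static void
make_cache_key(const char *host, const char *path, char *key, size_t sz)
{
    size_t n = 0;

    /* Host names are case-insensitive, so fold the host part;
     * leave the path alone (paths are case-sensitive). */
    while (*host != '\0' && n + 1 < sz)
        key[n++] = (char) tolower((unsigned char) *host++);
    key[n] = '\0';
    strncat(key, path, sz - n - 1);
}

int
main(void)
{
    char key[KEY_SZ];

    /* Whichever rotated address the resolver returns, the key is
     * the same, so the object is fetched and stored only once. */
    make_cache_key("FTP.Netscape.COM", "/pub/netscape.tar.Z", key, sizeof key);
    printf("cache key: %s\n", key);
    return 0;
}

The cost is what I said above: if a user hands us a URL with a raw IP
in it, mapping that back to a canonical host name means an extra
(reverse) lookup.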
Some time ago Oskar Pearson said:
> What about the following too:
>
> When multiple dns entries are returned, cache them all identically,
> my reasoning is as follows - when the new version of Netscape came out,
> they had a random load balancing system to keep their servers sane.
>
> BUT - they had about 20 servers, so, while one of our users downloaded
> netscape from ftp1, 19 other people were downloading it from the
> other servers... This problem can only get worse :(
>
> Fixing this is going to be hard to implement because:
> Netscape does its load balancing in a strange way: they set a very
> low TTL on their records, and then rotate their records on their
> name server. (ie - if you find out the address for ftp.netscape.com now,
> it may give you two records, but if you do it five minutes later, you
> will get a different address...)
>
> You could (I suppose) get it to cache all DNS requests for something like
> 24 hours :) (Evil grin ;) and only re-lookup if you issue a reload command.
>
> (Yes I know that there are all sorts of bad points about not following
> RFCs regarding DNS lookups etc, but please don't flame me!)
>
> On the other hand, something like www.microsoft.com uses a scheme
> that is much easier to cache:
> When you ask for www.microsoft.com, it returns 16 different IP addresses...
>
> It seems that squid would store the files with an algorithm based on
> the IP address?
[snip]
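For what it's worth, the multiple-record case Oskar describes is easy
to see for yourself. The little test program below is just ordinary
resolver calls, nothing Squid-specific (I'm using the getaddrinfo
interface, and the host name is simply the example from his mail). It
prints every A record returned by a single lookup; keying cached
objects on whichever one of those you happened to connect to is
exactly what fragments the cache.

#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int
main(void)
{
    struct addrinfo hints, *res, *p;
    char buf[INET_ADDRSTRLEN];

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;        /* A records only */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo("www.microsoft.com", "http", &hints, &res) != 0) {
        fprintf(stderr, "lookup failed\n");
        return 1;
    }

    /* One lookup, many addresses: all of these should map to the
     * same cached object if we key on the host name instead. */
    for (p = res; p != NULL; p = p->ai_next) {
        struct sockaddr_in *sin = (struct sockaddr_in *) p->ai_addr;
        inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof buf);
        printf("%s\n", buf);
    }
    freeaddrinfo(res);
    return 0;
}

The Netscape-style rotation is nastier because consecutive lookups
disagree with each other, but keying on the name handles that case
for free as well.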
--
Eric Wieling
Advanced Network Research
InterCommerce Corporation
Pager: 800-758-3680
The world needs no help seeing a fool for what they are.