On Thu, 6 Aug 1998, John Todd wrote:
> Date: Thu, 06 Aug 1998 11:14:58 -0400
> From: John Todd <jtodd@loligo.com>
> To: Peter Marelas <maral@phase-one.com.au>, John Cougar <cougar@telstra.net>
> Cc: Stephen Ollis <ollis@att.net.au>, squid-users@ircache.net
> Subject: Re: Redirection and Load Balancing
>
> At 09:15 PM 8/6/98 +1000, Peter Marelas wrote:
> >
> >On Thu, 6 Aug 1998, John Cougar wrote:
> >
> >> On Tue, 30 Jun 1998, Stephen Ollis wrote:
> >>
> >> > I'm just curious: what does everyone else use for Redirection
> >> > and Load Balancing on a farm of proxy servers?
>
> [snip stuff about Alteon]
>
> >> I've played around with a few different configurations with it, but
> >> haven't yet successfully got it to act as a single point Squid proxy.
> >> What I mean is: it'll load balance TCP/HTTP sessions across multiple
> >> servers fine, but I couldn't get it to forward ICP reliably, since the
> >> proxy ports must be configured along with the proxy IP address.
>
> [snip]
>
> >Given that the Alteon switch does not eliminate all single points of failure,
> >I wonder whether a PC running Unix that balances packets between systems
> >behind it would be a cheaper alternative (i.e. similar to Cisco's LocalDirector,
> >but not so pricey).
> >--
> >Regards
> >Peter Marelas
>
> I've got a system that more or less works at the moment and has many of
> the features that you describe. I was really trying to get it to market,
> but the slow progress of getting the programs written has discouraged me
> greatly (I'm not a programmer, so I've had to hire things out), and the
> project has stagnated for the last few months because other projects I'm
> working on have taken precedence.
>
> Anyway, it's a Linux box (32 MB RAM/P166) with two 100BaseT NICs that has been
> stripped down and is running modified bridging code. All packets on port
> 80 get intercepted and handed off to a very standard Squid (with the
> "rewrite header" patch). Squid is then configured in proxy-only mode.
> Requests for particular URLs are directed towards particular caches all the
> time, with backups configured for each list of URLs or ranges of IP
> addresses. There is no ICP involved. The redirector box is actually what
> the end users are "connecting" to, even though they don't know it. The
> redirector then contacts upstream caches via TCP and requests objects.
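
To make the URL-partitioning idea above concrete, here is a minimal Python
sketch of the mapping step: each destination-domain suffix owns an ordered
list of parent caches, primary first, then backups. The domain suffixes,
cache hostnames, and port numbers are hypothetical placeholders, not details
from the actual box.

from urllib.parse import urlparse

# Hypothetical partition table: domain suffix -> ordered parent caches.
PARTITIONS = {
    ".example.com": [("cache1.isp.example", 3128), ("cache3.isp.example", 3128)],
    ".example.org": [("cache2.isp.example", 3128), ("cache3.isp.example", 3128)],
}
DEFAULT_PARENTS = [("cache3.isp.example", 3128)]

def parents_for(url):
    """Return the ordered parent-cache list (primary first, then backups) for a URL."""
    host = urlparse(url).hostname or ""
    for suffix, parents in PARTITIONS.items():
        if host.endswith(suffix):
            return parents
    return DEFAULT_PARENTS

print(parents_for("http://www.example.com/index.html"))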
>
> The beauty is that this works with ANY cache, and handles failures of
> caches quite gracefully (though it does not understand load issues or
> latency - another "SMOP" that didn't get taken care of). Additionally, no
> client configuration is necessary, and NO configuration of caches or your
> local network is necessary. You give the box an IP address (for it to make
> outgoing connections) and stick it in the line of traffic - presto!
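
The "graceful failure handling" can be sketched the same way, assuming the
redirector simply probes each configured parent with a plain TCP connect
(consistent with "no ICP involved") and takes the first one that answers.
The hostnames and ports are again hypothetical, and there is no load or
latency awareness, matching the "SMOP" caveat above.

import socket

# Hypothetical parent caches: primary first, then backup.
PARENTS = [("cache1.isp.example", 3128), ("cache2.isp.example", 3128)]

def alive(host, port, timeout=2.0):
    """True if the cache accepts a plain TCP connection within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_parent(parents=PARENTS):
    """First reachable parent wins; None means all are down (go direct instead)."""
    for host, port in parents:
        if alive(host, port):
            return (host, port)
    return None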
>
> I'm using PC104+ (P200 with a 2-port SMC 100BaseT Ethernet card) so this all
> fits into a box the size of a desk phone. It's running a flash disk, so there
> are no spinning or mechanical parts. The whole system fits into about 4 MB of
> space, give or take, in its current form. Total cost to manufacture:
> ~$1500 in a crude box (a custom box would be better, but that's for later).
>
> What I had also wanted to do (and did design) was a board that did
> failover based on a watchdog timer. Since this system runs in bridge
> mode, one could (in theory) mechanically connect the two Ethernets and
> there would be no interruption of data. A set of 4PDT mercury-wetted
> relays, linked to the watchdog, would handle this process. In the event
> of a hardware failure, the system would instantly go to "passthrough" mode.
> You'd lose current connections, but all future links would go through
> unmolested. There still is the "single point of failure" problem, but this
> may be acceptable to smaller ISPs who have lots of those, anyway. This
> failover board would increase the price by $200 or so.
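
On the software side, the watchdog-based failover could look something like
the Python sketch below: feed the timer only while a local health check
passes, so a wedged box or dead Squid lets the timer expire and the relay
board drops the bridge into passthrough. The /dev/watchdog device and the
local-Squid probe are assumptions for illustration; the relay hookup itself
is the hardware piece described above.

import socket
import time

def squid_healthy(port=3128, timeout=2.0):
    """Placeholder health test: can we open a TCP connection to the local Squid?"""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=timeout):
            return True
    except OSError:
        return False

def feed_watchdog(device="/dev/watchdog", interval=10):
    """Feed the watchdog only while the box looks healthy; stop feeding on failure."""
    with open(device, "wb", buffering=0) as wd:
        while squid_healthy():
            wd.write(b"\0")   # any write resets the hardware timer
            time.sleep(interval)
    # once writes stop, the timer expires and the failover hardware takes over

if __name__ == "__main__":
    feed_watchdog()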
>
> Before you start barraging me with "It won't work because..." problems,
> remember that this is a leaf-node box and not a core redirector. My
> planned throughput for total traffic is a T3, and of that, only about
> 30 Mbps would be web. This is for end-user dial or leased lines - it's not
> an accelerator for web farms. I'm not proposing that this is the ultimate
> solution - it's a stopgap, and a cheap one, at that.
>
> Preliminary tests have shown that the bridge code on the Linux box can
> handle ~70 Mbps of raw traffic and at least 7 Mbps of HTTP traffic from
> clients (aggregate to/from) - probably much more, but 7 Mbps was the limit
> of the testbed at the time (10 Mbps Ethernet starts crapping out past that
> point).
>
> Anyway, the hope of this post is that someone else will have lightbulbs
> go off over their heads. I'm still convinced this is a good idea, but I
> also know that it's going to be some time before I get around to this
> project again and it's needed Right Now. Anyone interested in doing some
> programming? :)
>
In some ways it's already been done, but as a commercial product:
IBM's eNetwork Dispatcher.
See http://www.software.ibm.com/enetwork/dispatcher/
If I were to implement a clustered solution today, this is what
I would use, because:
a) It has high availability built in, so the dispatcher box does not itself
become a single point of failure for your cluster members.
b) The load-balancing constraints used to choose the right cluster member
can be tuned to your liking.
c) The NICs of all cluster members are utilised when transmitting data
back to the clients.
d) It's not limited to LANs, in that clusters can span WANs.
To read about how it works, look at
ftp://ftp.software.ibm.com/software/enetwork/dispatcher/whitepapers/end20wp.pdf
As you say, I think there is big demand for a similar product in the
freeware arena. I'm willing to work on it if others are, but I would
certainly look at following eNetwork Dispatcher's techniques.
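
Just to illustrate point (b), and not as eNetwork Dispatcher's actual
algorithm (the whitepaper above describes that), a freeware dispatcher's
member selection could start as simply as a tunable weighted score per
cluster member. The member names, metrics, and weights below are made up.

from dataclasses import dataclass

@dataclass
class Member:
    name: str
    active_conns: int     # current TCP sessions
    response_ms: float    # recent average response time

# Tunable weights: how much each metric counts toward a member's score.
WEIGHTS = {"active_conns": 1.0, "response_ms": 0.1}

def score(m: Member) -> float:
    return (WEIGHTS["active_conns"] * m.active_conns
            + WEIGHTS["response_ms"] * m.response_ms)

def choose(members):
    """Pick the member with the lowest weighted score."""
    return min(members, key=score)

cluster = [Member("cache1", 42, 180.0), Member("cache2", 17, 390.0)]
print(choose(cluster).name)   # -> cache2 (17*1.0 + 390*0.1 = 56 vs 42 + 18 = 60)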
--
Regards
Peter Marelas