Agreed..in principle. However..
Here's our situation. As an ISP, we started with a single 14k4 modem (at
that time, it cost us many thousands of dollars, and was acceptable, even
when shared among a large number of users. Of course, we didn't have the
WWW then, either). Then we ramped up to a 64K link, then 128K, now 256K.
At the same time 'infrastructure' bandwidth has been on the increase as
well. From a measly 2Mbps, now to a lofty 6Mbps across the Pacific, and as
much as 8Mbps between cities.
What we're now seeing (esp. with caching proxies, which we use
_extensively_) is that each increase in bandwidth makes it more likely
that a fast-delivering document (i.e. from a fast site, or served from the
upstream cache on the other side of our link) will flood the link,
rendering other connections (non-proxied ftp, telnet, and even other http
connections) well..the word 'shitty' comes to mind (It's okay, kids, I'm an
Australian, and I used it in a complete sentence).
Bearing that in mind, I wouldn't mind seeing a 'virtual partitioning' of
Squid's bandwidth:
Squid is told something like 'max_bandwidth 44Kbps' and makes sure that it
does not read more than 44 kilobits from document fetches in any one second
(neighbours don't count towards this number; they're presumably free, being
on the inside of the link we're trying to protect, we hope). Delivery of
documents is unaffected (let the clients get their own limiters, for gosh
sakes). Obviously the specified limit will be exceeded as TCP queues fill
up, but after that point it should hover roughly around the specified
number.
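To make that concrete, here's roughly the sort of limiter I have in mind.
This is a sketch only (in C, since that's what Squid is written in) - it is
NOT Squid code, the names like limited_read and the single global budget
refilled once a second are just me thinking out loud, and the 44Kbps figure
is just our number:

#include <errno.h>
#include <stddef.h>
#include <sys/time.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>

static long max_bandwidth = 44 * 1024 / 8;  /* 'max_bandwidth 44Kbps', held as bytes/second */
static long budget;                         /* bytes still allowed this second */
static time_t budget_second;                /* the second the budget belongs to */

/* Top the budget back up whenever we cross into a new second. */
static void refill_budget(void)
{
    struct timeval now;
    gettimeofday(&now, NULL);
    if (now.tv_sec != budget_second) {
        budget_second = now.tv_sec;
        budget = max_bandwidth;
    }
}

/*
 * Used in place of read() on document-fetch sockets only (neighbour and
 * client-side sockets keep calling read() as normal, since they're inside
 * the link we're protecting).  We never ask the kernel for more than the
 * remaining budget, so over any one second we pull at most max_bandwidth
 * bytes off the link.  TCP will still have queued a bit extra, as noted
 * above, but it settles down.
 */
ssize_t limited_read(int fd, void *buf, size_t len)
{
    ssize_t n;

    refill_budget();
    if (budget <= 0) {
        errno = EWOULDBLOCK;    /* budget spent; caller retries on a later pass */
        return -1;
    }
    if ((size_t) budget < len)
        len = (size_t) budget;
    n = read(fd, buf, len);
    if (n > 0)
        budget -= n;
    return n;
}

The real thing would also want the select() loop to stop watching fetch
sockets once the budget is spent (otherwise it just spins), but that's
detail.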
Not an ideal solution, but it wouldn't take much fiddling to get a number
that serves well enough. Or you can measure with a cron job...or maybe even
reconfigure on the fly. Up to you, I guess...but without _something_ of
this order, every improvement in 'visible' bandwidth (whether it be actual
link widening, improvements in protocols, or caching technologies) actually
impacts more and more heavily on the response times of other connections
for the duration of a request.
Squid might not be the ideal vehicle to test-drive this sort of thing, but
it's something to think about. Our current protocol family is doing the
best it can, but I don't think anyone ever envisioned the sort of weird
shit we do with it today. I have high hopes that HTTP/1.1 and IPv6 will
help, but I won't bet any body parts just yet.
(Cultural references: Yes, Australians really do swear. A lot. Over the
last two hundred years or so, casual verbal profanity has wormed its way
into our culture. I honestly can't think of any words at all that the
'average' Australian would find offensive, or even notice in casual
conversation. Yes, there are always more extreme people (a bell curve has
two ends, naturally) who find relatively inoffensive things shocking. I
expect everyone knows a few of them. And, yes...our head of state swore at
the Queen. Quite colourfully, and (I am told) strongly. And it was _at_
her, as well as _to_ her, though he didn't even notice until someone
pointed it out to him afterwards. Nor, I understand, did she take offence.
It isn't the first time we've done it to them, nor will it be the last.
Essentially, what I'm trying to say is: it's nearly 3am. I slip back into
my natural, comfortable modes of communication at this sort of hour, and as
such, I don't necessarily recheck everything I type to see whether I might
be offending someone of any country/religion/gender/species/other [your
category here]. My last such casual mailing list post (no stronger than
this one) earned me a set of asbestos underwear for the - rather stronger -
flame I got in response. Sorry, guys. At this hour, brain cells are at a
premium and the censors are in bed.)
----------
> From: Jonathan Larmour <JLarmour@origin-at.co.uk>
> To: Daniel O'Callaghan <danny@panda.hilink.com.au>; Ross Wheeler <rossw@home.albury.net.au>
> Cc: Francis Vidal <francis@linux1.usls.edu>; squid-users@nlanr.net
> Subject: Re: What's the best configuration for this setup?
> Date: Saturday, November 23, 1996 12:50 AM
>
> At 23:11 22/11/96 +1100, Daniel O'Callaghan wrote:
> >This was discussed on freebsd-hackers not so long ago. It is possible to
> >bandwidth limit connections by a simple
> >
> >while () {
> > read();
> > sleep();
> >}
>
> But won't TCP still fill up its receive queue as fast as it can? I suppose
> you could _deliberately_ only read a certain number of bytes rather than
> everything available to keep the queue nearly full.
>
> But anyway determining how much to wait for is very awkward indeed as you
> have to know the physical bandwidth of the connection, which will change all
> the time as use is shared. It would also probably lead to oscillating around
> the correct value and for most people it is not acceptable for it to be
> slower than it could be - any bandwidth determining step would need many Kb
> before coming even close to an estimate.
>
> Just my 2p
>
> Jonathan L.
> Origin IT Services Ltd., 323 Cambridge Science Park, Cambridge, England.
> Tel: +44 (1223) 423355 Fax: +44 (1223) 420724 E-mail: guess...
> -------[ Do not think that every sad-eyed woman has loved and lost... ]------
> -----------------------[ she may have got him. -Anon ]-----------------------
> These opinions are all my own fault.