Ross Wheeler wrote:
>
> On Thu, 28 Jan 1999, Gideon Glass wrote:
>
> > Folks,
> >
> > Some of our Cobalt CacheRaQ customers running various Squid 2 versions
> > are seeing some strange bandwidth spikes. I'm wondering if anyone
> > else has seen these problems and if there's a solution (or, if not,
> > where to start hacking for a solution).
>
> [snip]
>
> > During a spike, what seems to be happening is that Squid downloads one
> > or more large HTTP objects very rapidly. In one ~2 second interval,
> > tcpdump showed ~380KB on one TCP connection from a remote server to
> > squid. In the same interval, a single dial-in client got ~15KB on its
> > connection from the cache.
>
> I raised this about 18 months ago, and to the best of my knowledge, there
> has been little if any work done on it. In my part of the world, bandwidth
> is *much* more expensive than in many other places, and massive pipes are
> simply not viable. Thus, when a dial-up customer at, say, 28K8, requests a
> large file through squid, and especially if that file is at a well-connected
> site (say, in the proxy at an upstream provider), squid will
> suck the file down ASAP. This has many detrimental side effects, including
> general degradation of the link to all other (local) network users.
>
> I put forward the idea that squid should throttle its fetches, sucking
> data at only a marginally higher rate than the client is taking it.
> Someone with a 14K4 modem pulling a 30Mb file does NOT NEED squid to
> pull that 30Mb file at maximum rate and kill your link; it simply needs
> to pull at something a little over 14K4. If someone else comes along and
> starts pulling the same file, suck harder to keep up with them (if
> possible). Perhaps, rather than trying to match bandwidth, squid could
> simply "read ahead" by some arbitrary amount - enough to keep the client
> fed flat out, whilst being "nice" to the available bandwidth.
>
> I have not looked into the code to see how difficult this would be, but
> said quickly, it seems pretty straightforward: rather than fetching as
> fast as possible, fetch only when the read-ahead buffer is under the
> threshold value.
Squid already does this (sort of). It reads ahead by some amount (I
don't have the exact figure in my head), then throttles back to
approximately the client's read speed. (Dig back through the list
archive and look for 'deferred reads'.)
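
For illustration only, here's a rough sketch of the general idea. This
is NOT the actual Squid source -- the struct, the names, and the 16KB
figure are all made up:

#include <stddef.h>

#define READ_AHEAD_GAP (16 * 1024)   /* hypothetical threshold */

struct fetch_state {
    size_t server_offset;   /* bytes read from the origin server */
    size_t client_offset;   /* bytes delivered to the client */
};

/* Return nonzero when the server-side fetch has run far enough
 * ahead of the client that further reads should be deferred,
 * i.e. the server FD is left out of the select() read set until
 * the client catches up. */
static int
defer_server_read(const struct fetch_state *fs)
{
    return (fs->server_offset - fs->client_offset) > READ_AHEAD_GAP;
}

When defer_server_read() comes back true, squid just stops asking to
read from the server socket; the TCP window then fills and the remote
sender backs off, which is what keeps the link from being flattened.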
D