The premise of the question was a single saturated link feeding the
squid cache, with the DNS requests/replies being dropped due to queue
overruns (or RED, or whatever.. congestion drops). One would think the
drops would be uniform, not URL-dependent, since the congestion is
local. The suggestion I was responding to was that a solution to this
problem was essentially reducing the timeout and just asking the DNS
question more often, which doesn't make any sense at all: you've got a
single overburdened link, and that suggestion just adds more work to
it. What you need to do is either put the DNS under congestion control
too, or move DNS onto some kind of reserved or diffserved network so
that this doesn't happen to it.

So it's not related to parts of the DNS tree being congested; it's
really a local issue.
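The "reserved or diffserved network" idea could be sketched on a Linux gateway as marking outbound DNS with a DSCP codepoint and giving that class a small guaranteed bandwidth share, so DNS survives even when the link is saturated. This is only an illustrative sketch, not anything from the discussion above: the interface name (eth0), the link rate, and the class/rate numbers are all assumptions you would replace with your own.

```shell
# Mark outgoing DNS (UDP port 53) with the EF DiffServ class.
iptables -t mangle -A OUTPUT -p udp --dport 53 -j DSCP --set-dscp-class EF

# HTB root qdisc on the (assumed) upstream interface eth0:
# a small reserved class for DNS, a default class for everything else.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1:  classid 1:1  htb rate 1mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 64kbit  ceil 1mbit prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 936kbit ceil 1mbit prio 1

# Steer EF-marked packets (DSCP 46 = 0xb8 in the TOS byte, DSCP bits
# masked with 0xfc) into the reserved DNS class.
tc filter add dev eth0 parent 1: protocol ip prio 1 \
    u32 match ip tos 0xb8 0xfc flowid 1:10
```

With this in place, congestion drops on the bulk class no longer starve DNS, which addresses the local-queue problem without generating any extra queries.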
-P
In a previous episode Henrik Nordstrom said...
::
:: Patrick McManus wrote:
::
:: > it appears as though you're recommending injecting traffic faster into an
:: > already congested network..
::
:: Not really.
::
:: The problem is that if some often-used part of the DNS tree is congested,
:: then Squid's dnsserver processes will also be congested, and to your Squid
:: users the whole internet seems randomly congested when visiting new
:: addresses, even those parts where there is no DNS congestion.
::
:: Squid is smart enough not to have more than one outstanding DNS query
:: per host name at a time (600-second timeout), so it does not really
:: inject a noticeable number of requests into the congested part. It
:: only allows a greater margin, to be able to continue processing if
:: some parts of the internet get congested.
::
:: If the congestion is at the local DNS server then you have problems.
::
:: --
:: Henrik Nordstrom
:: Spare time Squid hacker
::
Received on Mon Jun 28 1999 - 16:04:32 MDT