Hiya,
> This is not all that easy. For every open TCP session, it takes many
> reads/writes to fulfill even the smallest requests. For the NOVM type of
> squid, there are at least 4 open files for each open client session.
> If we do not service at least 4 reads/writes for each new accept, we
> tend to run out of files and slow down service times for already
> pending sessions. In fact, this is the only protection against DoS in
> squid, I guess. This "poll incoming every few R/W" was introduced to
> avoid TCP queues blowing out because they are not emptied fast enough,
> causing dropped TCP packets and delays.
Your explanation above assumes that your external bandwidth cannot
support what your cache can accept and pump through it.
For us, our external bandwidth is capable, therefore we can drive
exceptional amounts of traffic through our caches and NOT see TCP queues
or FD limits blowing out.
What we ultimately need is a configurable option. Higher-speed
external bandwidth makes an enormous difference to the kind of
constraints a cache will run into. A slow external link usually means FD
queues start to blow out, so it's beneficial to slow down service before
things get out of hand. For us, we're pumping through 70+ TCP hits/sec,
including 40 TCP hits/sec going external, and still only using 1500 FDs.
In the US, you'd probably only see 400 FDs in use for the same traffic
volume. (We're in Australia.)
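
To make concrete what I mean by "configurable" (a rough sketch only,
not actual squid source -- all the names here are made up for
illustration):

    /* Hypothetical: poll the incoming (HTTP accept + ICP) sockets once
     * every incoming_poll_ratio ordinary FDs serviced.  High external
     * bandwidth -> raise the ratio; slow external -> lower it so the
     * incoming sockets get drained before FD queues blow out. */

    extern void service_one_fd(int fd);     /* do the pending read/write */
    extern int comm_poll_incoming(void);    /* drain ICP + accept sockets */

    static int incoming_poll_ratio = 4;     /* the configurable knob */
    static int fds_since_incoming = 0;

    void comm_service_fd(int fd)
    {
        service_one_fd(fd);
        if (++fds_since_incoming >= incoming_poll_ratio) {
            (void) comm_poll_incoming();
            fds_since_incoming = 0;
        }
    }
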
> In general, this is a matter of preference. For some, servicing ready
> FDs of open sessions is a much higher priority than answering ICP for
> peers; for some it might be the other way. But if, under high loads,
> ICP response times get high, this is a good indicator that the remote
> peer is (over)loaded, and the peer selection algorithm may select
> another, less loaded peer. By polling the ICP socket "on every corner"
> you get perfect ICP response times, but in a way you "lie" to a remote
> peer that you are almost idling, while when it comes to fetch an object
> over TCP it might encounter awful service times from an overloaded squid.
I'm not after perfect response times. I'm happy with ICP
slowing down a tad. What I'm not happy with is when the default is such
that the caches ARE practically idling due to circumstances, AND ICP times
are blowing out. If it's a matter of polling a touch faster to get the
caches responding roughly in line with what they can handle, I'm going to
do it.
> I recall that the initial patch for this took into account the number
> of ready incoming sockets last seen: if it was > 0 it set the
> repoll frequency to every 2 ordinary FDs, and if it was 0, to every
> 8. Not perfect, but more flexible.
True. I accept that is a decent approach. I opted for my patch
because our caches pretty much run at high load ALL the time, so my
patch didn't bother to account for different load patterns. The method
you describe was not present in the code, but it is a better all-round
solution.
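
For the record, the adaptive scheme you describe would look roughly
like this (again only a sketch under the same made-up names, not the
original patch):

    /* If the last poll of the incoming sockets found work, re-poll
     * after every 2 ordinary FDs; if it found nothing, back off to
     * every 8. */

    extern void service_one_fd(int fd);
    extern int comm_poll_incoming(void);    /* returns # of ready incoming sockets */

    static int incoming_interval = 8;
    static int fds_since_incoming = 0;

    void comm_service_fd(int fd)
    {
        service_one_fd(fd);
        if (++fds_since_incoming >= incoming_interval) {
            incoming_interval = (comm_poll_incoming() > 0) ? 2 : 8;
            fds_since_incoming = 0;
        }
    }
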
> Perhaps there should be some self-tuning variable that changes according
> to the load pattern, and may be affected by a configurable preference, but
> I personally really don't like the idea of giving incoming sockets
> almost infinite preference over those FDs not yet serviced.
Me either. But I do want to give them enough preference such
that ICP runs fast enough to reflect the general cache performance.
> BTW, you didn't mention what the very slow ICP responses were in real
> time measures. If more than 1 sec, then rough math shows that it takes
> 1000ms/16 = 62.5ms to service any read/write call on average, which
> IMHO points to much more serious problems somewhere else.
We were seeing up to 2.5 seconds. Solaris has the ability to set
a fairly large UDP receive buffer. Ours was set to 256K. That allows
about 600 ICP requests to queue up, or at 250 ICP requests/sec, about
2.5 seconds' worth, given squid not sucking up UDP fast enough. That's
where the problem was: squid was NOT processing the incoming UDP queue
quickly enough, and ICP packets which didn't fit in the queue were
simply being dropped.
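
Spelling out the arithmetic (the per-datagram figure is just what those
numbers imply, packet plus kernel overhead):

    262144 bytes of buffer / ~437 bytes per datagram ~= 600 datagrams queued
    600 queued datagrams / 250 requests per sec      ~= 2.4 sec worst-case lag
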
Stew.
--
Stewart Forster (Snr. Development Engineer)
connect.com.au pty ltd, Level 9, 114 Albert Rd, Sth Melbourne, VIC 3205, Aust.
Email: slf@connect.com.au  Phone: +61 3 9251-3684  Fax: +61 3 9251-3666