Hi,
We are having problems with our Squid servers during traffic peaks. In the
past we saw various errors such as "Your cache is running out of
filedescriptors", syncookies errors, etc., but we have since tuned for those
and no longer see them.
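For reference, the usual knobs for those two errors look roughly like this
(an illustrative sketch, not necessarily our exact production settings):

    # squid.conf: raise Squid's own descriptor limit (needs a matching
    # "ulimit -n" / limits.conf setting for the user Squid runs as)
    max_filedescriptors 65536

    # sysctl (/etc/sysctl.conf): SYN-flood and accept-backlog tuning
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_syn_backlog = 8192
    net.core.somaxconn = 8192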
The problem is that, during big traffic peaks, all of the servers fail to
deliver content (HTML pages, JS and CSS files, both gzipped and non-gzipped,
as well as images) and we do not see any error at all. The servers vary in
resources and sit in two different datacenters, all running Squid as a
reverse proxy caching content from several webservers in other datacenters.
The more connections/requests there are, the higher the percentage of
clients that fail to get the content.

So we are trying to find out where the bottleneck is. Is Squid unable to
deal with more than X connections per second, or is there some other
bottleneck? Things seem to start failing at around 20,000 connections to
each server.
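If it helps, this is roughly how one server could be sampled during a peak
(a sketch; it assumes Squid listens on port 80 and that cache-manager
access is permitted from localhost):

    # Count established connections to the Squid listening port
    ss -nt state established '( sport = :80 )' | wc -l

    # Ask Squid itself about descriptor usage and request load
    squidclient -h 127.0.0.1 -p 80 mgr:info | \
        grep -E 'file descriptors|Number of clients|HTTP requests'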
Thank you in advance