Hello all,
I'm running several Squid boxes as reverse proxies. The problem I'm seeing is
that at a high number of connections (in the region of 80,000 per Squid at
peak) I get thousands of TCP_MISS entries for the same URL hitting the
back-end servers; things do eventually sort themselves out. Is there any way
to prevent this behaviour? I assumed that with 'collapsed_forwarding on' Squid
would only send a single request to the backend for new content. Is that not
the case?
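For reference, here is my (possibly mistaken) understanding of that directive,
as a minimal sketch with my own comments added:

# Merge concurrent requests for the same URI: while the first miss is being
# fetched from the origin, later requests for the same URI should be held
# and answered from that single backend response, rather than each one
# producing its own TCP_MISS against the backend.
collapsed_forwarding on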
Other than the odd instance where this problem occurs, the servers run at a
90-99% hit ratio and have been load tested up to 922 Mbps at 68,000
connections. System resources are readily available: Squid only ever uses
about 30% CPU at peak and over 4.5 GB of RAM remains free. The systems also
have 100,000 file descriptors available.
Any help or tips would be much appreciated.
Thanks,
Tookers
Squid Conf:-
http_port x.x.x.x:80 act-as-origin vhost defaultsite=x.x.com:80
httpd_suppress_version_string on
icp_port 0
dns_nameservers 127.0.0.1
dns_testnames x.x.com
visible_hostname squid1
cache_replacement_policy lru
memory_replacement_policy lru
cache_dir aufs /var/squid/var/cache 4096 16 256
cache_mem 512 MB
accept_filter httpready
memory_pools on
memory_pools_limit 0
maximum_object_size_in_memory 2048 KB
collapsed_forwarding on
negative_ttl 5 minutes
server_persistent_connections on
client_persistent_connections on
half_closed_clients off
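
One thing I've been wondering about trying, which is NOT in the config above,
is a freshness rule so that popular objects don't all expire and revalidate
against the backend at the same moment. The pattern and timings below are
made-up illustration values, and I'm assuming max_stale is available in this
build:

# Hypothetical: objects with no explicit expiry stay fresh for at least 5
# minutes and at most 1 day (refresh_pattern times are in minutes).
refresh_pattern . 5 50% 1440
# Hypothetical: allow serving a stale object for up to an hour if
# revalidation against the backend fails.
max_stale 1 hour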