On what OS?
Also, what is the output of ulimit -Ha and ulimit -Sa?
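
For example, run them as the user your squid process runs as (the "squid"
user name below is an assumption; substitute your own):

    sudo -u squid bash -c 'ulimit -Ha'   # hard (maximum) limits
    sudo -u squid bash -c 'ulimit -Sa'   # soft (current) limits
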
Eliezer
On 6/11/2013 6:32 PM, Mike Mitchell wrote:
> I dropped the cache size to 150 GB instead of 300 GB. Cached object count dropped
> from ~7 million to ~3.5 million. After a week I saw one occurrence of the same problem.
> CPU usage climbed steadily over 4 hours from <10% to 100%, then squid became
> unresponsive for 20 minutes. After that it picked up as if nothing had happened -- no
> error messages in any logs, no restarts, no core dumps.
>
> I'm now testing again using version 3.3.5-20130607-r12573 instead of 3.2.11-20130524-r11822.
> I've left everything else the same, with the cache size still at 150 GB.
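>
> For reference, the cache_dir line would look something like this (the
> aufs store type, directory path, and L1/L2 values are placeholders
> rather than the exact production settings):
>
>     # 150 GB = 153600 MB of on-disk cache
>     cache_dir aufs /var/spool/squid 153600 16 256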
>
> Mike Mitchell
>
> On 30/05/2013 08:43:24 -0700, Ron Wheeler wrote:
>
>> Some ideas here.
>> http://www.freeproxies.org/blog/2007/10/03/squid-cache-disk-io-performance-enhancements/
>> http://www.gcsdstaff.org/roodhouse/?p=2784
>>
>>
>> You might try dropping your disk cache to 50 GB and see what happens.
>>
>> I am not sure that caching 7 million pages gives you much of an advantage over caching 1 million. The 1,000,001st most popular page probably does not come up that often, and by the time you get down to the page ranked 7,000,000 in the list of most-accessed pages, you are seeing almost no demand for it.
>>
>> Most of the cache is probably only ever accessed once.
>>
>> Your cache_mem looks low. That is not related to your problem, but raising it would improve performance a lot: getting a few thousand of the most active pages into memory is worth far more than 6 million of the least active pages sitting on disk.
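>>
>> Something along these lines in squid.conf would be a starting point
>> (the 2048 MB figure is only an illustration; size it to the RAM you
>> can spare):
>>
>>     # keep the hottest objects in RAM instead of on disk
>>     cache_mem 2048 MB
>>     maximum_object_size_in_memory 512 KB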
>>
>>
>> I am not a big squid expert, but I have run squid for a long time.
>>
>> Ron
>