Henrik Nordstrom wrote:
> 
> 
> There can be a number of reasons for this:
> 
> a) Your Squid is really running out of file descriptors. This will 
> happen if you throw a couple of thousand users at a Squid cache without 
> tuning the number of file descriptors (see the FAQ).
> 
> b) Your Squid is overloaded, or most likely your cache_dir is. If you use 
> the default "ufs" cache_dir type then speed is very much limited by 
> the speed of your hard drive, and when that limit is reached performance 
> quickly spirals down. Use the "aufs" or "diskd" cache_dir types, and design 
> the hardware correctly for the load you are planning.
> 
> Regards
> Henrik
> 
> 
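(For point b above, a minimal cache_dir sketch, assuming an aufs-enabled build; the path, cache size, and L1/L2 directory counts are only illustrative:

  # format: cache_dir aufs <directory> <size-MB> <L1-dirs> <L2-dirs>
  # aufs hands disk I/O to a pool of threads, so a slow disk does not
  # stall the rest of Squid the way the default blocking "ufs" type does
  cache_dir aufs /var/spool/squid 2048 16 256
)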
I have read the FAQ.
[root@cache1 squid]# ulimit -n
1024
[root@cache1 root]# cat /proc/sys/fs/file-max
  104854
[root@cache1 root]# ulimit -HSn 100000
[root@cache1 root]# ulimit -n
100000
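Note that ulimit applies only to the current shell and to processes started from it, so the squid launched by the init script never sees this raised limit unless the script raises it as well. A sketch, assuming a stock /etc/init.d/squid script (the value 8192 is illustrative):

  # near the top of /etc/init.d/squid, before squid is started:
  ulimit -HSn 8192    # raise the daemon's hard and soft fd limits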
Then I edited /etc/squid/squid.conf and changed max_open_disk_fds:
  max_open_disk_fds 100000
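As far as I understand, max_open_disk_fds only caps how many of the descriptors Squid already has may be spent on disk I/O; it cannot raise the per-process limit reported in cache.log. The default of 0 means no cap:

  # 0 = no limit on disk fds; a positive value reserves the remainder
  # for client sockets, but never raises the compiled-in maximum
  max_open_disk_fds 0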
[root@cache1 root]# /etc/init.d/squid restart
[root@cache1 root]# cat /var/log/squid/cache.log |grep descriptors
  2004/10/14 16:00:18| With 1024 file descriptors
I understand that Squid needs to be recompiled. Could someone help me 
with the configure parameters for recompiling it? I'm trying to provide a 
transparent cache for a lot of users.
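A sketch of the usual rebuild, assuming a 2.5-era source tree unpacked under /usr/local/src (the version, paths, and configure options are illustrative). Squid sizes its file descriptor table at configure time, so the limit has to be raised in the same shell that runs ./configure:

  ulimit -HSn 8192                  # raise the limit in the build shell first
  cd /usr/local/src/squid-2.5.STABLE6
  ./configure --prefix=/usr --sysconfdir=/etc/squid \
              --enable-storeio=aufs,diskd,ufs
  make && make install

After rebuilding, cache.log should report the higher number instead of 1024, provided the runtime limit is also raised in the init script as noted above.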
In the meantime, I reduced the number of users connected to Squid, and the 
"WARNING! Your cache is running out of filedescriptors" message hasn't 
appeared anymore... but I still have users who don't get a response from 
the transparent cache...
Thanks,
Alejandro