Pekka.Jarvelainen@voxopm.minedu.fi writes:
>
>I started running squid-1.1.beta13 yesterday with a clean cache,
>and it worked fine yesterday:
>% zcat access.log.0.gz | nawk -f access-times.awk
>        local cached  remote cached  remote proxied  no proxy,cache  other
>Number:       114132              0               0          253682  28270
>              28.8%            0.0%            0.0%           64.0%   7.1%
>Time:           0.8s            0.0s            0.0s            4.3s  16.3s
>
>But today squid uses all of the CPU (233 MHz Alpha) and is not nearly as fast:
>(these are not a full day's stats)
>        local cached  remote cached  remote proxied  no proxy,cache  other
>Number:        65966              0               0          114561  12878
>              34.1%            0.0%            0.0%           59.2%   6.7%
>Time:           2.0s            0.0s            0.0s            5.7s  16.2s
>
>
>There is enough memory (256 MB): squid uses 100 MB.
>
>What should I do? Do the sources have some hashtable settings that are too small,
>or what does store_objects_per_bucket mean in squid.conf?
Squid estimates how many store buckets to use based on your 'cache_swap'
setting. It assumes that the average object size is 20k. If you
decrease 'store_objects_per_bucket', you increase the total number
of buckets, and therefore the rate at which buckets are "cleaned out."
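For what it's worth, the relationship works out roughly like this. This is
only a quick C sketch of the arithmetic described above, not the actual
squid source; the variable names and the example values are just
illustrations, and the 20k average object size is the assumption mentioned
above:

#include <stdio.h>

/* Rough sketch of the bucket estimate described above -- NOT the
 * actual Squid code.  Assumptions: cache_swap is in megabytes (as
 * in squid.conf) and the average object size is 20 KB. */
int main(void)
{
    long cache_swap_mb      = 100;  /* cache_swap from squid.conf (example value) */
    long avg_object_kb      = 20;   /* assumed average object size */
    long objects_per_bucket = 20;   /* store_objects_per_bucket (example value) */

    long est_objects = (cache_swap_mb * 1024L) / avg_object_kb;
    long est_buckets = est_objects / objects_per_bucket;

    printf("estimated objects: %ld\n", est_objects);
    printf("estimated buckets: %ld\n", est_buckets);

    /* Lowering store_objects_per_bucket raises est_buckets, which (per
     * the explanation above) raises the rate at which buckets are
     * cleaned out. */
    return 0;
}
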
But I don't think that is your problem. You may want to try building
squid with GNU libmalloc. Brian Denehy <B-Denehy@adfa.oz.au> reported
a significant performance improvement on DEC Alpha when using GNU malloc.
Duane W.
Received on Wed Nov 06 1996 - 08:34:57 MST