I have been testing the cache running strictly from memory, with no disk
filesystem.
When Squid is set to use 16 GB of memory, connections are very fast.
When I bump memory use to anything near 30 GB, latency becomes an issue.
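For reference, the memory-only setup was along these lines (a minimal
sketch, not my exact config; "cache_dir null" assumes Squid was built with
the null store module enabled):

    # keep the entire cache in RAM
    cache_mem 16384 MB
    # null store type: satisfies cache_dir without swapping objects to disk
    cache_dir null /tmp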
The cache doesn't report any errors in cache.log, and watching access.log
I don't see anything out of the ordinary.
I have never seen cachemgr report a non-zero "Store Disk files open" value.
Again, this is a dedicated Squid server with 32 GB of RAM and four
dedicated 280+ GB 15k SATA drives, running transparently via WCCP.
Presently the four drives are each partitioned in half, allocated to
squid0 through squid7.
Any recommendations on repartitioning the four drives for quicker reads?
Here's a general runtime report, FWIW:
Squid Object Cache: Version 2.7.STABLE4-20080926
Start Time: Tue, 07 Oct 2008 18:21:10 GMT
Current Time: Wed, 15 Oct 2008 15:51:20 GMT
Connection information for squid:
Number of clients accessing cache: 5393
Number of HTTP requests received: 220293340
Number of ICP messages received: 0
Number of ICP messages sent: 0
Number of queued ICP replies: 0
Request failure ratio: 0.00
Average HTTP requests per minute since start: 19374.7
Average ICP messages per minute since start: 0.0
Select loop called: -454862289 times, -1.500 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 32.8%, 60min: 32.3%
Byte Hit Ratios: 5min: 14.1%, 60min: 15.9%
Request Memory Hit Ratios: 5min: 56.9%, 60min: 56.5%
Request Disk Hit Ratios: 5min: 0.1%, 60min: 0.1%
Storage Swap size: 0 KB
Storage Mem size: 16777692 KB
Mean Object Size: 0.00 KB
Requests given to unlinkd: 0
Median Service Times (seconds) 5 min 60 min:
HTTP Requests (All): 0.06640 0.06286
Cache Misses: 0.10281 0.09736
Cache Hits: 0.00000 0.00000
Near Hits: 0.05046 0.05046
Not-Modified Replies: 0.00000 0.00000
DNS Lookups: 0.04854 0.02941
ICP Queries: 0.00000 0.00000
Resource usage for squid:
UP Time: 682209.558 seconds
CPU Time: 133086.580 seconds
CPU Usage: 19.51%
CPU Usage, 5 minute avg: 15.85%
CPU Usage, 60 minute avg: 15.31%
Process Data Segment Size via sbrk(): 18876028 KB
Maximum Resident Size: 0 KB
Page faults with physical i/o: 4381750
Memory usage for squid via mallinfo():
Total space in arena: -2095492 KB
Ordinary blocks: 2096010 KB 22166 blks
Small blocks: 0 KB 0 blks
Holding blocks: 67624 KB 12 blks
Free Small blocks: 0 KB
Free Ordinary blocks: 2801 KB
Total in use: -2030670 KB 100%
Total free: 2801 KB 0%
Total size: -2027868 KB
Memory accounted for:
Total accounted: 18253692 KB
memPoolAlloc calls: 4284254700
memPoolFree calls: 4256906586
File descriptor usage for squid:
Maximum number of file descriptors: 32000
Largest file desc currently in use: 8829
Number of file desc currently in use: 5500
Files queued for open: 0
Available number of file descriptors: 26500
Reserved number of file descriptors: 100
Store Disk files open: 0
IO loop method: epoll
Internal Data Structures:
932205 StoreEntries
932205 StoreEntries with MemObjects
931937 Hot Object Cache Items
0 on-disk objects
thanks
-Ryan
Ryan Thoryk wrote:
> The latency is most likely coming from your disk caches, and I'd assume
> their sheer size is contributing to it. Also remember that RAID-0 (or
> similar) won't improve performance here, since it's access times you
> need, not throughput. We have a much smaller load here
> (one machine peaks at around 1100 users), and switching from 7200rpm
> SATA drives to 15k SCSI drives solved a lot of latency issues.
>
> One thing you can do is use the max_open_disk_fds directive; we found
> that our SATA machines had major performance issues when more than 50
> file descriptors were open. That directive tells Squid to bypass the
> disk cache whenever the number of open fd's exceeds that value (which
> would definitely help during peak times). You can find the current
> number of open fd's in the "Store Disk files open" value on your
> cachemgr general runtime page.
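> For example, in squid.conf (a sketch; the threshold is site-specific,
> and 50 is simply what worked for our SATA machines):
>
>     # bypass the disk store once this many store fd's are open at once
>     max_open_disk_fds 50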
>
> Also I'd recommend greatly decreasing the size of your disk caches and
> increasing the cache_mem value (since you have 32 GB of RAM, I'd
> probably try to get the Squid process up to around 30 GB).
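> Roughly something like this (a sketch; the sizes are untested guesses
> for your workload, not measured values):
>
>     # give most of the 32 GB box to the memory cache
>     cache_mem 28672 MB
>     # shrink each disk cache well below the current 125000 MB
>     cache_dir aufs /squid0 30000 128 256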
>
> Ryan Thoryk
>
> Ryan Goddard wrote:
>> Thanks for the response, Adrian.
>> Is a recompile required to change to internal DNS?
>> I've disabled ECN, pmtu_disc and mtu_probing.
>> cache_dir is as follows (as recommended by Henrik):
>>> cache_dir aufs /squid0 125000 128 256
>>> cache_dir aufs /squid1 125000 128 256
>>> cache_dir aufs /squid2 125000 128 256
>>> cache_dir aufs /squid3 125000 128 256
>>> cache_dir aufs /squid4 125000 128 256
>>> cache_dir aufs /squid5 125000 128 256
>>> cache_dir aufs /squid6 125000 128 256
>>> cache_dir aufs /squid7 125000 128 256
>> No peak data available, here's some pre-peak data:
>> Cache Manager menu
>> 5-MINUTE AVERAGE
>> sample_start_time = 1222199580.85434 (Tue, 23 Sep 2008 19:53:00 GMT)
>> sample_end_time = 1222199905.507274 (Tue, 23 Sep 2008 19:58:25 GMT)
>> client_http.requests = 268.239526/sec
>> client_http.hits = 111.741117/sec
>> client_http.errors = 0.000000/sec
>> iostat shows lots of idle time; I'm unclear on what you mean by
>> "profiling".
>> Also, I haven't tried running without any cache - can you explain
>> how that is done?
>>
>> I appreciate the assistance.
>> -Ryan
>
>