Yes, sorry, I did mean cache_mem.
Well, I took the plunge and decided to run a Squid system with a small cache_mem
so the OS can manage objects as pages (via the FS buffer cache).
The system has been running for over a week now and I've noticed that Squid's
response is better, even though usage is on the rise!
I have no hard facts to give you because it's a production system. If I had
the spare gear I would do my own comparison, but at this stage I don't.
System Configuration is:
FreeBSD 2.2.6-RELEASE
256MB of physical memory
Pentium II 267MHz
Adaptec aic7880 Ultra SCSI onboard controller
2 x SEAGATE ST19101W 9GB disks (cache objects; very good average seek times)
1 x SEAGATE ST32550N 2GB disk (OS + swap + cache logs)
The changes I made are:
cache_mem was reduced from 160MB to 32MB.
memory_pools was turned off.
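For reference, the two changes correspond to squid.conf lines along these lines (a sketch of the relevant Squid 1.2 directives; check squid.conf.default for the exact syntax in your version):

```conf
# Shrink Squid's in-memory hot-object cache so the OS buffer
# cache does the bulk of the caching (was 160 MB).
cache_mem 32 MB

# Let freed memory go back to the OS instead of being held
# in Squid's internal pools.
memory_pools off
```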
Here are some statistics (notice that page faults are next to nothing):
General Runtime Stats
---------------------
Running Squid 1.2.beta22.
Start Time:
Tue, 07 Jul 1998 05:43:00 GMT
Current Time:
Thu, 16 Jul 1998 06:20:51 GMT
Connection information for squid:
Number of HTTP requests received: 2137259
Number of ICP messages received: 0
Number of ICP messages sent: 0
Number of queued ICP replies: 0
Request failure ratio: 0.00%
HTTP requests per minute: 164.4
ICP messages per minute: 0.0
Select loop called: 82958245 times, 9.401 ms avg
Cache information for squid:
Storage Swap size: 15516291 KB
Storage Mem size: 29424 KB
Storage LRU Expiration Age: 16.70 days
Mean Object Size: 13.90 KB
Requests given to unlinkd: 701805
Median Service Times (seconds) 5 min 60 min:
HTTP Requests (All): 0.00000 0.00000
Cache Misses: 0.94847 0.94847
Cache Hits: 0.00000 0.00000
Not-Modified Replies: 0.00000 0.00000
DNS Lookups: 0.96589 0.96589
ICP Queries: 0.00000 0.00000
Resource usage for squid:
UP Time: 779870.803 seconds
CPU Time: 16752.725 seconds
CPU Usage: 2.15%
Maximum Resident Size: 146772 KB
Page faults with physical i/o: 3
File descriptor usage for squid:
Maximum number of file descriptors: 8232
Largest file desc currently in use: 108
Number of file desc currently in use: 90
Available number of file descriptors: 8142
Reserved number of file descriptors: 100
Internal Data Structures:
1117171 StoreEntries
5654 StoreEntries with MemObjects
5650 StoreEntries with MemObject Data
5514 Hot Object Cache Items
1116282 Filemap bits set
1116241 on-disk objects
IP Cache Stats
--------------
IP Cache Statistics:
IPcache Entries: 15236
IPcache Requests: 2894328
IPcache Hits: 2532210
IPcache Pending Hits: 62161
IPcache Negative Hits: 0
IPcache Misses: 154582
Blocking calls to gethostbyname(): 0
Attempts to release locked entries: 0
pending queue length: 0
60 minute average of counters
-----------------------------
sample_start_time = 900571106.150572 (Thu, 16 Jul 1998 06:38:26 GMT)
sample_end_time = 900567467.560573 (Thu, 16 Jul 1998 05:37:47 GMT)
client_http.requests = 3.838025/sec
client_http.hits = 1.736662/sec
client_http.errors = 0.000000/sec
client_http.kbytes_in = 1.413734/sec
client_http.kbytes_out = 29.666162/sec
client_http.all_median_svc_time = 0.000000 seconds
client_http.miss_median_svc_time = 0.948473 seconds
client_http.nm_median_svc_time = 0.000000 seconds
client_http.hit_median_svc_time = 0.000000 seconds
server.all.requests = 2.263239/sec
server.all.errors = 0.000000/sec
server.all.kbytes_in = 22.179745/sec
server.all.kbytes_out = 1.075966/sec
server.http.requests = 2.254994/sec
server.http.errors = 0.000000/sec
server.http.kbytes_in = 18.143017/sec
server.http.kbytes_out = 1.068546/sec
server.ftp.requests = 0.001649/sec
server.ftp.errors = 0.000000/sec
server.ftp.kbytes_in = 4.014467/sec
server.ftp.kbytes_out = 0.000275/sec
server.other.requests = 0.006596/sec
server.other.errors = 0.000000/sec
server.other.kbytes_in = 0.022536/sec
server.other.kbytes_out = 0.007420/sec
icp.pkts_sent = 0.000000/sec
icp.pkts_recv = 0.000000/sec
icp.queries_sent = 0.000000/sec
icp.replies_sent = 0.000000/sec
icp.queries_recv = 0.000000/sec
icp.replies_recv = 0.000000/sec
icp.replies_queued = 0.000000/sec
icp.query_timeouts = 0.000000/sec
icp.kbytes_sent = 0.000000/sec
icp.kbytes_recv = 0.000000/sec
icp.q_kbytes_sent = 0.000000/sec
icp.r_kbytes_sent = 0.000000/sec
icp.q_kbytes_recv = 0.000000/sec
icp.r_kbytes_recv = 0.000000/sec
icp.query_median_svc_time = 0.000000 seconds
icp.reply_median_svc_time = 0.000000 seconds
dns.median_svc_time = 0.965893 seconds
unlink.requests = 0.899799/sec
page_faults = 0.000000/sec
select_loops = 51.727180/sec
cpu_time = 73.966939 seconds
wall_time = 3638.589999 seconds
cpu_usage = 2.032846%
Totals since cache startup
--------------------------
Totals since cache startup:
sample_time = 900571227.20570 (Thu, 16 Jul 1998 06:40:27 GMT)
client_http.requests = 2142278
client_http.hits = 908327
client_http.errors = 14
client_http.kbytes_in = 795004
client_http.kbytes_out = 16784500
server.all.requests = 1373193
server.all.errors = 0
server.all.kbytes_in = 12400341
server.all.kbytes_out = 643906
server.http.requests = 1366503
server.http.errors = 0
server.http.kbytes_in = 11307176
server.http.kbytes_out = 640799
server.ftp.requests = 1904
server.ftp.errors = 0
server.ftp.kbytes_in = 1075797
server.ftp.kbytes_out = 298
server.other.requests = 4786
server.other.errors = 0
server.other.kbytes_in = 17367
server.other.kbytes_out = 2808
icp.pkts_sent = 0
icp.pkts_recv = 0
icp.queries_sent = 0
icp.replies_sent = 0
icp.queries_recv = 0
icp.replies_recv = 0
icp.query_timeouts = 0
icp.replies_queued = 0
icp.kbytes_sent = 0
icp.kbytes_recv = 0
icp.q_kbytes_sent = 0
icp.r_kbytes_sent = 0
icp.q_kbytes_recv = 0
icp.r_kbytes_recv = 0
unlink.requests = 703055
page_faults = 3
select_loops = 83022531
cpu_time = 16779.338157
wall_time = 23.230012
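As a sanity check, the totals above work out to respectable hit ratios with a few lines of arithmetic (a rough sketch; this byte hit ratio simply compares client bytes out against server bytes in, ignoring headers and aborted transfers):

```python
# Counters copied from the "Totals since cache startup" section above.
client_requests = 2142278
client_hits = 908327
client_kb_out = 16784500   # KB served to clients
server_kb_in = 12400341    # KB fetched from origin servers

# Fraction of requests answered from the cache.
request_hit_ratio = client_hits / client_requests

# Rough fraction of traffic that never had to be fetched upstream.
byte_hit_ratio = (client_kb_out - server_kb_in) / client_kb_out

print(f"request hit ratio: {request_hit_ratio:.1%}")  # ~42.4%
print(f"byte hit ratio:    {byte_hit_ratio:.1%}")     # ~26.1%
```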
One of the benefits of using the FS buffer cache is that a larger amount
of memory is managed by the OS, which gets maximum use out of physical memory
while remaining unlikely to touch swap.
Current stats from top which illustrate this:
last pid: 27360; load averages: 0.08, 0.05, 0.06 16:44:01
28 processes: 1 running, 27 sleeping
CPU states: % user, % nice, % system, % interrupt, % idle
Mem: 149M Active, 1540K Inact, 28M Wired, 72M Cache, 8345K Buf, 784K Free
Swap: 256M Total, 1192K Used, 255M Free
Active (149) + Wired (28) + Cache (72) + Buf (8) = 257 MB of physical memory in use.
Negligible swap in use.
To summarise, I am seeing better performance and stability by letting
FreeBSD's FS buffer cache do the caching, rather than Squid's own object cache.
--
Regards
Peter Marelas

On Sat, 4 Jul 1998, Alex Rousskov wrote:

> Date: Sat, 4 Jul 1998 12:45:25 -0600 (MDT)
> From: Alex Rousskov <rousskov@nlanr.net>
> To: Peter Marelas <maral@phase-one.com.au>
> Cc: squid-users@ircache.net
> Subject: Re: Buffer Cache v's Memory Pool's
> Resent-Date: Sat, 4 Jul 1998 11:46:45 -0700 (PDT)
> Resent-From: squid-users@ircache.net
>
> On Sat, 4 Jul 1998, Peter Marelas wrote:
>
> > Has anyone done any detailed analysis which compares a file system's
> > buffer cache versus squid memory pools to cache recent objects?
> > i.e. which provides better performance
> >
> > At the moment I'm utilising memory pools and tossing up whether I should
> > try buffer cache instead.
>
> Be sure you use a recent version of 1.1 or 1.2. There was an old bug that
> prevented the hot memory buffer from working efficiently. I have no
> information on whether the fixed version performs much better, though. It
> might be the case that a hot memory buffer managed by LRU is a bad idea
> for proxies in general.
>
> Some memory buffer is needed for in-transit objects, of course.
>
> As for the FS cache, I suspect it is important for two things:
> 1) caching i-nodes for subdirectories to speed up access to files
> 2) buffering swap-out requests (we usually see swap-out requests
>    being much faster than swap-ins)
> Theoretically, you can estimate the amount of kernel memory needed to
> achieve these two goals and tune your kernel respectively.
>
> I am not aware of any detailed studies of this tuning tradeoff. Please keep
> us posted if you find something interesting.
>
> Thank you,
>
> Alex.
>
> P.S. When talking about memory pools, I assume you are talking about
> cache_mem, not mem_pools. :)

Received on Wed Jul 15 1998 - 23:56:49 MDT
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:41:08 MST