> I am thinking of setting up squid for caching, having worked with it in
> the past, but I'm unclear about some sizing issues for a large
> installation. We are a broadband (cable modem) based ISP with some
> dialups too. My immediate need is to service around 50-60 HTTP requests
> per second.
Use a separate drive for this (figure about 100 req/s per drive).
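For the drive count, a trivial sketch of that rule of thumb (the 100 req/s
per spindle figure is just the rough number above, not a benchmark):

  import math

  def drives_needed(req_per_sec, req_per_drive=100):
      # spindles needed at ~100 req/s per drive
      return max(1, math.ceil(req_per_sec / req_per_drive))

  print(drives_needed(60))    # today's 50-60 req/s: 1 cache drive
  print(drives_needed(200))   # ~200 req/s on the 14Mbps link: 2 drives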
> This got me to the big figure of 4,320,000 requests per day, which I did
> not find in the JANET article by Martin Hamilton in the FAQ. Also, with
> an average 8k object size, I gather I'd need a 32GB hard disk and 512MB
> of RAM minimum.
Why 512MB RAM?
256MB is enough for a 32GB cache.
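A rough sanity check on that: squid keeps one in-core index entry per
on-disk object, usually estimated at somewhere around 75-100 bytes each.
A sketch, with the per-entry size as an assumed ballpark rather than a
measured value:

  def index_ram_mb(cache_gb, avg_obj_kb, bytes_per_entry=80):
      # RAM for the in-core index: one entry per cached object.
      # bytes_per_entry is an assumption; it varies with squid
      # version and architecture.
      objects = cache_gb * 1024 * 1024 / avg_obj_kb
      return objects * bytes_per_entry / (1024 * 1024)

  print(round(index_ram_mb(32, 8)))    # ~320MB at the pessimistic 8kB size
  print(round(index_ram_mb(32, 16)))   # ~160MB at the realistic 16kB size

At the 15-16kB real-world object size mentioned below, the index fits
comfortably, which is why 256MB is enough.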
> We currently run on a 14Mbps Internet link; this will be up to 30Mbps by
> the end of the year, so the requests per second will be increasing by
> quite an amount!
> I'm estimating a 64-80GB hard disk and 1GB of RAM minimum for this.
Assuming an 8kB average object size, it's 224 req/s for a 14Mbps link, and
about 400 req/s including hits. Since the real average object size is
15-16kB (16kB is the average on my 11GB cache), you can count on about
200 req/s in total.
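The arithmetic behind those figures, as a small sketch (binary units
assumed, and the roughly 45% hit ratio is inferred from the 224-vs-400
numbers above, not measured):

  def link_req_per_sec(link_mbps, avg_obj_kb):
      # misses/s a link can sustain: link bits/s divided by bits per object
      return (link_mbps * 1024 * 1024) / (avg_obj_kb * 1024 * 8)

  print(link_req_per_sec(14, 8))    # 224.0 misses/s at 8kB objects
  print(link_req_per_sec(14, 16))   # 112.0 misses/s at the realistic 16kB
  # at a ~45% hit ratio, 112 misses/s is roughly 200 total req/s,
  # which is where the 200 req/s figure comes from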
> My questions:
>
> At what point do I need to consider clustering? (Note the above is at
> one single physical location.)
No need to cluster for this; two fast 7200rpm IBM IDE drives (30GB each)
and 512MB of RAM will do. Make a mirrored 2GB partition for the OS, logs,
and a small swap, and give the remaining 28GB to squid.
Don't forget to mount -o noatime. Use 1 inode per 8kB and 2kB blocks on
linux, and 8kB blocks with 1kB fragments on *BSD; see the sketch below
for the cache_dir line itself.
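For the squid.conf side of that 28GB cache partition, the first-level
directory count for a ufs cache_dir is usually worked out like this (the
path and sizes are illustrative, and I'm assuming the default 256
second-level directories):

  import math

  def cache_dir_l1(cache_mb, avg_obj_kb=8, l2=256, files_per_dir=256):
      # L1 count so second-level dirs hold ~files_per_dir objects each
      objects = cache_mb * 1024 / avg_obj_kb
      return math.ceil(objects / (l2 * files_per_dir))

  l1 = cache_dir_l1(28 * 1024)   # 28GB cache -> 56 (often rounded to 64)
  print(f"cache_dir ufs /cache 28672 {l1} 256")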
If you can choose the system, choose *BSD (I use NetBSD). Linux isn't
usable under high load.