On Wednesday 30 April 2003 06.56, adam-s@pacbell.net wrote:
> Since I only have the one disk I figured it would be best to
> minimize large writes so it could handle all the other reads and
> writes it has to handle.
Actually, disks are quite happy to process large writes. A large write
does not take very much more time than a small write.
A reasonable cost estimate of disk I/O relative to the object size
would be something like:
SWAPOUT: 5 + (object size / 256KB)
SWAPIN: 2 + (object size / 256KB)
Numbers may vary a little depending on the type of drive, filesystem,
etc.
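To put some purely illustrative numbers on that: swapping out a 1 MB
object would cost about 5 + 1024/256 = 9 units, compared to roughly 5
units for a tiny object, so an object thousands of times larger is
only about twice as expensive to write.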
> But using the fast disk to cache files that no
> other user will access just seems a waste of disk reads/writes,
> cpu, etc. I want to avoid that.
It is, if you can reliably figure out which large files will never see
a second request from any user (including the same user who first
requested them).
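If you can express that class of objects as an ACL, one way to do it,
sketched here with invented ACL names and URL patterns (adjust to
whatever actually identifies your one-shot downloads), is:

  # illustration only: match large one-off downloads by URL pattern
  acl bigdownloads urlpath_regex -i \.(iso|zip|exe)$
  no_cache deny bigdownloads

  # or simply cap the size of objects Squid will store at all
  maximum_object_size 16 MB

But again, this only pays off if the "no second request" assumption
really holds for those objects.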
> My goal is to serve pages as quickly as possible so users won't
> complain. The connection was the bottleneck until we split the
> traffic between two T-1's (thanks to tcp_outgoing_address). I
> still want to review/tune squid as the Internet surfing from each
> "office" seems to be doubling each year.
Then set up a Web Polygraph test so you can stress test how your Squid
performs under higher load than what you have today.
> So you are confirming my question that if we have GDSF (as we do)
> then we probably don't need the "no_cache deny" directive?
GDSF will still cache them, but it is a bit smarter than LRU about
keeping what is interesting when your cache size is limited.
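For reference, the relevant squid.conf lines would look something like
this (paths and sizes are examples only, and the heap policies are
only available if Squid was built with --enable-removal-policies
including "heap"):

  # prefer keeping small/popular objects over big one-shot ones
  cache_replacement_policy heap GDSF
  cache_dir ufs /var/spool/squid 20000 16 256

There is also heap LFUDA if you care more about byte hit ratio than
request hit ratio.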
Regards
Henrik
--
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org

If you need commercial Squid support or cost effective Squid or
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, info@marasystems.com