hello list,
I am running a website and have set up 3 Squid servers as reverse proxies 
to handle the images on the website.
Before I try to tweak even more, I am wondering what is considered 
good performance in requests/min.
Some basic stats to get an idea:
- only image files are served
- average size 40 KB
- possible number of files somewhere between 10 and 15 million (and 
growing)
- the variety of files that is accessed? ...
I got these stats from a Squid server that has been running for 2-3 days now.
Internal Data Structures:
        2024476 StoreEntries
        146737 StoreEntries with MemObjects
        146721 Hot Object Cache Items
        2000067 on-disk objects
Is it safe to assume that the number of images actually accessed is 
about 2 million?
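For what it's worth, a back-of-the-envelope calculation (my own sketch, assuming those ~2 million objects really are distinct and the 40 KB average above holds) suggests the working set is far bigger than the RAM on any of these boxes:

```python
# Rough working-set estimate -- assumes ~2 million distinct objects
# at the ~40 KB average size mentioned above.
objects = 2_000_000
avg_size_kb = 40
working_set_gb = objects * avg_size_kb / (1024 * 1024)
print(f"{working_set_gb:.1f} GB")  # roughly 76 GB -- well beyond 4 or 8 GB of RAM
```

So most hits on any single server are bound to come from disk rather than from the hot object cache.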
On our dual Xeon servers with 4 GB RAM and SATA disks I can get about 250 
hits/second; on our dual Xeon server with 8 GB RAM and SCSI disks I can 
get about 550 hits/second.
Are these decent numbers?
I am running aufs on the 8 GB server and diskd on the other servers.
Does that contribute to the big difference, or is it mainly the memory 
and disk speed?
I think that the variety of files accessed by the clients is getting too 
big (especially during peak hours) for the Squid servers to cache 
efficiently. I am hoping that it is possible to distribute that variety 
over the Squid servers, so that during normal operation each Squid 
server would only have to serve a third of the 2 million files.
Do you have some good ideas about how to achieve this?
Is there a way to have some kind of distribution based on the URL?
I am hoping this is possible without rewriting the web application, 
and in such a way that a failure of one server would go unnoticed by the public.
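To make the idea concrete, here is a rough sketch (in Python, with hypothetical server names) of the kind of URL-based distribution I have in mind: each URL always hashes to the same server, and if one server dies, only that server's share of URLs moves to the survivors. If I understand correctly, Squid's CARP peer selection is built on this same principle.

```python
import hashlib

# Hypothetical stand-ins for the three Squid boxes.
SERVERS = ["squid1", "squid2", "squid3"]

def pick_server(url, servers=SERVERS):
    """Rendezvous (highest-random-weight) hashing: score every server
    against the URL and pick the highest score. A given URL always maps
    to the same server, and if one server is removed from the list,
    only the URLs that mapped to it get redistributed."""
    def score(server):
        # md5 is stable across processes, unlike Python's built-in hash().
        return hashlib.md5((server + url).encode()).hexdigest()
    return max(servers, key=score)

url = "http://example.com/images/12345.jpg"
primary = pick_server(url)
# Simulate a failure of the primary: this URL falls over to another
# server, while URLs mapped to the surviving servers stay put.
backup = pick_server(url, [s for s in SERVERS if s != primary])
```

The mapping could live in the load balancer or in a tiny redirector in front of the Squids, so the web application itself would not need rewriting.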
Hoping to hear some good ideas.
Thanks in advance,
Jos
Received on Sat Jul 02 2005 - 16:57:22 MDT