Joey Coco wrote:
> After our peak time today, I changed the file descriptor max from 1024 ->
> 4096 to see what happens tomorrow..
1024 file descriptors feels a bit on the low side for such request
rates, but it might work if all your clients and the requested content
are on low-latency links.
If file descriptors are the problem, Squid will complain loudly in
cache.log.
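If you want to verify what limit your Squid actually inherited,
something like this should do it: raise the limit in the shell (or RC
script) that starts Squid, restart it, and then ask the cache manager.
The path-less commands and the mgr:info wording below are just
examples from my end and may differ on your build:

    ulimit -n 4096            # in the shell/RC script that starts Squid
    squidclient mgr:info | grep -i 'file desc'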
> Only errors i've noticed when I run debug to level 9, is some weird
> forwarding loop detected messages for some broken urls. Not
> enough to break the box I think? Clients with the "code red" virus cause
> these same messages, but I used ACL to block those requests.
Forwarding loops caused by peering should not normally break the box.
But if you are running a transparent proxy or an accelerator and that
setup causes a loop, then you can get into trouble.
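For anyone else hitting the same thing, an ACL along these lines does
the blocking. The regex is just an example pattern for the Code Red
probes; match it against what actually shows up in your access.log:

    # Deny Code Red probe requests; keep this before any
    # http_access allow rules.
    acl codered urlpath_regex -i ^/default\.ida
    http_access deny codered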
> I'm not really sure what the right combo of servers + hardware is gonna be
> to route 100mBit/sec of traffic through squid.
A cluster of about six boxes with at least three drives each and a
sufficient amount of memory to sustain your cache size and networking
load should be able to cope, I think, plus some boxes for margin.
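To sketch what each box might look like (the sizes, paths, and
hostnames here are made-up placeholders, not a recommendation):

    # One cache_dir per physical drive so the disks share the I/O load.
    cache_dir ufs /cache1 20000 16 256
    cache_dir ufs /cache2 20000 16 256
    cache_dir ufs /cache3 20000 16 256

    # Sibling peering via ICP so the boxes can fetch hits from
    # each other; proxy-only avoids storing a second copy.
    cache_peer box2.example.com sibling 3128 3130 proxy-only
    cache_peer box3.example.com sibling 3128 3130 proxy-only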
Regards
Henrik