>>> "Andres Kroonmaa" wrote
> > [running out of FDs is closer for us than running out of memory]
> I see, poor you ;) But isn't it better to set up several caches with
> round-robin DNS? You are reaching your FD limits anyway... This 50%
> will not save you forever.
Yes and no. If we suddenly had to switch to NOVM, we'd be in real trouble
with the number of FDs available. Boosting the number extends how much an
individual machine can do before it hits the performance wall - and that's
a Good Thing.
(and if anyone out there has definite experiences with running any version
of Solaris with > 4096 filedescriptors, send me mail, please! :)
> Wait a minute... It doesn't stop, it just doesn't accept any further
> connections until some of the sessions are completed and FDs are released.
> Those that are connected get their stuff as fast as the network can, not as
> slowly as a paging Squid can. It's not a matter of slow/stop, it's a matter
> of fast/later...
*shrug* Running out of memory, Squid still accepts connections; running out
of file descriptors, the client machines get connections timing out and
all sorts of unpleasantness. Back when we had the select()-imposed limit of
1024 file descriptors, it was most unpleasant to hit that wall...
But then, we're running it on the theory that paging is very very bad,
and so we stuff as much memory into it as necessary...
Anthony
Received on Tue Jul 29 2003 - 13:15:41 MDT