On Tue, 27 Oct 1998, tom minchin wrote:
> This is why you have several caches (and it's about the only time the
> CPU is ever utilised to any degree). To work around the incredibly slow
> performance* of 1.NOVM caches on our old systems (P200MMXs/196M of RAM)
> I used the Linux local port redirector. When the load average exceeded
> a certain amount (say 1.1) for more than 5 minutes, it would turn the
> http_port off (established connections would remain in place).
>
> Clients would get a connection refused, and as they are using Netscape's
> auto config (or the round-robin A record) they'd fail over to the next
> cache in line.
>
> When the cache was unloaded again (e.g. after recovering from a crash)
> and the load had dropped, a cron job would bring the http_port back up
> for clients to use again.
>
> However, since moving to our P2/512M systems, reloading is about 10 times
> faster. The load checker rarely (months go by) has to take down the
> http_port.
>
> * Performance under normal load was OK; the slowness only showed up when
> expiring objects or rebuilding after a crash.
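
For reference, here is a rough sketch of the sort of cron-driven load check
described above. It is only an illustration: iptables stands in for the
"local port redirector", and the threshold and port numbers are guesses
rather than the settings actually used.

#!/usr/bin/env python3
# Illustrative only: reject new connections to the http_port when the cache
# is overloaded, and let clients back in once the load drops again.
import os
import subprocess

LOAD_LIMIT = 1.1    # load average treated as "overloaded" (assumed value)
HTTP_PORT = 3128    # Squid http_port (assumed value)

# Reject only new connections (SYN) with a TCP reset, so established
# connections stay up and clients see an immediate "connection refused".
RULE = ["INPUT", "-p", "tcp", "--dport", str(HTTP_PORT), "--syn",
        "-j", "REJECT", "--reject-with", "tcp-reset"]

def port_blocked():
    # iptables -C exits 0 when the reject rule is already installed.
    return subprocess.call(["iptables", "-C"] + RULE,
                           stderr=subprocess.DEVNULL) == 0

def main():
    one_min, five_min, _ = os.getloadavg()
    if not port_blocked() and five_min > LOAD_LIMIT:
        # Overloaded for roughly the last five minutes: stop taking new clients.
        subprocess.check_call(["iptables", "-I"] + RULE)
    elif port_blocked() and one_min < LOAD_LIMIT:
        # Load has recovered (e.g. the rebuild finished): let clients back in.
        subprocess.check_call(["iptables", "-D"] + RULE)

if __name__ == "__main__":
    main()

Run from cron every few minutes, it approximates the "over 1.1 for more than
5 minutes" rule by watching the 5-minute load average.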
Mmm. We have 4 caches. However, you would be surprised how many broken
browsers there are out there. The common fault, when using round-robin
DNS as we are, is that they resolve the name, pick the top A record and
then use that for the rest of that browser's session.
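
The failover only works when the client actually walks the whole A record
set, which is exactly what the broken browsers don't do. A minimal sketch of
the behaviour a well-behaved client should have (the proxy name and port are
made up):

import socket

def connect_to_proxy(name="proxy.example.net", port=3128, timeout=5):
    # getaddrinfo returns every A record for the round-robin name; a broken
    # browser effectively stops at the first entry and sticks with it for
    # the whole session.
    last_err = OSError("no addresses for %s" % name)
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            name, port, socket.AF_INET, socket.SOCK_STREAM):
        try:
            return socket.create_connection(addr, timeout=timeout)
        except OSError as err:
            last_err = err    # e.g. connection refused from a downed http_port
    raise last_err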
You would also have to turn off peering, since there isn't yet a way to
time out a request to a sibling which you know has the object you want.
Say a peer cache requests an object from you but you take 6 seconds to
deliver it. If that peer could have fetched it directly in the same or less
time, you are degrading the performance of all the caches in your peer
group.
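
The knob I'd want is roughly this: give the sibling a hard deadline, then go
direct when it blows it. Since Squid doesn't offer that yet, the sketch below
is purely hypothetical (host names and timeouts are invented):

import urllib.request

SIBLING_PROXY = "http://sibling.example.net:3128"   # hypothetical peer
SIBLING_DEADLINE = 2.0                              # seconds before giving up

def fetch(url):
    # First ask the sibling that claims to have the object...
    via_sibling = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": SIBLING_PROXY}))
    try:
        return via_sibling.open(url, timeout=SIBLING_DEADLINE).read()
    except OSError:
        # ...but if it cannot deliver within the deadline, fetch it direct
        # rather than dragging down the whole peer group.
        return urllib.request.urlopen(url, timeout=10).read()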
John