Hi all,
Apologies if this is in a FAQ somewhere - I had a quick look and didn't
see any answers...
I'm running Squid 1.1.22 on Red Hat 5.2 (kernel 2.0.36) and using
Netscape 3.01 on a PC (Win NT). I see a lot of broken images when
fetching pages, along with 'ERR_READ_ERROR' entries in the Squid logs,
but when I reload the page the images come through fine.
Is this just some strange Netscape error? (I've seen lots of references
to various browsers not handling proxies well.)
Is it likely that I just need to tweak some timeout settings in my
squid.conf file?
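(For what it's worth, a hedged sketch of the sort of timeout directives that appear in squid.conf - the names and values below are from memory of the 1.x era, so check the squid.conf.default shipped with your build before trusting them:)

```
# Hypothetical squid.conf fragment - verify directive names/defaults
# against your own squid.conf.default before changing anything.
connect_timeout 2 minutes
read_timeout 15 minutes
```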
Or is it more likely that it's not a throughput-related problem at all
but down to the speed of the proxy machine itself? (P166, 64MB RAM,
cache on a separate IDE disk)
On a related note, can I tell how long Squid is taking to fetch pages,
how long it's taking to forward those pages on to a client, and how long
is spent processing on the proxy itself? (i.e. some sort of breakdown
of what's going on.)
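(One place to start, I think: Squid's native access.log records a total elapsed time per request, in milliseconds, as the second field - the per-phase breakdown isn't logged, but this at least separates slow fetches from fast ones. The sample log line and paths below are fabricated for illustration:)

```shell
# Sketch: pull elapsed time (field 2, ms) and URL (field 7) out of a
# Squid native-format access.log line. The sample line is made up;
# point awk at your real access.log instead.
cat > /tmp/access.log.sample <<'EOF'
927550000.123   1523 10.0.0.5 TCP_MISS/200 4512 GET http://example.com/img.gif - DIRECT/192.0.2.1 image/gif
EOF
awk '{ print $2 " ms", $7 }' /tmp/access.log.sample
```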
Unfortunately I'm running on a token ring network so can't load-balance
across multiple cards or anything, but I should be able to throw more
memory/CPU at the proxy machine some time and transfer everything to
SCSI disks (hmm, can I just copy the existing cache across to another
disk, or does Squid tie itself in at the inode level rather than some
sort of user-filesystem level?)
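(On the copy question, my understanding - hedged, so test before relying on it - is that Squid addresses cached objects by file number within cache_dir, e.g. 00/00/00000000, not by raw inode, so a structure-preserving copy should work: stop Squid, copy the cache tree to the new disk, point cache_dir in squid.conf at the new location, restart. The paths below are throwaway stand-ins so the commands can be tried safely anywhere:)

```shell
# Hedged sketch with stand-in paths; substitute your real cache_dir
# and new mount point, and stop Squid before copying.
OLD=/tmp/squid-cache-old; NEW=/tmp/squid-cache-new
mkdir -p "$OLD/00/00"
echo "cached object" > "$OLD/00/00/00000000"  # fake cache object
mkdir -p "$NEW"
cp -a "$OLD/." "$NEW/"                        # structure-preserving copy
ls "$NEW/00/00"
```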
ta for any help! (And yes, I apologise as I realise all these questions
are probably answered somewhere else! :)
Jules
Received on Mon May 24 1999 - 08:50:20 MDT
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:46:24 MST