In my performance optimization of squid I didn't see any benefit from
increasing the Linux kernel network buffers. Those are mostly useful for
high-latency (long-distance) connections, and I was concentrating on
high-speed LAN access.
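(For reference, by kernel network buffers I mean the usual sysctls; the
values below are only an illustration of the kind of tuning involved,
not a recommendation:

    # /etc/sysctl.conf -- example values only
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    # min / default / max TCP buffer sizes, in bytes
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216

Larger maximums mainly help TCP keep a long, fat pipe full; on a LAN the
bandwidth-delay product is small, so the defaults were already enough.)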
I did see a huge increase in performance by making sure that squid's
maximum_object_size_in_memory was small; I set it to 128 KB. The Linux
filesystem cache, which as far as I know can take advantage of all
available memory automatically, is much faster than squid's memory cache
for large and even moderately sized objects.
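In squid.conf that's a one-line change (128 KB is just what worked for
my workload; tune it to your object mix):

    # Keep only small objects in squid's own memory cache and let the
    # kernel's filesystem cache serve the large ones.
    maximum_object_size_in_memory 128 KB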
How much throughput are you able to get through the 4 Gbit/s of network
connections with a single squid?
- Dave Dykstra
On Mon, Jun 11, 2007 at 06:13:32PM -0700, Michael Puckett wrote:
> My squid application is doing large file transfers only. We have
> (relatively) few clients doing (relatively) few transfers of very large
> files. The server is configured with 16 or 32 GB of memory and serves
> three Gbit NICs downstream to the clients and one Gbit NIC upstream.
> We wish to optimize the performance of these large file transfers and
> want to run large I/O buffers for the networks and the disk. Is there
> a tunable buffer-size parameter that I can set to increase the network
> and disk buffer sizes?
>
> Regards
>
> -mikep