Gideon Glass wrote:
>
>
>
> >
> > The pII box would probably do better than the single CacheRaQ. It has
> > more disk bandwidth so disk writes would go faster, leaving the disks
> > less busy and hence more ready to handle disk reads. The added memory
> > would also help -- more object data would be cached in main memory.
> > However, for disk reads, including disk reads necessary for open
> > calls, squid 1.1.x will block, so more spindles isn't necessarily
> > going to buy you much. My understanding is that with Squid 2 and
> > async I/O, this is no longer a problem.
>
> async I/O as in an async filesystem? Is that a reality on linux yet?
>
> I believe the squid async I/O stuff works fine on linux. I haven't
> tried it myself since I've been busy with other things, but I am
> pretty sure that the threaded version works fine. More info is at
>
> http://squid.nlanr.net/Squid/FAQ/FAQ-19.html#ss19.4
--enable-async-io under linux 2.0.35 works just fine. If you have a
late-model 'canon' distribution (we use Debian hamm frozen), everything
you need for threads is installed as a part of the base system, and it
just goes. This week (or next) I've got a whole series of performance
metrics to do on squid 2.0, including on the impact generally of
async-io and threads.
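For reference, the build boils down to something like the following (a sketch only; paths and options beyond --enable-async-io are the stock defaults, check the FAQ link above for your platform):

```shell
# Build squid with async-io on Linux; needs pthreads, which recent
# glibc-based distributions ship in the base system.
./configure --enable-async-io
make
make install
```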
> Ultimately, the plan is to grow a "farm" of Cobalts. I like the idea of
> multiple processors/machines for speed and redundancy. Could you comment
> for example, would 2 CacheRaQ's of the above config do better than my
> squid box? I know my RAID0 Ultra/Wide disk array is superior to an
>
> Assuming you're running squid 1.1.x, I think 2 cacheraqs would do
> better than the single PII due to synchronous opens/reads. I don't
> know what the answer is if you run squid 2. Those 4 disks may still
> be better than 2 disks + 2 cpus, but with 3 cacheraqs it seems less
> likely and with 4 I really doubt it.
Multi-processing is still a bit of a win for squid 1.1, as the
redirector and dnsservers can get farmed off to the alternate CPU. Not
as big a win as squid2.0 gets, but it can be significant nonetheless.
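To actually farm the helpers onto the second CPU you just raise the helper counts in squid.conf; the kernel schedules those child processes independently of the main squid process. A rough sketch (the numbers and the redirector path are illustrative, not a recommendation):

```
# squid.conf fragment: more helper children for the OS to spread
# across CPUs
dns_children 8
redirect_program /usr/local/bin/my_redirector   # hypothetical path
redirect_children 5
```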
> We see about 43% on our box (above squid config). What I am concerned
> with mostly, is disk thrash and latency. Do you feel the single disk of
> the CacheRaQ is going to lead to hellacious disk thrashing that will cause
> pages to be served with high latency?
>
> With high throughput yes, you could become disk bound and start to see
> high latency. The stats on the cacheqube/cacheraq give you hit vs miss
> latency, and also if it's really bad, you can tell from direct
> experience with a browser. If this happens, the solution is to
> increase the number of cache servers.
Mmm. Yes. Passing your logs off to a different disk, and even a
different controller can take the pain away. A single IDE disk is
probably good enough up to about a 1Mbps link. Beyond that you probably
want to spread your cache across multiple cables, or think about a good
set of SCSI disks (possibly also with multiple controllers).
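As a sketch of what that looks like in squid.conf, with the mount points being assumptions for illustration (cache_dir arguments here are squid 2.0-style: path, size in MB, L1 and L2 directory counts):

```
# Logs on their own spindle, away from the cache disks
cache_access_log /logdisk/squid/access.log
cache_log        /logdisk/squid/cache.log

# One cache_dir per physical disk to spread the I/O
cache_dir /cache1 1000 16 256
cache_dir /cache2 1000 16 256
```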
D
PS: Yeah, I've been absent for a while. I've travelled many thousands of
kilometres in the last six weeks, and had email chasing me all over.
--
-----BEGIN GEEK CODE BLOCK-----
Version: 3.1
GAT d- s++: a C++++$ UL++++B+++S+++C++H++U++V+++$ P+++$ L+++ E- W+++(--)$ N++ w++$>--- t+ 5++ X+() R+ tv b++++ DI+++ e- h-@
------END GEEK CODE BLOCK------
Received on Sun Oct 04 1998 - 02:50:51 MDT
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:42:19 MST