Mark Visser wrote:
>
> I plugged two extra hard disks into our Sun SPARC 470 running Squid.
...
> No. 1 is a disk of 869 MB (of which 750 is cache), while nos. 2 and 3 are
> 550 MB. (1 IPI, 2/3 SCSI)
>
> And the question is now:
> 'How do I correctly define my Squid cache dirs in the config now?'
...
> cache_swap 1750 (approx total of the three disks)
...
> I read somewhere earlier this week that all cache dirs MUST be the same
> size.
Well, the dumb advice is easy:
cache_dir .../disk1/dir1
cache_dir .../disk1/dir2
cache_dir .../disk1/dir3
cache_dir .../disk2/dir1
cache_dir .../disk2/dir2
cache_dir .../disk3/dir1
cache_dir .../disk3/dir2
...
cache_swap 1750
You'll get exactly a 1750 MB cache; each dir will be 250 MB.
All you have to do is kill Squid (completely), remove the swap
directories currently existing on disk1, create the new swap dirs, and
rerun Squid. Of course, all cache contents will be lost.
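For concreteness, a minimal sketch of the resulting squid.conf fragment,
assuming the three disks are mounted at /cache1, /cache2 and /cache3
(those mount points are placeholders of mine, not anything from the
original setup):

   # /cache1../cache3 are hypothetical mount points for disks 1-3
   cache_dir /cache1/dir1
   cache_dir /cache1/dir2
   cache_dir /cache1/dir3
   cache_dir /cache2/dir1
   cache_dir /cache2/dir2
   cache_dir /cache3/dir1
   cache_dir /cache3/dir2
   cache_swap 1750

cache_swap is split evenly over the seven dirs, so each holds 1750/7 =
250 MB: disk1 carries 3 x 250 = 750 MB and disks 2 and 3 carry 500 MB
each, which matches the disk sizes quoted above. With Squid stopped, the
new swap directories can be created with 'squid -z' (if your version
supports that option) before restarting.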
Another solution is to install a package that lets you concatenate many
disks into one logical device (like MD v0.35 for Linux). The cache
contents will be lost either way.
Now I'd like to open a new discussion (a.k.a. proposals for Squid v2.0).
So welcome, everybody who has no weekends!
Yes, as we can all see, the problem described above has become one of the
most typical and pressing problems today. And, as far as I know, there is
no proper solution to it. In a few words: whenever a cache's geometry
(let's call it that) is changed, the cache's contents are lost. A really
bad thing. Second, there is the bitter truth: "all cache dirs MUST be the
same size". So there is no way out: neither control over a dir's size nor
a scalable solution.
I'm wondering, why? Is it such a problematic job to write a process that
reads a new cache geometry from squid.conf and, running in the background,
rebuilds the swap structure? I think it is worthwhile to put up with a
couple of hours of performance degradation rather than a couple of days
(or weeks) of waiting until the cache becomes usable again. The same goes
for the swap dirs' sizes. What about tags like:
# cache_dir_AAAA_size 2000
# cache_dir_AAAA_low 85
# cache_dir_AAAA_high 95
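Purely as an illustration of that proposal (none of these per-dir tags
exist in any Squid release today, and the mount points are again my own
placeholders), the original poster's three disks might then be described
as:

   # proposed, not existing: one dir per disk, each with its own limits
   cache_dir /cache1
   cache_dir /cache2
   cache_dir /cache3
   # cache_dir_cache1_size 750
   # cache_dir_cache2_size 500
   # cache_dir_cache3_size 500
   # cache_dir_cache1_low 85
   # cache_dir_cache1_high 95

with Squid keeping each dir within its own size/low/high limits instead
of splitting one global cache_swap evenly across all dirs.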
End of subject.
The other problem was raised by Anthony DeBoer <adb@geac.com>.
Briefly, many ISPs today have cheap local links between them. Squid
has a great feature for specifying:
#local_domain bla bla.bla other.bla ...
Excellent: objects from these domains are always fetched from the source,
thus bypassing the hierarchy of parents/siblings and saving global
bandwidth (the main purpose of a cache). But what happens when the local
link is broken? No objects are fetched ("server down" message from the
browser). One could avoid defining #local_domain and use <source_ping on>
instead, but a neighbour/parent cache may answer faster! There is no
solution today.
The answer might be very simple:
#neighbour_domain myBest.neighbour mySecond.dom ...
#neighbour_domain_timeout 750 (ms)
Or even better:
#neighbour_domain nearest.com 600(ms, timeout)
#neighbour_domain NotSoNear.org 2000
#neighbour_link_restore 30 (mins, to prevent unneeded waiting while the
link is not functioning).
Going that way, Squid can use the local link while it works and provide
backup routing via the parent(s) when the local link is down.
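As a sketch of how that could sit next to today's real directives (the
cache_host line uses what I believe is the current Squid 1.1 syntax; the
neighbour_* lines are only the proposal above, nothing that exists yet;
all host names are made up):

   # existing: an upstream parent cache reachable over the expensive link
   cache_host parent.upstream.net parent 3128 3130
   # proposed: prefer the cheap local link, with per-domain timeouts,
   # and fall back to the parent while the local link is down
   #neighbour_domain nearest.com 600
   #neighbour_domain NotSoNear.org 2000
   #neighbour_link_restore 30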
Well, that's it. Now it's Your turn, Squid developers/testers/users.
--
Sincerely, Gregory Borodiansky, admin of IsraCom.
mailto:aliceoy@isracom.co.il  Tel. +972-6-6271674, fax +972-6-6271687
http://www.isracom.co.il