On 27/10/2013 11:34 a.m., Ahmad wrote:
> hi,
> I'm trying SMP and rock.
>
> I followed the example at
> http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster
> ==========================================
>
> the issue is,
> in squid.conf I have
> ==================================
> dns_v4_first on
> # 3 workers, using worker #1 as the frontend is important
> workers 3
> cpu_affinity_map process_numbers=1,2 cores=1,3
> cpu_affinity_map process_numbers=3 cores=5
> #cpu_affinity_map process_numbers=3 cores=6
> if ${process_number} = 1
> include /etc/squid/frontend.conf
> else
> include /etc/squid/backend.conf
> endif
> ================================
>
>
> in backend.conf I have:
> cache_dir rock /rock${process_number} 10000 max-size=32768 swap-timeout=350
>
> the error is
> FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-rock4.shm): (2) No
> such file or directory
> FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-rock3.shm): (2) No
> such file or directory
> FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-rock2.shm): (2) No
> such file or directory
>
>
>
> but ...
>
> if I put in backend.conf
> cache_dir rock /rock1 10000 max-size=32768 swap-timeout=350
> cache_dir rock /rock2 10000 max-size=32768 swap-timeout=350
> cache_dir rock /rock3 10000 max-size=32768 swap-timeout=350
>
>
>
> it works!
>
>
> in conclusion,
>
>
>
> Q1 - does that mean that a rock dir must be shared between Squid
> processes?
Sort of.
* Rock is fully SMP-aware so there is no reason to use the per-process
macros to limit it.
* Rock uses a special type of process we call a disker to do the HDD I/O.
This requires SMP channels between the worker(s) using that cache and
the disker doing the I/O. If you use the ${process_number} or
${process_name} macros these channels are never set up and things WILL
break (a macro-free sketch follows this list).
* With great care the if...else...endif directives can be used to
prevent particular workers from using a rock dir, BUT only to exclude
*worker* process numbers from accessing it.
(If you restrict it so that only the worker has access, then the disker
will not be able to access it for I/O and things break.)
> if not,
>
> what could be the fix for the error
> FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-rock3.shm): (2) No
> such file or directory
>
>
> ....
>
>
> note that with an aufs dir there are no problems!
The config is intended to set up a frontend without caching and a backend
with caching. Making the rock cache accessible to the frontend is an
optimization to save some CPU on the backends. Either use the example
config as written, with the rock dir accessible to all frontend and
backend proxies, or place it in the backend only (see the sketch below).
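
For example (again only a sketch, with placeholder paths and sizes), the
"accessible to all" variant simply declares the rock dir in squid.conf
before the if...else...endif split, so the frontend, the backends and the
disker all share it:

  workers 3
  # shared rock dir, seen by every kid process
  cache_dir rock /rock1 10000 max-size=32768 swap-timeout=350
  if ${process_number} = 1
  include /etc/squid/frontend.conf
  else
  include /etc/squid/backend.conf
  endif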
AUFS is not SMP-aware, so for it the ${process_*} macros are mandatory for
now (a sketch follows). When AUFS is made SMP-aware they will break in a
similar way.
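
For contrast, the per-worker AUFS layout (sketch only; directory name and
sizes are placeholders) is the form that currently has to stay macro-based:

  # backend.conf - one distinct aufs dir per worker
  cache_dir aufs /cache${process_number} 10000 16 256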
PS: if you want to experiment, you could try giving the frontend and
backend configs two slightly different cache_dir lines, so the frontend
has a "read-only" flag but otherwise identical settings. In theory that
would make the frontend able to HIT on the rock cache, but only the
backends able to store things there. A rough sketch follows.
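
That experiment might look something like this (untested; paths and sizes
are placeholders, and the only intended difference is the read-only flag):

  # frontend.conf - may HIT on the shared rock dir, never stores to it
  cache_dir rock /rock1 10000 max-size=32768 swap-timeout=350 read-only
  # backend.conf - identical line without read-only, so backends do the stores
  cache_dir rock /rock1 10000 max-size=32768 swap-timeout=350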
Amos