On 24.10.2012 03:38, Rietzler, Markus (RZF, SG 324 / <RIETZLER_SOFTWARE>) wrote:
> hi,
>
> we want to use squid with smp workers.
> workers are running fine. now logrotate also works (although not as
> expected, see my other mail "[squid-users] question of understanding:
> squid smp/workers and logfiles" - it only works with a separate
> access_log per worker, not with one single log).
>
> now there is only one problem.
>
> when we compile squid we use
>
> ./configure --prefix /default/path/to/squid
>
> in our production environment squid lives under a different path (e.g.
> /path/to/squid). we also run several instances of squid, e.g. one for
> internet, one for intranet, one for extranet etc., each one with its
> own directory structure (etc, run, log, cache and so on).
>
> via squid.conf we can set every required path (log, log_file_daemon,
> icons, error, unlinkd etc.) but not the ipc location.
>
> in src/ipc/Port.cc the location is hardcoded:
>
> const char Ipc::coordinatorAddr[] = DEFAULT_STATEDIR "/coordinator.ipc";
> const char Ipc::strandAddrPfx[] = DEFAULT_STATEDIR "/kid";
>
> I can patch src/ipc/Makefile to have localstatedir point to another
> dir than /default/path/to/squid/var (that's how localstatedir gets
> expanded in the Makefile), but this is not really what we want. we
> want to be able to set the location via squid.conf or an environment
> var at runtime.
>
> we tried to use something like
>
> const char Ipc::coordinatorAddr[] = Config.coredump_dir "/coordinator.ipc";
>
> but then we get compile errors.
>
> is it possible to create a patch that allows setting the location of
> the ipc files at runtime?
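The compile error itself is expected: adjacent string literals are
joined by the compiler at compile time, while Config.coredump_dir only
has a value at run time, so the two cannot be concatenated that way. A
minimal sketch of what a runtime-built path would have to look like
instead (the ConfigureAddrs() hook below is hypothetical, not existing
Squid code):

  // sketch only: a runtime replacement for the compile-time constants
  // declared in src/ipc/Port.cc
  #include <string>

  namespace Ipc
  {
      std::string coordinatorAddr;  // was: const char[] built from DEFAULT_STATEDIR
      std::string strandAddrPfx;

      // hypothetical hook, called once squid.conf has been parsed and a
      // run-time base directory is known
      void ConfigureAddrs(const std::string &stateDir)
      {
          coordinatorAddr = stateDir + "/coordinator.ipc"; // runtime concatenation
          strandAddrPfx = stateDir + "/kid";
      }
  }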
Yes and no.
These are network sockets which need to be accessible to all of the
processes that form a running Squid. There is no reason to touch or
change them.
If we allowed reconfiguration of where one is placed, anyone could
accidentally put that setting inside if...else conditions and would
then be unable to operate their Squid reliably when the internal
communication channels to the coordinator become disconnected.
If we allowed you to register "/some/shared/kid1.ipc" multiple times
and then start several differently configured Squid instances, you
could face the second instance crashing with "unable to open socket"
errors, you could zombie the existing process, or you could cause
crossover between the two coordinators or the two workers.
We really do not want to have to assist with debugging that type of
problem needlessly....
The SMP support in Squid is designed to remove any reason why you
should need to operate multiple different Squid installations on one
box. It is almost, but not quite, complete; if you find a particular
feature (like that logs bug) which you need to segment but are unable
to, please point it out. The UDS channel sockets are the one
exception, since they are the mechanism by which segmentation is
coordinated and enforced.
To operate Squid with multiple segregated run-time environments for
different clients I suggest you look at re-designing your squid.conf
along these lines:
squid.conf:
  workers 3
  include /etc/squid/squid.conf.${process_id}
With squid.conf.1, squid.conf.2 and squid.conf.3 each containing a
complete copy of what would have been squid.conf for the environment
you want to present to the client base that process is serving.
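For illustration only (the port and paths here are hypothetical, not
taken from the original mail), one of those per-worker files might
look like:

  # squid.conf.1 - the "internet" environment served by worker 1
  http_port 3128
  cache_dir rock /path/to/internet/var/cache 4096
  access_log stdio:/path/to/internet/var/logs/access.log squid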
When you need to guarantee a per-worker resource like log files, use
${process_id} as part of the path or filename, as in the example
above. You can also use ${process_name} the same way.
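For example (log paths hypothetical), per-worker logs in a single
shared config could be written as:

  access_log stdio:/path/to/squid/var/logs/access-${process_id}.log squid
  cache_log /path/to/squid/var/logs/cache-${process_name}.log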
FUN: If you need two workers to both present one shared environment
you can use symlinks, for example pointing squid.conf.4 at
squid.conf.5, and the coordinator will ensure they share resources*
as well as config files.
* this clashes with using the ${process_id} macro in paths
MORE FUN: to share resources between environments, just configure
the same lines for the cache location etc. in multiple per-worker
squid.conf files. Again the coordinator will link the processes
together through the shared resource.
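As a sketch (the directory path is hypothetical), sharing one cache
between two environments would mean repeating an identical line in
both per-worker files:

  # the same line appears in squid.conf.1 and squid.conf.2
  cache_dir rock /path/to/squid/var/cache/shared 1024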
PS: we currently only provide one shared memory cache, so segmenting
that is not possible; the old-style local caches can be used instead.
TMF have a project underway cleaning up the cache systems to make
things more flexible, get in touch if you need any changes there.
Amos