> -----Original Message-----
> From: Amos Jeffries [mailto:squid3_at_treenet.co.nz]
> Sent: Wednesday, 24 October 2012 01:08
> To: squid-users_at_squid-cache.org
> Subject: Re: [squid-users] feature request: setting location of
> coordinator.ipc and kidx.ipc during runtime?
>
> On 24.10.2012 03:38, Rietzler, Markus (RZF, SG 324 /
> <RIETZLER_SOFTWARE>) wrote:
> > hi,
> >
> > we want to use squid with smp workers.
> > the workers are running fine. log rotation also works now (although
> > not as expected; see my other mail "[squid-users] question of
> > understanding: squid smp/workers and logfiles" - it only works with a
> > separate access_log per worker, not with one single shared log).
> >
> > now there is only one problem.
> >
> > when we compile squid we use
> >
> > ./configure --prefix /default/path/to/squid
> >
> > in our production environment squid lives under a different path
> > (e.g. /path/to/squid). we also run several instances of squid, e.g.
> > one for internet, one for intranet, one for extranet, each with its
> > own directory structure (etc, run, log, cache and so on).
> >
> > via squid.conf we can set every required path (log, logfile_daemon,
> > icons, error, unlinkd etc.) but not the ipc location.
> >
> > in src/ipc/Port.cc the location is hardcoded:
> >
> > const char Ipc::coordinatorAddr[] = DEFAULT_STATEDIR
> > "/coordinator.ipc";
> > const char Ipc::strandAddrPfx[] = DEFAULT_STATEDIR "/kid";
> >
> > I can patch src/ipc/Makefile to have localstatedir point to another
> > dir than /default/path/to/squid/var (that's how localstatedir gets
> > expanded in the Makefile), but this is not really what we want. we
> > want to be able to set the location via squid.conf or an environment
> > variable at runtime.
> >
> > we tried to use something like
> >
> > const char Ipc::coordinatorAddr[] = Config.coredump_dir
> > "/coordinator.ipc";
> >
> > but then we get compile errors.
> >
> > is it possible to create a patch that allows setting the location of
> > the ipc files at runtime?
>
> Yes and no.
>
> These are network sockets that need to be accessible to all of the
> multiple processes which form one Squid. There is no reason to touch
> or change them.
> If we allowed their location to be reconfigured, anyone could
> accidentally place that setting inside if...else conditions and would
> then be unable to operate their Squid reliably once the internal
> communication channels to the coordinator became disconnected.
> If we allowed you to register multiple "/some/shared/kid1.ipc" paths
> and then start several differently configured Squid instances, the
> second instance could crash with "unable to open socket" errors, you
> could zombie the existing process, or you could cause crossover between
> the two coordinators or the two workers.
> We really do not want to have to assist with debugging that type of
> problem needlessly....
>
sounds reasonable
>
> The SMP support in Squid is designed to remove any reason why you
> should need to operate multiple different Squid installations on one
> box. It is almost, but not quite, complete: if you find a particular
> feature (like that logs bug) you need to segment but are unable to,
> please point it out. The UDS channel sockets notwithstanding, as they
> are the mechanism by which segmentation is coordinated and enforced.
>
>
> To operate Squid with multiple segregated run-time environments for
> different clients I suggest you look at re-designing your squid.conf
> along these lines:
>
> squid.conf:
> workers 3
> /etc/squid/squid.conf.${process_id}
>
>
> With squid.conf.1, squid.conf.2 and squid.conf.3 each containing a
> complete copy of what would have been squid.conf for the environment
> that process presents to the client base it serves.
> When you need to guarantee a per-worker resource like log files, use
> ${process_id} as part of the path or filename as in the above example.
> You can also use ${process_name} the same way.
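
As a sketch, that layout could look like the following (the paths and the directives inside the per-worker file are assumptions for illustration, not taken from this thread):

```
# squid.conf - shared by all workers
workers 3
include /etc/squid/squid.conf.${process_id}

# /etc/squid/squid.conf.1 - the "internet" environment served by worker 1
#   http_port 3128
#   access_log /path/to/internet/log/access.log
#   cache_dir ufs /path/to/internet/cache 1024 16 256
```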
>
> FUN: If you need two workers to both present one shared environment,
> you can use symlinks to point squid.conf.4 at squid.conf.5, for
> example, and the coordinator will ensure they share resources as well
> as config files.
> * clashes with using the ${process_id} macro in paths
>
> MORE FUN: to share resources between environments, just configure the
> same lines for the cache location etc in multiple per-worker squid.conf.
> Again the coordinator will link the processes together with the shared
> resource.
>
> PS: we currently provide only one shared memory cache, so segmenting
> it is not possible; the old-style local caches can be used instead.
> TMF have a project underway cleaning up the cache systems to make
> things more flexible; get in touch if you need any changes there.
>
> Amos
ok, this sounds like a good idea. at the moment we have 3 squids running for internet, intranet and extranet, so each one has its own squid.conf and its own acl rules.

we could use the trick with squid.conf.${process_id} etc., but there is one small thing that does not fit that well: at the moment we have a load-balancing setup running these 3 squids on 4 different machines. with individual, separate squids we can stop e.g. the internet squid on machine1 without touching all the other squids. with one single squid coordinator/master process we could only start all of the squids or none; we can't stop one instance on one machine - to test some new config or squid version, say. with 3 separate squids we could also use different squid versions if that should ever be needed in some (maybe very strange) case...
Received on Wed Oct 31 2012 - 10:58:47 MDT
This archive was generated by hypermail 2.2.0 : Wed Oct 31 2012 - 12:00:05 MDT