>>> On 26/03/2007 at 10:40, "Guillaume Smet" <guillaume.smet@gmail.com> wrote:
> On 3/26/07, Henrik Nordstrom <henrik@henriknordstrom.net> wrote:
>> One way is to set up a separate set of cache_peer for these robots,
>> using the no-cache cache_peer option to avoid having that traffic
>> cached. Then use cache_peer_access with suitable acls to route the
>> robot requests via these peers and deny them from the other normal
>> set of peers.
>
> AFAICS, it won't solve the problem, as the robots won't be able to
> access the "global" cache read-only.
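
A rough squid.conf sketch of the setup Henrik describes follows. The
peer hostnames and the robot ACL are illustrative only; proxy-only is
used here as the don't-cache peer option (stock squid.conf appears to
spell the option Henrik mentions as proxy-only, which keeps objects
fetched via that peer out of the local cache):

    # Match robot requests by User-Agent (regex is illustrative).
    acl robots browser -i (googlebot|slurp|msnbot)

    # Peer reserved for robot traffic; proxy-only stops responses
    # fetched via this peer from being stored in the local cache.
    cache_peer robot-parent.example.com parent 3128 0 proxy-only no-query

    # Normal peer for everyone else.
    cache_peer parent.example.com parent 3128 0 no-query

    # Route robots through the uncached peer only, and keep them
    # away from the normal peer.
    cache_peer_access robot-parent.example.com allow robots
    cache_peer_access robot-parent.example.com deny all
    cache_peer_access parent.example.com deny robots
    cache_peer_access parent.example.com allow all
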
As I understand it, the original aim here was to avoid cache pollution:
when a robot wanders in, it effectively resets the last-accessed time
on every object it touches, rendering LRU useless and evicting popular
objects to make space for objects only the robot cares about. In that
case, would changing the cache_replacement_policy setting not be a
better starting point?
LFUDA should be a close approximation to the result the original poster
wanted: anything getting hit only by a robot will still not be
'frequently' used, so although it will be cached initially, it will
soon be evicted again.
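
A minimal sketch of that change, assuming Squid was built with the heap
replacement policies (--enable-removal-policies=heap at configure time):

    # Must appear before any cache_dir lines to take effect there.
    # LFUDA = Least Frequently Used with Dynamic Aging: objects a
    # robot touches once are never "frequently used", so they age
    # out quickly instead of displacing popular objects.
    cache_replacement_policy heap LFUDA
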
James.