On 03/22/2013 01:43 AM, babajaga wrote:
>> Your OS assigns workers to incoming connections. Squid does not
>> control that assignment. For the purposes of designing your
>> storage, you may assume that the next request goes to a random
>> worker. Thus, each of your workers must cache large files for those
>> files to be reliably cached.
> But, I think such a config SHOULD avoid duplication:
>
> if ${process_number}=1
> cache_dir aufs /cache4/squid/${process_number} 170000 32 256 min-size=31001 max-size=200000
> cache_dir aufs /cache5/squid/${process_number} 170000 32 256 min-size=200001 max-size=400000
> cache_dir aufs /cache6/squid/${process_number} 170000 32 256 min-size=400001 max-size=800000
> cache_dir aufs /cache7/squid/${process_number} 170000 32 256 min-size=800000
> endif
Well, yes, restricting large-file caching to one worker avoids
duplication at the expense of not caching any large files on all the
other workers. Since all workers get requests for large files, either
all workers should cache them or none should. And by "cache", I mean
store them in the cache and serve them from the cache.
With the above configuration, only one worker will store large files and
serve large hits. All other workers will not store large files and will
not serve large hits.
This is why the above configuration does not work well and most likely
does not do what the admin intended it to do. It does avoid duplication
though :-).
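
If the goal is to cache large files on every worker (accepting some
duplication across workers), a minimal sketch would drop the if/endif
and let each worker create its own directory tree via
${process_number}. The paths and size bands below are simply reused
from the config quoted above and are illustrative, not a
recommendation:

  # Every worker evaluates these lines; ${process_number} keeps each
  # worker's directories separate, so all workers store and serve
  # large objects (at the cost of possible duplicates across workers).
  cache_dir aufs /cache4/squid/${process_number} 170000 32 256 min-size=31001 max-size=200000
  cache_dir aufs /cache5/squid/${process_number} 170000 32 256 min-size=200001 max-size=400000
  cache_dir aufs /cache6/squid/${process_number} 170000 32 256 min-size=400001 max-size=800000
  cache_dir aufs /cache7/squid/${process_number} 170000 32 256 min-size=800000

Keep in mind that each worker then needs its own disk space for each
band, so the sizes above are per worker rather than shared.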
Alex.