Hello Chris
Thanks for your fast reply.
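If I understand you correctly, a single request made through the proxy is
enough to put one page into the cache, for example (assuming Squid is
listening on its default address, 127.0.0.1:3128):

  http_proxy=http://127.0.0.1:3128/ wget -q -O /dev/null http://www.google.com/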
I tried to write a shell script that reads the Squid log and uses the
URLs in it to run wget with the "-r -l1 -p" flags, but the script also
picks up its own requests from the log, creating an infinite loop that I
can't resolve.
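Roughly, my attempt looks like the sketch below. The log path and the
proxy address are just my local setup, and skipping log entries that come
from 127.0.0.1 is only my idea for breaking the loop, since the script's
own wget requests reach Squid from the proxy box itself:

  #!/bin/sh
  # Prefetch URLs seen in the Squid access log through the proxy itself.
  LOG=/var/log/squid/access.log
  export http_proxy=http://127.0.0.1:3128/

  tail -F "$LOG" | while read -r line; do
      # With Squid's default (native) log format the client address is
      # field 3 and the requested URL is field 7.
      client=$(echo "$line" | awk '{print $3}')
      url=$(echo "$line" | awk '{print $7}')

      # Skip the requests this script makes itself (they arrive from the
      # proxy host), otherwise they are fed back in and loop forever.
      [ "$client" = "127.0.0.1" ] && continue

      case "$url" in
          http://*)
              # --delete-after fetches through the proxy (priming the
              # cache) and then removes the local copies.
              wget -q -r -l1 -p --delete-after "$url"
              ;;
      esac
  done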
Is there a shell script that does what I want?
Thanks again
2010/10/2 Chris Woodfield <rekoil_at_semihuman.com>:
> It's trivial to run a wget or curl on the same server that the squid proxy is on and access pages through it, directing the output to /dev/null, in order to prime the cache. But there's no explicit way to tell squid to "please pull this URL into your cache" without an actual HTTP request for that page.
>
> Also, don't forget that only objects that aren't marked with no-cache or no-store cache-control headers will be stored in squid's cache, which for sites like google's main pages will result in not very much being cached at all beyond inline graphics.
>
> -C
>
> On Oct 2, 2010, at 11:05 AM, flaviane athayde wrote:
>
>> Hello Squid User Group
>>
>> I wonder how I can configure Squid to fetch web pages ahead of time.
>> A while ago I saw a Perl script that forced web pages to be cached ahead of time.
>> I searched the forums and only found the topology where requests
>> come in from the Internet to a Squid server that redirects them to
>> specific web servers. This is not what I want.
>>
>> What I want is that when a page is requested through Squid, Squid
>> itself also requests the pages linked from the original page!
>>
>> For example, when I open the page http://google.com, Squid would also
>> request the pages http://videos.google.com, http://news.google.com etc.
>> and keep them in its cache, so when I open http://videos.google.com,
>> Squid returns the cached page.
>> I think this is perfectly possible, but I haven't found references on
>> how to do this.
>>
>> Please let me know if I was not clear, since I am using a translator.
>>
>> --
>> Flaviane
>
>
--
Flaviane