Did you look at the logs to find out what Squid might be trying so
urgently to tell you? ;-)
Seriously, Squid can generate very large access and store logs because
every request is logged. cache.log should not be large unless there is
a problem that Squid is complaining about. You can safely disable
store.log in the vast majority of environments without losing any useful
information. access.log can be disabled if you really don't want to
know about the traffic. cache.log should never be disabled, but should
not be very big.
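If you do decide to silence store.log (and maybe access.log), the relevant squid.conf directives look roughly like this. This is a sketch based on the Squid 2.x directive names; the paths are Red Hat defaults and may differ on your box, so check your version's default squid.conf:

```
# Disable the store log entirely (it is rarely useful outside debugging)
cache_store_log none

# Optionally disable the access log too, if you truly don't want
# per-request traffic records:
# cache_access_log none

# cache.log should stay enabled; just make sure it points somewhere sane:
cache_log /var/log/squid/cache.log
```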
logrotate is the tool on Red Hat systems that you'll want to use to
rotate your logs on a regular basis. Once a week is usually sufficient
for low-load machines; we rotate our ISP boxes every day, because it
isn't difficult to generate 2GB+ logs by the middle of the week when
seeing 100+ reqs/sec. If you installed from an RPM, the configuration
should already be in place to rotate weekly. If you didn't install from
RPM, you're welcome to grab my SRPM to see an example
/etc/logrotate.d/squid file to start from (or you could install a new
instance of Squid from RPM to rule out misconfiguration issues during
the build).
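For reference, a minimal /etc/logrotate.d/squid along the lines of what the Red Hat packages ship might look like the sketch below. The log paths, rotation count, and squid binary location are assumptions; adjust them to your layout. Squid has to be told to reopen its log files after logrotate renames them, which is what `squid -k rotate` does; you'll also want `logfile_rotate 0` in squid.conf so Squid doesn't keep its own numbered copies on top of logrotate's:

```
/var/log/squid/access.log /var/log/squid/cache.log /var/log/squid/store.log {
    weekly
    rotate 5
    compress
    notifempty
    missingok
    sharedscripts
    postrotate
        # Tell Squid to close and reopen its logs so it writes
        # to the fresh files instead of the rotated ones.
        /usr/sbin/squid -k rotate
    endscript
}
```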
http://www.swelltech.com/support/squidpackages.html
Just some thoughts...
Squid wrote:
> Here I will reply to myself...
> I ran out of disk space... my log files were each over 100MB... and
> this is on a test machine with one user...
>
> Here is some more info...Redhat 7.3 on an old 200 mhz machine...
>
> Any Ideas on how to limit the log size...or even why a text log is this
> big...
>
> Bruce
>
> Squid wrote:
>
>>Squid worked fine yesterday; now I get this... any ideas?
>>
>> storeUfsWriteDone:got failure (-6)
>>
>>Bruce
--
Joe Cooper <joe@swelltech.com>
Web caching appliances and support.
http://www.swelltech.com

Received on Thu Jun 13 2002 - 13:52:26 MDT