On Mon, 9 Jun 1997, Robert Barta wrote:
> --> they're read from disk. This would only apply to compressible files - graphics
> --> probably excepted - but it would be an interesting statistic to know how
> --> much of the data in a squid cache is compressible. Anyone have an inactive
> --> cache image they can compress to see what savings they get?
>
> I wonder whether this gains anything. According to our stats, HTML docs
> (the primary target of compression) are only 6% of the total turnover in
> bytes.
General compression by itself could well be too slow to be worth the
effort. Tokenization compression on HTML could, however, yield good
results while maintaining high throughput. E.g.: a token table of all the
HTML tags (up to HTML 3.2 currently), including deprecated ones, would
turn tags that can be anything up to, say, 20 bytes long into 2 or 3
bytes of information. Some type of 'word/text' compression could also
yield good results while keeping throughput up. A rough sketch of the
tag-table idea is below.
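To make that concrete, here is a rough C sketch (the short tag list, the
0x01 escape byte, and the tokenize() function are all made up for
illustration; a real table would cover every HTML 3.2 tag, and a real
encoder would also have to escape any literal 0x01 bytes in the input):

    #include <stdio.h>
    #include <string.h>
    #include <strings.h>
    #include <ctype.h>

    /* Hypothetical token table: just a few HTML 3.2 tag names.
     * A full table would hold all of them, deprecated ones too. */
    static const char *tags[] = {
        "html", "head", "title", "body", "table", "blockquote"
    };
    #define NTAGS (sizeof(tags) / sizeof(tags[0]))

    #define ESC 0x01  /* marks a 2-byte token in the output */

    /* Replace each "<tag" with ESC plus a 1-byte token id;
     * copy everything else through untouched. */
    void tokenize(const char *in, FILE *out)
    {
        while (*in) {
            if (*in == '<') {
                size_t i;
                for (i = 0; i < NTAGS; i++) {
                    size_t len = strlen(tags[i]);
                    if (strncasecmp(in + 1, tags[i], len) == 0 &&
                        !isalnum((unsigned char)in[1 + len])) {
                        putc(ESC, out);
                        putc((int)i, out);  /* token id */
                        in += 1 + len;      /* skip "<tag" */
                        break;
                    }
                }
                if (i < NTAGS)
                    continue;
            }
            putc(*in++, out);
        }
    }

    int main(void)
    {
        /* "<BLOCKQUOTE" (11 bytes) comes out as 2 bytes */
        tokenize("<BLOCKQUOTE>cache me</BLOCKQUOTE>", stdout);
        return 0;
    }

Decoding is just the table lookup in reverse, so both directions should
easily keep up with disk speeds.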
Of course, this would only pay off on large sites because, as mentioned,
HTML is usually a small portion of the data.
If you had a hardware compression card in the machine, then throughput
would not be an issue and you could practically run everything through
it.
-=[ Stuart Young (Aka Cefiar) ]=--------------------------------------
| http://amarok.glasswings.com.au/ | cefiar@amarok.glasswings.com.au |
----------------------------------------------------------------------
| Jake and Elwood - The Blues Brothers! |
| "You traded it?! You traded the Blues Mobile for this?" |
| "No. For a microphone." "A microphone? OK I can see that." |
Received on Mon Jun 09 1997 - 03:15:08 MDT