I have an application that does HTTP GETs of large files (>500MB) and,
in order to minimize the pain of losing a connection and having to
reconnect, it requests 1MB file chunks and reassembles them once all
pieces have been received. I am hoping to use Squid to reduce the load
on the main webserver farm and eventually distribute the content
geographically without worrying about replication issues.
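For reference, the client behaviour is roughly like the sketch below.
This is only a minimal illustration; the URL, chunk size constant, and
helper names are my own assumptions, not the actual application code.

import urllib.request

CHUNK_SIZE = 1024 * 1024  # 1MB chunks, as described above

def fetch_chunk(url, index):
    # Fetch one chunk with an HTTP Range request.
    start = index * CHUNK_SIZE
    end = start + CHUNK_SIZE - 1
    req = urllib.request.Request(
        url, headers={"Range": "bytes=%d-%d" % (start, end)})
    with urllib.request.urlopen(req) as resp:
        # 206 Partial Content means the range was honoured;
        # a 200 would mean the whole file came back instead.
        return resp.read()

def download(url, total_size, out_path):
    # Request every chunk in order and reassemble the file.
    chunks = (total_size + CHUNK_SIZE - 1) // CHUNK_SIZE
    with open(out_path, "wb") as out:
        for i in range(chunks):
            out.write(fetch_chunk(url, i))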
How does Squid handle caching of these file chunks? Does it fetch the
entire file in order to serve the request for 1MB, or does it only
request the part of the file being requested by the client? The client
side always requests files in exactly the same way, i.e. chunk 1 =
1-1024, chunk 2 = 1025-2048, chunk 3 = 2049-3072, etc., so there is
value in caching the individual file chunks.
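For what it's worth, sending a single Range request through the proxy
and looking at the status code and Content-Range header shows whether
the range is being honoured end to end. A quick sketch follows; the
proxy address and test URL are placeholders I have assumed.

import urllib.request

# Send one 1MB Range request through the proxy and inspect the reply.
# The proxy address and test URL are placeholder assumptions.
proxy = urllib.request.ProxyHandler({"http": "http://squid.example.com:3128"})
opener = urllib.request.build_opener(proxy)

req = urllib.request.Request(
    "http://origin.example.com/bigfile.bin",
    headers={"Range": "bytes=0-1048575"},  # first 1MB chunk
)
with opener.open(req) as resp:
    # 206 Partial Content plus a Content-Range header means the range
    # was served as a partial response; 200 means the full object came back.
    print(resp.status, resp.headers.get("Content-Range"))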
Any input is greatly appreciated.
Dave Theodore
Received on Mon Mar 22 2004 - 16:40:03 MST