On Fri, 2002-08-09 at 23:20, Joey Coco wrote:
>
> Hello,
>
> There is an obvious reason you folks copy data from the server --> object
> store --> client in chunks, rather than all at once, correct?
>
> For example, if the request is a fairly large HTML document, the code
> only reads in a few lines at a time, stores them, then copies them to the
> client. I'm just curious why it's done this way, rather than reading the
> entire object in, storing it, then copying the entire object to the
> client? (Not complaining, just curious.)
Because it's essentially the only way that scales.
If you do not transfer blocks of data as they become available, you have
to:
* Buffer (in memory or on disk) the entire object
* Keep the client from timing out somehow.
The buffering issue is very serious: imagine 100 clients, each requesting
a different 50 MB file at the same time. You would need 5 GB of storage
just to fulfil those requests. When you send smaller blocks of data
instead, you can decide whether to keep or throw away the data the client
has already received, which lets a 486 with 100 MB of RAM serve the
hypothetical example above at (moderately) high speed.
Also, *most* OS calls cannot send a 100 MB file in a single call anyway.
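
For what it's worth, the core of the chunked approach is nothing more than
a small copy loop. The following is just a sketch, not Squid's actual code
(the real thing uses non-blocking I/O and an event loop), but it shows why
memory use stays flat: each client only ever ties up one small buffer, no
matter how large the object is.

  /*
   * Minimal sketch of a chunked copy: move data from a source fd to a
   * client fd through a small fixed-size buffer, so memory use stays at
   * BUFSZ per client regardless of object size.
   */
  #include <unistd.h>
  #include <errno.h>

  #define BUFSZ (16 * 1024)   /* per-client buffer: 16 KB, not 50 MB */

  /* Returns 0 on success, -1 on error. */
  int copy_in_chunks(int src_fd, int client_fd)
  {
      char buf[BUFSZ];
      ssize_t n;

      while ((n = read(src_fd, buf, sizeof(buf))) > 0) {
          char *p = buf;
          while (n > 0) {
              ssize_t w = write(client_fd, p, (size_t)n);
              if (w < 0) {
                  if (errno == EINTR)
                      continue;   /* interrupted, retry the write */
                  return -1;      /* client gone, drop the data */
              }
              p += w;             /* handle partial writes */
              n -= w;
          }
      }
      return (n == 0) ? 0 : -1;   /* n < 0 means the read failed */
  }

With the whole-object approach, buf would have to grow to the size of the
object, which is exactly the 5 GB problem described above.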
Rob