On 21 Oct 2002, Robert Collins wrote:
> Whilst this will reduce the memory races, it won't benefit the hot
> object cache directly. The change I am proposing (IMO) gives the hot
> object cache for free.
The memory/object size races are sufficient to call for this, I think. I
don't see anything in your proposal that addresses them well.
Example corner case:
Server object is 1GB large, content-length unknown. When 20
KB has been retrieved we get another request for the same object.
The first client is a modem user downloading at about 5 KB/second. The
second client is a DSL or other high-speed client downloading at a
significantly higher rate (also consider the opposite situation).
We do not want to cache objects larger than 20MB.
How do we handle these two requests?
And I do not see at all how the single/multi-client decision relates to
ranges, either in processing or in merging.
> To fix ranges, I have had to separate this out already. It's 90% done in
> fix_ranges. It is missing the replacement logic, but each fragment in
> the hot object store is able to be freed without affecting the fragments
> before or after in the data stream. In other words, I agree with you.
Good.
> Mm. I'm inclined to organically add this on to our current capability.
> I'd do this by:
> Freeing cache memory by memory LRU not memory object.
> Recording LRU data in each datastream fragment.
Minor note: hot object caches are where replacement policies other than
LRU have proven useful.
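The per-fragment LRU scheme quoted above (LRU data recorded in each datastream fragment, with fragments freeable independently of their neighbours) could be sketched roughly as below. This is a hypothetical illustration in C, not Squid's actual store code; the names (`Fragment`, `fragment_touch`, `evict_lru`) and the simple linear scan are assumptions made for clarity.

```c
#include <assert.h>
#include <stdlib.h>

/* One fragment of a cached object's data stream. Fragments carry their
 * own LRU timestamp, so any one of them can be reclaimed without
 * touching the fragments before or after it in the stream. */
typedef struct Fragment {
    long offset;            /* byte offset within the object */
    size_t len;             /* bytes held by this fragment */
    unsigned long last_use; /* logical clock value of the last access */
    struct Fragment *next;  /* next fragment in stream order */
} Fragment;

/* Record an access, for LRU ordering. */
static void fragment_touch(Fragment *f, unsigned long now) {
    f->last_use = now;
}

/* Free the least-recently-used fragment in the list and unlink it,
 * leaving the surrounding fragments intact. Returns bytes reclaimed,
 * or 0 if the list is empty. */
static size_t evict_lru(Fragment **head) {
    Fragment *victim = NULL;
    Fragment **vprev = NULL;
    for (Fragment **pp = head; *pp != NULL; pp = &(*pp)->next) {
        if (victim == NULL || (*pp)->last_use < victim->last_use) {
            victim = *pp;
            vprev = pp;
        }
    }
    if (victim == NULL)
        return 0;
    *vprev = victim->next;  /* unlink; neighbours are untouched */
    size_t reclaimed = victim->len;
    free(victim);
    return reclaimed;
}
```

A different replacement policy (per Henrik's note, e.g. frequency-based) would only change the comparison inside the scan; the per-fragment bookkeeping stays the same.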
> Let me put it this way: I need to alter the swapout code for fix ranges.
> I have a couple of choices. Do you object to the one I suggested?
Looks fine at first glance, but I am a little unsure about the detail of
having swapout act as a store client.
Regards
Henrik
Received on Sun Oct 20 2002 - 17:35:32 MDT