On Thu 2013-07-25 at 18:53 +1200, Amos Jeffries wrote:
> Which problem specifically? that churn exists? that it can grow big +
> churn? races between clients? or that letting it out to disk can cause
> churn to be slooow?
In the design used by Squid-2 there is quite a bit of churn in the
x-vary object, and it has been seen growing quite large in some extreme
cases ("Vary: Cookie" iirc).
Races between clients have been seen.
There are also conflicts between x-vary updates and clients aborting,
causing the new x-vary object to be discarded as well and making Squid
forget the map, but that's a bug.
Proper handling of cache validations is the main concern.
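To illustrate the churn: the secondary lookup key is derived from whichever
request headers the response's Vary header names, so with "Vary: Cookie"
every distinct Cookie value produces a new map entry. A minimal sketch of
that derivation, purely illustrative and not Squid's actual code:

// Illustrative only: derive a secondary cache key from the request
// headers named by the response's Vary header.  With "Vary: Cookie"
// every distinct Cookie value yields a new key, hence the map growth.
#include <functional>
#include <map>
#include <sstream>
#include <string>

using HeaderMap = std::map<std::string, std::string>; // header name -> value

std::string varyKey(const std::string &varyHeader, const HeaderMap &requestHeaders)
{
    std::ostringstream key;
    std::istringstream names(varyHeader);
    std::string name;
    while (std::getline(names, name, ',')) {
        // trim surrounding whitespace from the header name
        const auto begin = name.find_first_not_of(" \t");
        const auto end = name.find_last_not_of(" \t");
        name = (begin == std::string::npos) ? "" : name.substr(begin, end - begin + 1);
        const auto it = requestHeaders.find(name);
        key << name << '=' << (it != requestHeaders.end() ? it->second : "") << ';';
    }
    // a real implementation would hash this down to fixed-size key bits
    return std::to_string(std::hash<std::string>{}(key.str()));
}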
> I have been playing with the idea of locking these into memory cache, or
> using a dedicated memory area just for them to avoid the speed issues. A
> specialized store for them will also allow us to isolate the
> secondary-lookup logic in that stores lookup process - it can identify
> the variant and recurse down to other stores for the final selection
> using the extra key bits.
What would be used as the permanent store?
And do you want to store each 304 mapping response separately so a scan
can rebuild the map?
And what about stores that have no index? IIRC one of our goals is to
optionally have no in-memory cache index at all.
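As I understand it, the scan-rebuild idea would amount to something like the
sketch below, assuming each stored variant record carried its base URL and
the vary key bits it was stored under; VariantRecord and rebuildVaryMap are
hypothetical names, not existing Squid code:

// Hypothetical sketch of regenerating the vary map from a startup scan,
// without persisting the x-vary object itself.
#include <map>
#include <string>
#include <vector>

struct VariantRecord {
    std::string baseUrl;     // the URL shared by all variants
    std::string varyKeyBits; // secondary key derived from request headers
    std::string storeKey;    // where this particular variant lives
};

// base URL -> (vary key bits -> variant store key)
using VaryMap = std::map<std::string, std::map<std::string, std::string>>;

VaryMap rebuildVaryMap(const std::vector<VariantRecord> &scanned)
{
    VaryMap map;
    for (const auto &rec : scanned)
        map[rec.baseUrl][rec.varyKeyBits] = rec.storeKey;
    return map;
}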
> I believe that they can be generated from a disk scan and if necessary
> we can add swap.state TLV entries for the missing x-vary meta details to
> be reloaded quickly.
The x-vary meta is not very small. For each request header combination
it stores:
- request header contents
- timing details for validation
- which object variant to map to
And there is also a map of known object variants with their ETag values
and Content-Encoding, the latter to work around the dynamic gzip
brainfart in many major web servers, including Apache.
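Put as a rough sketch (field names are illustrative, not Squid's actual
structures), the meta is on the order of:

// Rough sketch of what the x-vary meta described above holds,
// to show why it is not small.
#include <ctime>
#include <map>
#include <string>
#include <vector>

struct VaryMapEntry {
    std::string requestHeaders; // raw request header values being varied on
    std::time_t lastValidation; // timing details used for cache validation
    std::string variantKey;     // which stored object variant this maps to
};

struct KnownVariant {
    std::string etag;            // ETag reported by the origin for this variant
    std::string contentEncoding; // kept to cope with dynamic gzip differences
};

struct XVaryMeta {
    std::vector<VaryMapEntry> entries;            // one per request header combination
    std::map<std::string, KnownVariant> variants; // variant key -> known identity
};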
> That would make them churn particularly badly on
> startup, but avoid the necessity to store anywhere long-term, and help
> detect obsolete variants undeleted from disk.
The total system churn at startup is already majorly bad with both ufs
and rock stores. Caches are growing quite large today with current disk
& memory prices.
Regards
Henrik