Having tried to "decipher" the principles of rock some time ago, my
impression at the time was that this long rebuild period is caused by the
design of rock: the whole rock area has to be scanned to find all content
and then to initialize Squid's in-memory pointers. 16GB of rock storage
will contain a huge number of rock items, so the scan takes a lot of time.
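(As a back-of-envelope example, and purely an assumption on my part about
the slot size: with something like 16KB slots, 16GB of rock works out to
roughly a million slots whose metadata has to be read and indexed during
the rebuild.)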
From "ancient" experience in developing real-time, logging filesystems, the
fastest approach for scanning a journal file, for example, was simply to
create one huge file, contiguous. So a sequential scan could be done using
lowest level disk access routines, reading large amounts of (sequential)
disk blocks in one I/O, and then deblock ourselves. And doing double
buffered reads, to read next large amount of disk blocks during the time,
previous amount is deblocked/analyzed.
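To illustrate, here is a minimal C sketch of such a double-buffered scan,
using POSIX AIO to prefetch the next chunk while the current one is being
deblocked. This is just my sketch, not Squid code: the 4MB chunk size, the
deblock() placeholder and the command-line argument are assumptions (and
on older glibc you would link with -lrt):

    /* Sketch of a double-buffered sequential scan of one large contiguous
     * file (or raw device).  Chunk size and deblock() are placeholders. */
    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define CHUNK (4 * 1024 * 1024)  /* read 4MB of sequential blocks per I/O */

    static void deblock(const char *buf, size_t len)
    {
        /* Walk the fixed-size records inside this chunk and rebuild the
         * in-memory index from them (placeholder). */
        (void)buf; (void)len;
    }

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <rock-file-or-device>\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        static char buf[2][CHUNK];   /* one buffer in flight, one being analyzed */
        off_t off = 0;
        int cur = 0;

        /* Prime the pipeline: read the first chunk synchronously. */
        ssize_t got = pread(fd, buf[cur], CHUNK, off);

        while (got > 0) {
            /* Start reading the NEXT chunk into the other buffer ... */
            struct aiocb cb;
            memset(&cb, 0, sizeof(cb));
            cb.aio_fildes = fd;
            cb.aio_buf    = buf[1 - cur];
            cb.aio_nbytes = CHUNK;
            cb.aio_offset = off + got;
            if (aio_read(&cb) != 0) {
                perror("aio_read");
                deblock(buf[cur], (size_t)got);
                break;
            }

            /* ... and deblock/analyze the CURRENT chunk while that read runs. */
            deblock(buf[cur], (size_t)got);

            /* Wait for the prefetch to finish, then swap buffers. */
            const struct aiocb *list[1] = { &cb };
            while (aio_error(&cb) == EINPROGRESS)
                aio_suspend(list, 1, NULL);
            off += got;
            got = aio_return(&cb);
            cur = 1 - cur;
        }

        close(fd);
        return 0;
    }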
So the OS filesystem was circumvented completely; it was only used to
create the file itself.
In Linux, I presume, a raw device (no filesystem at all) could be used to
the same effect for storing rock, since no index nodes etc. are necessary
to manage the rock storage space.
Drawback: buffering or merging of writes will not occur unless you program
it explicitly yourself.
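For illustration only, a small sketch of what I have in mind on Linux:
opening a block device with O_DIRECT so the page cache is bypassed. The
device path and the 4KB alignment are assumptions of mine, and the sketch
shows exactly the drawback above, namely that the application has to
supply aligned, block-sized buffers and merge smaller writes itself:

    /* Sketch: raw access to a block device with O_DIRECT (bypasses the
     * page cache).  Device path and 4KB alignment are assumptions. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t align = 4096;                      /* assumed block size  */
        int fd = open("/dev/sdb1", O_RDWR | O_DIRECT);  /* hypothetical device */
        if (fd < 0) { perror("open"); return 1; }

        /* O_DIRECT requires buffers aligned to the block size. */
        void *buf;
        if (posix_memalign(&buf, align, align) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }
        memset(buf, 0, align);

        /* One full block per write: smaller writes have to be accumulated
         * (merged) by the application before being issued. */
        if (pwrite(fd, buf, align, 0) != (ssize_t)align)
            perror("pwrite");

        free(buf);
        close(fd);
        return 0;
    }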
Or, a more drastic and demanding solution: use hashing for the Squid
in-memory-to-rock-disk pointers.
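Again only a rough sketch of the idea, nothing taken from Squid: an
in-memory open-addressing table that maps a (hypothetical) 64-bit
store-key hash directly to a rock slot offset on disk. Table size, key
type and the mixing constant are illustrative; the sketch does no resizing
and treats key 0 as "empty":

    /* Sketch of an in-memory hash index: 64-bit store-key hash -> disk offset.
     * Open addressing with linear probing; key 0 means "empty slot";
     * no resizing or deletion in this sketch. */
    #include <stddef.h>
    #include <stdint.h>

    #define SLOTS (1u << 20)          /* ~1M entries, assumed capacity */

    struct map_entry {
        uint64_t key;                 /* hash of the store key (0 = empty)  */
        uint64_t disk_offset;         /* byte offset of the slot in rock db */
    };

    static struct map_entry table[SLOTS];

    static size_t probe(uint64_t key)
    {
        size_t i = (size_t)(key * 0x9E3779B97F4A7C15ull) % SLOTS;  /* cheap mix */
        while (table[i].key != 0 && table[i].key != key)
            i = (i + 1) % SLOTS;      /* linear probing on collision */
        return i;
    }

    void map_put(uint64_t key, uint64_t disk_offset)
    {
        size_t i = probe(key);
        table[i].key = key;
        table[i].disk_offset = disk_offset;
    }

    /* Returns 1 and fills *disk_offset if the key is present, else 0. */
    int map_get(uint64_t key, uint64_t *disk_offset)
    {
        size_t i = probe(key);
        if (table[i].key != key)
            return 0;
        *disk_offset = table[i].disk_offset;
        return 1;
    }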
Just my few cents.