"Stephen R. van den Berg" <srb@cuci.nl> writes:
> Stewart Forster wrote:
> >Memory-mapped store entries are BAD because we have no control over how the
> >system stalls when it needs to flush data out to disk,
>
> The process shouldn't be waiting on this, since the kernel would be
> doing this in the background (if at all; since we're the only one opening
> the file, the system could keep all pages in memory until we terminate
> and then write them back).
The kernel will block the current process if it needs to do any
paging. In the worst case, you need to page in, but there's no free
memory, so the kernel needs to page something else out to free the
memory, but there's already a large write queue, so it takes some
seconds to do it (or tens of seconds if it's a large cache :); try
running sync(1) on a machine with 200 MB of dirty disk cache).
> > or with it stalling
> >when the data needs to be paged in from disk to do a change.
>
> The process would stall if it had to be paged in from disk, yes. But,
> since we're the only one that has the file open, apart from the initial
> read when we load the file the first time, there shouldn't really
> be any disk reads, since everything can be kept in memory.
In principle yes. However the kernel will be working against you. The
same principle that swaps out idle processes to increase the disk
cache will be paging out untouched pages in a mmap() to increase the
disk cache.
Basically, it defeats the point of the async-io stuff.
> >I vote VERY strongly against any form of mmap() data access unless it is
> >wrapped within a thread to separate page fault activity away from the main
> >thread.
>
> Do you know of any kernels which do not deal with these mmap'd files
> intelligently as conjectured above?
Umm, Linux? An intelligent kernel will not do what you want. The
intelligent kernel will reason:
  large amounts of disk activity => grow the disk cache;
  a larger disk cache => swap out or unmap pages;
  unmapping => throw away untouched pages;
so it'll evict significant portions of your mmap(), and the very disk
activity that caused the eviction will make those pages slow to pull
back in.
> I guess we'd have to maintain separate mmap/no-mmap cases anyway.
> BTW, on Linux (2.0.35), this particular mmap approach appears to be showing
> benefits only, no noticeable drawbacks so far.
How busy is this cache? When it's handling 150 new TCP connections a
second on a 40 GB disk array, that's when you can talk about how well it
does. :)
Michael.
Received on Tue Jul 29 2003 - 13:15:52 MDT
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:11:53 MST