Re: [2.1.130-3] Page cache DEFINITELY too persistent... feature?

MOLNAR Ingo (mingo@chiara.csoma.elte.hu)
Fri, 27 Nov 1998 01:01:39 +0100 (CET)


Hm, after thinking about it some more, I see the point. We should not
throttle an application that knows its memory limits _exactly_ (e.g. a
well-tuned simulation) and stops just short of dirtying too much. Any
write throttling effectively decreases the maximum amount of core memory
visible to a 'good' application: if the kernel starts throttling at, say,
80% of RAM dirty, a tuned application can only ever use that 80%.

The problem is not 'too much dirty data'; the problem is that the two
data caches do not couple well. Throttling does not make the system
progress any better, it just ensures that kswapd doesn't get triggered,
which is only a symptom. Too much dirty data is simply a fact of life. If
one wants nicely overlapped IO, one would do explicit fsync()s in the
application anyway, as in the sketch below. But generic write throttling
just punishes good applications, I think.
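
For illustration, a minimal user-space sketch of that pattern: write in
bounded chunks and fsync() after each one, so the disk is busy with the
previous chunk while the application produces the next. The file name and
the 4MB chunk size are made up for the example:

	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>

	#define CHUNK (4 * 1024 * 1024)	/* flush every 4MB -- tunable */

	int main(void)
	{
		static char buf[64 * 1024];
		long since_sync = 0;
		int i, fd;

		fd = open("output.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
		if (fd < 0)
			return 1;
		memset(buf, 'x', sizeof(buf));

		for (i = 0; i < 1024; i++) {
			write(fd, buf, sizeof(buf));	/* dirties the page cache */
			since_sync += sizeof(buf);
			if (since_sync >= CHUNK) {
				/* push this chunk to disk now, so the IO
				   overlaps the next round of work */
				fsync(fd);
				since_sync = 0;
			}
		}
		fsync(fd);
		close(fd);
		return 0;
	}

The application, not the kernel, decides where the flush points are, so a
well-tuned program loses nothing.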

We can also use heuristics in the kernel to sort out dirty buffers that
are not likely to be reused soon; e.g. we use a 'cache_drop_behind(bh)'
call when doing RAID reconstruction. (Admittedly that case is special,
because we have a guaranteed, very big single-pass write.) We could do the
same for big sequential writes, along the lines of the sketch below. But
this is definitely not a 2.2 issue.
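
To make that concrete, a rough sketch of what such a heuristic might look
like. This is illustrative pseudocode under stated assumptions: struct
write_hint and note_dirty_block() are invented bookkeeping, and
cache_drop_behind() just stands for the RAID-reconstruction helper named
above, not a real 2.1 interface for this path:

	/* Hypothetical per-file bookkeeping -- not real 2.1 fields. */
	struct write_hint {
		unsigned long last_block;	/* last block written */
		unsigned int  seq_count;	/* length of in-order run */
	};

	#define SEQ_THRESHOLD 32	/* in-order blocks before we call
					   the write 'streaming' */

	static void note_dirty_block(struct write_hint *h,
				     struct buffer_head *bh,
				     unsigned long block)
	{
		if (block == h->last_block + 1)
			h->seq_count++;
		else
			h->seq_count = 0;	/* random write: reset run */
		h->last_block = block;

		/*
		 * A long in-order run means a big sequential write; the
		 * data is unlikely to be re-read soon, so drop the buffer
		 * as soon as it reaches disk instead of letting it age in
		 * the cache.
		 */
		if (h->seq_count > SEQ_THRESHOLD)
			cache_drop_behind(bh);
	}

The threshold is the only tunable; everything else falls out of tracking
whether writes arrive in block order.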

-- mingo
