Re: [patch 0/5] Support for sanitization flag in low-level page allocator

From: Arjan van de Ven
Date: Sun May 31 2009 - 10:38:33 EST


On Sat, 30 May 2009 12:39:33 +0200
Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:

> > > So if you zero on free, the next allocation will reuse the zeroed
> > > page. And due to LIFO that is not too far out "often", which
> > > makes it likely the page is still in L2 cache.
> >
> > Thanks for pointing this out clearly, Arjan.
>
> Thing is, the time between allocation and use is typically orders of
> magnitude less than between free and use.
>
>
> Really, get a life, go fix real bugs. Don't make our kernel slower

the "make it slower" is an assumption on your part.
I'm not convinced. Would like to see data!
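
Something along the lines of the sketch below would be a start at getting
that data. It is purely a user-space model, not the kernel path, and the
page size, the eviction buffer size and the timer choice are assumptions
on my part; it just tries to put a number on how much the first write to
a page costs when the freshly zeroed page is still warm in cache versus
already evicted:

/* Build: gcc -O2 -o zerobench zerobench.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define PAGE_SIZE	4096
#define EVICT_SIZE	(32 * 1024 * 1024)	/* big enough to push the page out of L2/L3 */
#define ITERATIONS	2000

/* keep the compiler from optimizing the memsets away */
#define barrier()	asm volatile("" ::: "memory")

static double now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1e9 + ts.tv_nsec;
}

int main(void)
{
	char *page = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
	char *evict = malloc(EVICT_SIZE);
	double warm = 0.0, cold = 0.0;
	double t;
	int i;

	if (!page || !evict)
		return 1;

	for (i = 0; i < ITERATIONS; i++) {
		/* model zero-on-free with immediate (LIFO) reuse:
		 * the zeroed page is still hot when the new owner writes it */
		memset(page, 0, PAGE_SIZE);
		barrier();
		t = now_ns();
		memset(page, 0x55, PAGE_SIZE);	/* the new owner writing to it */
		barrier();
		warm += now_ns() - t;

		/* same, but with a long gap between free and reuse:
		 * walk a large buffer so the zeroed page falls out of cache */
		memset(page, 0, PAGE_SIZE);
		memset(evict, i & 0xff, EVICT_SIZE);
		barrier();
		t = now_ns();
		memset(page, 0x55, PAGE_SIZE);
		barrier();
		cold += now_ns() - t;
	}

	printf("write to warm zeroed page: %6.1f ns, to evicted one: %6.1f ns\n",
	       warm / ITERATIONS, cold / ITERATIONS);
	free(page);
	free(evict);
	return 0;
}

If the warm and cold averages come out close on common hardware, the
"it's still in L2" argument buys little; if they are far apart, it buys
a lot. Either way that is data instead of assumptions.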

You're balancing a few things in your assumption:
* The %age of pages that get zeroed on free but are not reused in time,
  so the zeroed data gets flushed from L2 before it is used
* The %age of pages that today don't get zeroed at all
versus
* The %age of the page that you are not going to read if you zero at
  allocation time, but whose zeroing still wipes a portion of L1 cache

Add to that:
* Reading a just-allocated page is much rarer than writing to it.
  It's just zeros, after all ;-)
  It is also unclear (and CPU dependent) whether, for writes, it matters
  at all if the old (zero) data is in the cache or not, which weakens
  your "but it's now in the cache" argument.
* My assumption is that allocations are more latency-sensitive than
  frees. After all, on allocate you're about to use the page, while on
  free you're done with what you wanted to do, and I assume the
  performance of that matters less on average.
* We "need" to zero-on-allocate while holding the mmap semaphore; on
  free we clearly don't. We know this causes lock contention in highly
  threaded workloads... and zero-on-free gets rid of that entirely
  (see the sketch below).
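
To make that last point concrete, here is a toy user-space sketch -- a
pthread mutex standing in for mmap_sem, a trivial free list, and all
names invented for the example, so nothing like the real allocator code --
that only shows where the memset lands relative to the lock in the two
schemes:

/* Build: gcc -O2 -pthread -o zerotoy zerotoy.c */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE	4096
#define POOL_PAGES	1024

static pthread_mutex_t alloc_lock = PTHREAD_MUTEX_INITIALIZER;
static void *free_list[POOL_PAGES];
static int free_top;

/* zero-on-allocate: the memset sits inside the critical section,
 * so every concurrent allocating thread serializes behind it */
void *alloc_page_zero_on_alloc(void)
{
	void *page;

	pthread_mutex_lock(&alloc_lock);
	page = free_top ? free_list[--free_top]
			: aligned_alloc(PAGE_SIZE, PAGE_SIZE);
	if (page)
		memset(page, 0, PAGE_SIZE);	/* done with the lock held */
	pthread_mutex_unlock(&alloc_lock);
	return page;
}

/* zero-on-free: the memset happens with no lock held at all... */
void free_page_zero_on_free(void *page)
{
	memset(page, 0, PAGE_SIZE);		/* no lock held here */
	pthread_mutex_lock(&alloc_lock);
	if (free_top < POOL_PAGES)
		free_list[free_top++] = page;
	else
		free(page);
	pthread_mutex_unlock(&alloc_lock);
}

/* ...so the matching allocation only pops an already-zeroed page
 * and the critical section stays short */
void *alloc_page_prezeroed(void)
{
	void *page;

	pthread_mutex_lock(&alloc_lock);
	page = free_top ? free_list[--free_top] : NULL;
	pthread_mutex_unlock(&alloc_lock);

	if (!page) {
		page = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
		if (page)
			memset(page, 0, PAGE_SIZE);	/* slow path, outside the lock */
	}
	return page;
}

int main(void)
{
	void *p = alloc_page_zero_on_alloc();

	if (!p)
		return 1;
	free_page_zero_on_free(p);
	p = alloc_page_prezeroed();	/* reuses the page zeroed at free time */
	free(p);
	return 0;
}

The point is only the shape: with zero-on-allocate every thread carries
the memset through the critical section, while with zero-on-free the
memset runs with no lock held and the allocation-side critical section
is just a list pop.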


--
Arjan van de Ven -- Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org