Re: OOM fixes 2/5

From: Andrea Arcangeli
Date: Fri Jan 21 2005 - 02:08:42 EST


On Thu, Jan 20, 2005 at 10:46:45PM -0800, Andrew Morton wrote:
> Thus empirically, it appears that the number of machines which need a
> non-zero protection ratio is exceedingly small. Why change the setting on
> all machines for the benefit of the tiny few? Seems weird. Especially
> when this problem could be solved with a few-line initscript. Ho hum.

It's up to you; IMHO you're making a mistake, but I don't mind as long as our
customers aren't at risk of early oom kills (or worse, kernel crashes)
under some db loads. Especially without swap the risk is huge for all
users, since all anonymous memory will be pinned just like the ptes, but
with ~3G of pagetables they're at risk even with swap. At least you *must*
admit that without my patch applied as I posted it, there's a >0 probability
of running out of the normal zone, which will lead to an oom kill or a
deadlock even though 10G of highmem might still be freeable (e.g. as
clean cache). My patch obviously cannot make it impossible to run
out of the normal zone, since there's only 800M of normal zone and one can
open more files than fit in it, but at least it gives the
user the assurance that a certain workload can run reliably. Without this
patch there's no guarantee at all that any workload will keep running once
>1G of ptes is allocated.
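
(As an aside, that "few-line initscript" approach would just raise the
protection ratio at boot. A minimal userspace sketch follows, assuming the
/proc/sys/vm/lower_zone_protection knob of the 2.6 kernels of this era; the
knob name and the value 10 are illustrative assumptions, not something
specified in this thread.)

	/*
	 * Illustrative only: raise the lowmem protection ratio at boot
	 * instead of changing the kernel default.  The sysctl path and
	 * the value 10 are assumptions for the sketch.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		FILE *f = fopen("/proc/sys/vm/lower_zone_protection", "w");

		if (!f) {
			perror("open lower_zone_protection");
			return EXIT_FAILURE;
		}
		if (fprintf(f, "10\n") < 0) {
			perror("write lower_zone_protection");
			fclose(f);
			return EXIT_FAILURE;
		}
		if (fclose(f) != 0) {
			perror("close lower_zone_protection");
			return EXIT_FAILURE;
		}
		return EXIT_SUCCESS;
	}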

The fix below is needed as well, even though you won't find reports of
people reproducing this race condition. Please apply. CC'ed Hugh. Sorry Hugh,
I know you were working on it (you said not over the weekend, IIRC), but I've
upgraded to the latest bk tree, so I had to fix it up quickly or I would have
had to run the racy code on my SMP systems to test new kernels.

From: Andrea Arcangeli <andrea@xxxxxxx>
Subject: fixup smp race introduced in 2.6.11-rc1

Signed-off-by: Andrea Arcangeli <andrea@xxxxxxx>

--- x/mm/memory.c.~1~ 2005-01-21 06:58:14.747335048 +0100
+++ x/mm/memory.c 2005-01-21 07:16:15.318063328 +0100
@@ -1555,8 +1555,17 @@ void unmap_mapping_range(struct address_

spin_lock(&mapping->i_mmap_lock);

+ /* serialize i_size write against truncate_count write */
+ smp_wmb();
/* Protect against page faults, and endless unmapping loops */
mapping->truncate_count++;
+ /*
+ * For archs where spin_lock has inclusive semantics, like ia64,
+ * this smp_mb() prevents the pagetable contents from being read
+ * before the truncate_count increment is visible to
+ * other cpus.
+ */
+ smp_mb();
if (unlikely(is_restart_addr(mapping->truncate_count))) {
if (mapping->truncate_count == 0)
reset_vma_truncate_counts(mapping);
@@ -1864,10 +1873,18 @@ do_no_page(struct mm_struct *mm, struct
if (vma->vm_file) {
mapping = vma->vm_file->f_mapping;
sequence = mapping->truncate_count;
+ smp_rmb(); /* serializes i_size against truncate_count */
}
retry:
cond_resched();
new_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
+ /*
+ * No smp_rmb is needed here as long as there's a full
+ * spin_lock/unlock sequence inside the ->nopage callback
+ * (for the pagecache lookup) that acts as an implicit
+ * smp_mb() and prevents the i_size read from happening
+ * after the next truncate_count read.
+ */

/* no page was available -- either SIGBUS or OOM */
if (new_page == NOPAGE_SIGBUS)
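
For completeness, the ordering the patch relies on can be modeled in
userspace roughly as below, with C11 fences standing in for
smp_wmb()/smp_rmb()/smp_mb(). The names (mapping_model, model_truncate,
model_fault) are made up for the sketch and heavily simplified from the real
truncate and fault paths; note the real fault path relies on the implicit
barrier inside ->nopage instead of the second explicit reader fence.

	/*
	 * Rough userspace model of the ordering the patch relies on.
	 * C11 fences stand in for the kernel barriers; this is not the
	 * kernel code itself.
	 */
	#include <stdatomic.h>

	struct mapping_model {
		_Atomic long i_size;
		_Atomic unsigned truncate_count;
	};

	/* Truncate side: publish the new i_size before bumping the counter. */
	void model_truncate(struct mapping_model *m, long new_size)
	{
		atomic_store_explicit(&m->i_size, new_size, memory_order_relaxed);
		atomic_thread_fence(memory_order_release);	/* ~ smp_wmb() */
		atomic_fetch_add_explicit(&m->truncate_count, 1,
					  memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);	/* ~ smp_mb() */
		/* ... now walk and clear the pagetables ... */
	}

	/* Fault side: sample the counter, do the lookup, retry if it moved. */
	long model_fault(struct mapping_model *m)
	{
		unsigned seq;
		long size;

		do {
			seq = atomic_load_explicit(&m->truncate_count,
						   memory_order_relaxed);
			atomic_thread_fence(memory_order_acquire); /* ~ smp_rmb() */
			/* ... the ->nopage pagecache lookup would happen here ... */
			size = atomic_load_explicit(&m->i_size,
						    memory_order_relaxed);
			atomic_thread_fence(memory_order_acquire); /* ~ smp_rmb() */
		} while (seq != atomic_load_explicit(&m->truncate_count,
						     memory_order_relaxed));
		return size;
	}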
