Re: 2.6.29 pat issue

From: Thomas Hellstrom
Date: Tue Mar 10 2009 - 04:22:19 EST


Pallipadi, Venkatesh wrote:
On Fri, Mar 06, 2009 at 03:44:07PM -0800, Thomas Hellstrom wrote:
We get the warning when we insert RAM pages using vm_insert_pfn().
Having normal RAM pages backing a PFN mapping is a valid thing.


OK. Below is the updated patch that should fix this fully. Can you confirm?

Thanks,
Venki


Yes, this patch should fix the problem. I'm still concerned about the overhead of going through the
RAM test for each inserted page.

Why can't a pfn_valid() test be used in vm_insert_pfn()?
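
Roughly, the kind of short-circuit being asked about might sit in vm_insert_pfn()
before the PAT tracking call; this is an illustrative sketch only (the pgprot
local is assumed from the surrounding vm_insert_pfn() code), not a tested patch:

	/*
	 * Sketch, not a patch: pfn_valid() is a cheap per-pfn check, so
	 * ordinary RAM pages could skip the PAT bookkeeping instead of
	 * having pat_pagerange_is_ram() walk the memory map on every
	 * insert.  Whether skipping track_pfn_vma_new() for RAM pages
	 * is safe is exactly the open question.
	 */
	if (!pfn_valid(pfn)) {
		if (track_pfn_vma_new(vma, &pgprot, pfn, PAGE_SIZE))
			return -EINVAL;
	}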

Thanks,
Thomas



From: Venkatesh Pallipadi <venkatesh.pallipadi@xxxxxxxxx>
Subject: [PATCH] VM, x86 PAT: Change implementation of is_linear_pfn_mapping

Use of vma->vm_pgoff to identify the pfnmaps that are fully mapped at
mmap time is broken, as vm_pgoff can also be set when the full mapping is
not set up at mmap time.
http://marc.info/?l=linux-kernel&m=123383810628583&w=2
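
(The pattern that trips the old check is roughly the following; a simplified
illustration with hypothetical foo_* names, not code from any particular driver:

/* mmap time: nothing is mapped yet, but vm_pgoff already carries a
 * driver-private object offset, so the old test sees VM_PFNMAP plus a
 * non-zero vm_pgoff and wrongly treats this as a full linear mapping. */
static int foo_mmap(struct file *file, struct vm_area_struct *vma)
{
	vma->vm_flags |= VM_PFNMAP | VM_IO | VM_RESERVED;
	vma->vm_ops = &foo_vm_ops;
	return 0;
}

/* fault time: pages (possibly ordinary RAM) are inserted one by one */
static int foo_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	unsigned long pfn = foo_lookup_pfn(vma, vmf);	/* hypothetical helper */

	if (vm_insert_pfn(vma, (unsigned long)vmf->virtual_address, pfn))
		return VM_FAULT_SIGBUS;
	return VM_FAULT_NOPAGE;
}
)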

Change the logic to overload the VM_NONLINEAR flag along with VM_PFNMAP to
mean that the full mapping is set up at mmap time. This distinction is
needed by the x86 PAT code.

Regression reported at
http://bugzilla.kernel.org/show_bug.cgi?id=12800

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@xxxxxxxxx>
Signed-off-by: Suresh Siddha <suresh.b.siddha@xxxxxxxxx>
---
arch/x86/mm/pat.c | 5 +++--
include/linux/mm.h | 8 +++++++-
mm/memory.c | 6 ++++--
3 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 2ed3715..640339e 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -677,10 +677,11 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
is_ram = pat_pagerange_is_ram(paddr, paddr + size);
/*
- * reserve_pfn_range() doesn't support RAM pages.
+ * reserve_pfn_range() doesn't support RAM pages. Maintain the current
+ * behavior with RAM pages by returning success.
*/
if (is_ram != 0)
- return -EINVAL;
+ return 0;
ret = reserve_memtype(paddr, paddr + size, want_flags, &flags);
if (ret)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 065cdf8..6c3fc3a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -127,6 +127,12 @@ extern unsigned int kobjsize(const void *objp);
#define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_RESERVED | VM_PFNMAP)
/*
+ * pfnmap vmas that are fully mapped at mmap time (not mapped on fault).
+ * Used by x86 PAT to identify such PFNMAP mappings and optimize their handling.
+ */
+#define VM_PFNMAP_AT_MMAP (VM_NONLINEAR | VM_PFNMAP)
+
+/*
* mapping from the currently active vm_flags protection bits (the
* low four bits) to a page protection mask..
*/
@@ -145,7 +151,7 @@ extern pgprot_t protection_map[16];
*/
static inline int is_linear_pfn_mapping(struct vm_area_struct *vma)
{
- return ((vma->vm_flags & VM_PFNMAP) && vma->vm_pgoff);
+ return ((vma->vm_flags & VM_PFNMAP_AT_MMAP) == VM_PFNMAP_AT_MMAP);
}
static inline int is_pfn_mapping(struct vm_area_struct *vma)
diff --git a/mm/memory.c b/mm/memory.c
index baa999e..d7df5ba 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1665,9 +1665,10 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
* behaviour that some programs depend on. We mark the "original"
* un-COW'ed pages by matching them up with "vma->vm_pgoff".
*/
- if (addr == vma->vm_start && end == vma->vm_end)
+ if (addr == vma->vm_start && end == vma->vm_end) {
vma->vm_pgoff = pfn;
- else if (is_cow_mapping(vma->vm_flags))
+ vma->vm_flags |= VM_PFNMAP_AT_MMAP;
+ } else if (is_cow_mapping(vma->vm_flags))
return -EINVAL;
vma->vm_flags |= VM_IO | VM_RESERVED | VM_PFNMAP;
@@ -1679,6 +1680,7 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
* needed from higher level routine calling unmap_vmas
*/
vma->vm_flags &= ~(VM_IO | VM_RESERVED | VM_PFNMAP);
+ vma->vm_flags &= ~VM_PFNMAP_AT_MMAP;
return -EINVAL;
}
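
For completeness, with this change the distinction falls out of which mapping
API a driver uses; a simplified sketch (bar_* names are hypothetical):

/* Fully mapped at mmap time: remap_pfn_range() over the whole vma now
 * sets VM_PFNMAP_AT_MMAP, so is_linear_pfn_mapping() returns true and
 * PAT can reserve the physical range in one go. */
static int bar_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long pfn = bar_base_phys >> PAGE_SHIFT;	/* hypothetical base */

	return remap_pfn_range(vma, vma->vm_start, pfn,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}

/* Mapped on fault via vm_insert_pfn(): VM_PFNMAP_AT_MMAP is never set,
 * so is_linear_pfn_mapping() now stays false even when vm_pgoff is
 * non-zero, which is what fault-time pfnmap drivers need. */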
