ARM-related ptep_to_address() fix

From: William Lee Irwin III
Date: Sat Apr 17 2004 - 00:51:59 EST


rmk mentioned that ARM was borked because the relation assumed by generic rmap,
PTRS_PER_PTE*sizeof(pte_t) == PAGE_SIZE, fails to hold there. The following
patch, developed jointly with him (or, depending on your POV, by him with me
acting as codemonkey), is reported to resolve the issue.

Specifically, while ARM dedicates an entire PAGE_SIZE-sized block of
memory to each PTE table, the PTE table itself spans only half of it,
the remainder being dedicated to hardware-interpreted structures. As those
hardware structures must be contiguous, wider ptes can't be used to restore
the relation. So the core-visible PTE table spans only PAGE_SIZE/2 bytes,
violating the assumption. The patch corrects the masking and scaling done
in ptep_to_address() accordingly.
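
To put numbers on that (purely illustrative, assuming the usual 4KB pages
and 4-byte ptes; none of this is in the patch itself):

/*
 * Illustrative sketch only, not kernel code.  With the assumed values
 * below (4KB pages, 4-byte ptes), the core-visible PTE table has 512
 * entries and spans 2048 bytes -- half the page -- while generic rmap
 * assumed it spanned the whole page.
 */
#define EX_PAGE_SIZE     4096UL
#define EX_PTE_SIZE      4UL      /* stand-in for sizeof(pte_t) */
#define EX_PTRS_PER_PTE  512UL

/* generic rmap's assumption:  EX_PTRS_PER_PTE * EX_PTE_SIZE == EX_PAGE_SIZE
 * on this layout:             512 * 4 == 2048 != 4096                       */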

I'm aware that hugh's and andrea's patches address this by blowing away the
dependence on this address calculation altogether, so this should be
considered a stopgap measure at best, if not dropped entirely on account of
the imminent merging of hugh's and/or andrea's code.


-- wli


Index: wli-2.6.5-mm6/include/asm-generic/rmap.h
===================================================================
--- wli-2.6.5-mm6.orig/include/asm-generic/rmap.h 2004-04-03 19:37:24.000000000 -0800
+++ wli-2.6.5-mm6/include/asm-generic/rmap.h 2004-04-16 12:21:18.000000000 -0700
@@ -57,7 +57,8 @@
 {
 	struct page * page = kmap_atomic_to_page(ptep);
 	unsigned long low_bits;
-	low_bits = ((unsigned long)ptep & ~PAGE_MASK) * PTRS_PER_PTE;
+	low_bits = ((unsigned long)ptep & (PTRS_PER_PTE*sizeof(pte_t) - 1))
+					* (PAGE_SIZE/sizeof(pte_t));
 	return page->index + low_bits;
 }
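
As a sanity check on the arithmetic, here's a quick userspace sketch
comparing the old and new expressions (again with assumed example constants
-- 4KB pages, 4-byte ptes, a 512-entry table -- so it's only an
illustration, not kernel code):

#include <stdio.h>

/* Assumed example constants mirroring the half-page layout described above. */
#define EX_PAGE_SIZE     4096UL
#define EX_PAGE_MASK     (~(EX_PAGE_SIZE - 1))
#define EX_PTE_SIZE      4UL                  /* stand-in for sizeof(pte_t) */
#define EX_PTRS_PER_PTE  512UL

int main(void)
{
        /* A pte pointer 100 entries into a table at (page-aligned) 0x1000. */
        unsigned long ptep = 0x1000UL + 100 * EX_PTE_SIZE;

        /* Old expression: assumes the pte table spans the whole page. */
        unsigned long old_bits = (ptep & ~EX_PAGE_MASK) * EX_PTRS_PER_PTE;

        /*
         * New expression: mask down to the table's real span, then scale
         * by the virtual address range each pte maps (one page per entry).
         */
        unsigned long new_bits = (ptep & (EX_PTRS_PER_PTE * EX_PTE_SIZE - 1))
                                        * (EX_PAGE_SIZE / EX_PTE_SIZE);

        /* Entry 100 should correspond to offset 100 * 4096 = 0x64000. */
        printf("old low_bits = %#lx (wrong here)\n", old_bits);  /* 0x32000 */
        printf("new low_bits = %#lx\n", new_bits);               /* 0x64000 */
        return 0;
}

On an architecture where the pte table does fill the page, the new mask and
multiplier reduce to ~PAGE_MASK and PTRS_PER_PTE, so the corrected
expression degenerates to the old one there.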
