Re: New version of pageattr caching conflict fix for 2.4

From: Andrea Arcangeli
Date: Fri Jun 14 2002 - 12:31:33 EST

On Fri, Jun 14, 2002 at 06:13:28PM +0200, Andi Kleen wrote:
> + /* Hopefully not be mapped anywhere else. */
> + if (page >= highmem_start_page)
> + return 0;
> +#endif

there's no hope here. If you don't want to code it right (flushing both
any per-cpu kmap-atomic slot and the page->virtual mapping) because
nobody is exercising that path, please place a BUG() there, or some more
graceful failure notification.

> +int change_page_attr(struct page *page, int numpages, pgprot_t prot)
> +{

this API is not the best; again, I would recommend something along these lines:

        struct page ** physical_alias_alloc_pages(int numpages, unsigned int gfp_mask);
        void physical_alias_free_pages(struct page **);

the semantics are trivial: physical_alias_alloc_pages() will return an
array of numpages entries (allocated with kmalloc) that points to all
the pages that have been prepared by the architectural call (of course
those two functions live in arch/) for the generation of a physical alias.

This allows the arch to allocate the pages with order > 0, so the
number of fragmented 4M/2M pages will be reduced.

The only requirement is that you know the number of pages that you're
going to allocate; if some caller doesn't, we can also add this
additional API:

        struct page * physical_alias_alloc_page(unsigned int gfp_mask);
        void physical_alias_free_page(struct page *);
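The single-page variant can then be a thin wrapper over the array call;
a hypothetical userspace sketch (malloc/free stand in for the kernel
allocators, and the array is NULL-terminated so the free side needs no
length argument -- all assumptions, not the proposed implementation):

```c
#include <stdlib.h>

struct page { int dummy; };

/* Userspace stub for the proposed arch call (hypothetical): returns a
 * NULL-terminated, kmalloc-style array of prepared pages. */
static struct page **physical_alias_alloc_pages(int numpages,
						unsigned int gfp_mask)
{
	struct page **arr = calloc(numpages + 1, sizeof(*arr));
	int i;

	(void)gfp_mask;
	if (!arr)
		return NULL;
	for (i = 0; i < numpages; i++)
		arr[i] = calloc(1, sizeof(struct page));
	return arr;
}

/* Single-page convenience wrapper: allocate an array of one entry,
 * keep the page, drop the array. */
static struct page *physical_alias_alloc_page(unsigned int gfp_mask)
{
	struct page **arr = physical_alias_alloc_pages(1, gfp_mask);
	struct page *page;

	if (!arr)
		return NULL;
	page = arr[0];
	free(arr);
	return page;
}

static void physical_alias_free_page(struct page *page)
{
	free(page);
}
```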


This archive was generated by hypermail 2b29 : Sat Jun 15 2002 - 22:00:32 EST