Re: "x86-64, mm: Put early page table high" causes crash on Xen

From: Konrad Rzeszutek Wilk
Date: Wed Mar 02 2011 - 12:07:37 EST


On Wed, Mar 02, 2011 at 03:23:48PM +0000, Stefano Stabellini wrote:
> On Tue, 1 Mar 2011, Yinghai Lu wrote:
> > We do need map_low_page (BTW, that name is totally misleading...)
> >
> > the reason is that we put the page tables high, and at that time they are not under max_pfn_mapped (i.e. not mapped).
> >
> > So we have to use
> > adr = early_memremap(phys & PAGE_MASK, PAGE_SIZE);
> > to map it early and read/write to it.
>
> I think I have figured out a way to update the logic of mask_rw_pte to
> account for the new way of allocating kernel pagetables.
> The appended patch fixes the boot crash for me.
>
> ---
>
> xen: update mask_rw_pte after kernel page tables init changes
>
> Already existing kernel page table pages can now be mapped using
> early_ioremap too, so we need to update mask_rw_pte to make sure these
> pages are still mapped RO.
> We do that by mapping RO all the pages mapped using early_ioremap, apart
> from the last one that has been allocated, because it is not a page table
> page yet (it has not been hooked into the page tables yet).

..this is because the initial_kernel_mapping family of calls updates
pgt_buf_end (which used to be called e820_table_end) _before_ it calls
into the Xen MMU to set the PTE entries, and only updates the PMD after
that is done? Hence the simple check: if the PFN is the _old_ value of
pgt_buf_end, skip altering the mapping from RW to RO and leave it be?

On subsequent passes we fall within the first conditional
and update the PTE to RO? When is that pass done?


You might also want to mention the git commits that inspired this patch,
and include the nice description you provided in the first email of this
thread.


>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
>
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 62192cd..2ff68be 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1440,10 +1440,12 @@ static __init pte_t mask_rw_pte(pte_t *ptep, pte_t pte)
> /*
> * If the new pfn is within the range of the newly allocated
> * kernel pagetable, and it isn't being mapped into an
> - * early_ioremap fixmap slot, make sure it is RO.
> + * early_ioremap fixmap slot as a freshly allocated page, make sure
> + * it is RO.
> */
> - if (!is_early_ioremap_ptep(ptep) &&
> - pfn >= pgt_buf_start && pfn < pgt_buf_end)
> + if (((!is_early_ioremap_ptep(ptep) &&
> + pfn >= pgt_buf_start && pfn < pgt_buf_end)) ||
> + (is_early_ioremap_ptep(ptep) && pfn != (pgt_buf_end - 1)))
> pte = pte_wrprotect(pte);
>
> return pte;
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/