Re: [RFC v2] arm: extend the reserved memory for initrd to be page aligned

From: Catalin Marinas
Date: Fri Dec 05 2014 - 12:07:59 EST


On Fri, Dec 05, 2014 at 12:05:06PM +0000, Will Deacon wrote:
> On Thu, Dec 04, 2014 at 12:03:05PM +0000, Catalin Marinas wrote:
> > On Mon, Sep 15, 2014 at 12:33:25PM +0100, Russell King - ARM Linux wrote:
> > > On Mon, Sep 15, 2014 at 07:07:20PM +0800, Wang, Yalin wrote:
> > > > @@ -636,6 +646,11 @@ static int keep_initrd;
> > > >  void free_initrd_mem(unsigned long start, unsigned long end)
> > > >  {
> > > >  	if (!keep_initrd) {
> > > > +		if (start == initrd_start)
> > > > +			start = round_down(start, PAGE_SIZE);
> > > > +		if (end == initrd_end)
> > > > +			end = round_up(end, PAGE_SIZE);
> > > > +
> > > >  		poison_init_mem((void *)start, PAGE_ALIGN(end) - start);
> > > >  		free_reserved_area((void *)start, (void *)end, -1, "initrd");
> > > >  	}
> > >
> > > is the only bit of code you likely need to achieve your goal.
> > >
> > > Thinking about this, I think that you are quite right to align these.
> > > The memory around the initrd is defined to be system memory, and we
> > > already free the pages around it, so it *is* wrong not to free the
> > > partial initrd pages.
> >
> > Actually, I think we have a problem, at least on arm64 (raised by Peter
> > Maydell). There is no guarantee that the page around start/end of initrd
> > is free, it may contain the dtb for example. This is even more obvious
> > when we have a 64KB page kernel (the boot loader doesn't know the page
> > size that the kernel is going to use).
> >
> > The bug was there before as we had poison_init_mem() already (now it
> > has disappeared since free_reserved_area does the poisoning).
> >
> > So as a quick fix I think we need to do the rounding the other way
> > (and in the general case we probably lose a page at the end of the
> > initrd):
> >
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index 494297c698ca..39fd080683e7 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -335,9 +335,9 @@ void free_initrd_mem(unsigned long start, unsigned long end)
> >  {
> >  	if (!keep_initrd) {
> >  		if (start == initrd_start)
> > -			start = round_down(start, PAGE_SIZE);
> > +			start = round_up(start, PAGE_SIZE);
> >  		if (end == initrd_end)
> > -			end = round_up(end, PAGE_SIZE);
> > +			end = round_down(end, PAGE_SIZE);
> >
> >  		free_reserved_area((void *)start, (void *)end, 0, "initrd");
> >  	}
> >
> > A better fix would be to check what else is around the start/end of
> > initrd.
>
> Care to submit this as a proper patch? We should at least fix Peter's issue
> before doing things like extending headers, which won't work for older
> kernels anyway.

The quick fix is to revert the whole patch, together with removing the
PAGE_ALIGN(end) in poison_init_mem() on arm32. If Russell is ok with
this patch, we can take it via the arm64 tree, otherwise I'll send you a
partial revert only for the arm64 part.

-------------8<-----------------------