Re: [PATCH v2 01/12] x86/ioremap: Fix page aligned size calculation in __ioremap_caller()

From: Dave Hansen
Date: Fri Nov 11 2022 - 19:11:46 EST


On 11/10/22 22:21, Michael Kelley wrote:
> If applying the PHYSICAL_PAGE_MASK to the phys_addr argument causes
> upper bits to be masked out, the re-calculation of size to account for
> page alignment is incorrect because the same bits are not masked out
> in last_addr.
>
> Fix this by masking the page aligned last_addr as well.

This makes sense at first glance.

How did you notice this? What is the impact to users? Did the bug
actually cause you some trouble or was it by inspection? Do you have a
sense of how many folks might be impacted? Any thoughts on how it
lasted for 14+ years?
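
To spell out the mechanics as I read them (this is a paraphrase of the
relevant lines in __ioremap_caller(), not a verbatim quote):

	last_addr = phys_addr + size - 1;
	...
	offset = phys_addr & ~PAGE_MASK;
	phys_addr &= PHYSICAL_PAGE_MASK;
	size = PAGE_ALIGN(last_addr + 1) - phys_addr;

That last subtraction mixes a masked base with an unmasked end, so any
time PHYSICAL_PAGE_MASK actually strips bits out of phys_addr, the
recomputed 'size' is bogus. Masking the page-aligned last_addr the
same way closes that hole.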

For the functionality of the mapping, I guess 'size' doesn't really
matter because even a 1-byte 'size' will map a page. The other fallout
would be from memtype_reserve() reserving too little. But that's
unlikely to matter for small mappings because even though:

ioremap(0x1800, 0x800);

would end up just reserving 0x1000->0x1800, it still wouldn't allow

ioremap(0x1000, 0x800);

to succeed because *both* of them would end up trying to reserve the
beginning of the page. Basically, the first caller effectively reserves
the whole page and any second user will fail.
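
A toy model of that scenario (plain userspace arithmetic, nothing
kernel-specific; it just assumes the reservation starts at the masked,
page-aligned base but keeps the caller's original length):

	#include <stdio.h>

	#define PAGE_SIZE	0x1000UL
	#define PAGE_MASK	(~(PAGE_SIZE - 1))

	/* Reservation that starts at the page-aligned base but only
	 * covers the caller's original 'size' bytes. */
	static void show_reservation(unsigned long phys_addr, unsigned long size)
	{
		unsigned long start = phys_addr & PAGE_MASK;

		printf("ioremap(%#lx, %#lx) -> reserve [%#lx, %#lx)\n",
		       phys_addr, size, start, start + size);
	}

	int main(void)
	{
		show_reservation(0x1800, 0x800);	/* [0x1000, 0x1800) */
		show_reservation(0x1000, 0x800);	/* [0x1000, 0x1800) */
		return 0;
	}

Both calls end up wanting the beginning of the same page, so whichever
one runs second loses, undersized reservation or not.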

So the other place it would matter is for mappings that span two
pages, say:

ioremap(0x1fff, 0x2);
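
Running that one through the same toy model above gives

	ioremap(0x1fff, 0x2) -> reserve [0x1000, 0x1002)

while the mapping itself spans two pages (everything up through
0x2fff), so the second page's memory type would go completely
untracked, again assuming my reading of the fallout is right.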

But I guess those aren't very common. Most large ioremap() callers seem
to already have base and size page-aligned.

Anyway, sorry to make such a big deal about a one-liner. But these
decade-old bugs really make me wonder how they stuck around for so
long.

I'd be curious whether you thought about any of this while putting
together this fix.