Re: [PATCH v2] mm: align larger anonymous mappings on THP boundaries

From: Yang Shi
Date: Wed Jan 17 2024 - 18:33:03 EST


On Wed, Jan 17, 2024 at 9:40 AM Kees Cook <keescook@xxxxxxxxxxxx> wrote:
>
> On Tue, Jan 16, 2024 at 02:30:36PM -0800, Suren Baghdasaryan wrote:
> > On Tue, Jan 16, 2024 at 2:25 PM Yang Shi <shy828301@xxxxxxxxx> wrote:
> > >
> > > On Tue, Jan 16, 2024 at 1:58 PM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
> > > >
> > > > On Tue, Jan 16, 2024 at 12:56 PM Yang Shi <shy828301@xxxxxxxxx> wrote:
> > > > >
> > > > > On Tue, Jan 16, 2024 at 11:16 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > On Tue, Jan 16, 2024 at 4:09 AM Jiri Slaby <jirislaby@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > On 16. 01. 24, 12:53, Jiri Slaby wrote:
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > On 09. 08. 22, 20:24, Rik van Riel wrote:
> > > > > > > >> Align larger anonymous memory mappings on THP boundaries by
> > > > > > > >> going through thp_get_unmapped_area if THPs are enabled for
> > > > > > > >> the current process.
> > > > > > > >>
> > > > > > > >> With this patch, larger anonymous mappings are now THP aligned.
> > > > > > > >> When a malloc library allocates a 2MB or larger arena, that
> > > > > > > >> arena can now be mapped with THPs right from the start, which
> > > > > > > >> can result in better TLB hit rates and execution time.
> > > > > > > >
> > > > > > > > This appears to break 32bit processes on x86_64 (at least). In
> > > > > > > > particular, 32bit kernel or firefox builds in our build system.
> > > > > > > >
> > > > > > > > Reverting this on top of 6.7 makes it work again.
> > > > > > > >
> > > > > > > > Downstream report:
> > > > > > > > https://bugzilla.suse.com/show_bug.cgi?id=1218841
> > > > > > > >
> > > > > > > > So running:
> > > > > > > > pahole -J --btf_gen_floats -j --lang_exclude=rust --skip_encoding_btf_inconsistent_proto --btf_gen_optimized .tmp_vmlinux.btf
> > > > > > > >
> > > > > > > > crashes or errors out with some random errors:
> > > > > > > > [182671] STRUCT idr's field 'idr_next' offset=128 bit_size=0 type=181346
> > > > > > > > Error emitting field
> > > > > > > >
> > > > > > > > strace shows mmap() fails with ENOMEM right before the errors:
> > > > > > > > 1223 mmap2(NULL, 5783552, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0 <unfinished ...>
> > > > > > > > ...
> > > > > > > > 1223 <... mmap2 resumed>) = -1 ENOMEM (Cannot allocate memory)
> > > > > > > >
> > > > > > > > Note the .tmp_vmlinux.btf above can be arbitrary, but likely large
> > > > > > > > enough. For reference, one is available at:
> > > > > > > > https://decibel.fi.muni.cz/~xslaby/n/btf
> > > > > > > >
> > > > > > > > Any ideas?
> > > > > > >
> > > > > > > This works around the problem, of course (but is a band-aid, not a fix):
> > > > > > >
> > > > > > > --- a/mm/mmap.c
> > > > > > > +++ b/mm/mmap.c
> > > > > > > @@ -1829,7 +1829,7 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
> > > > > > >  		 */
> > > > > > >  		pgoff = 0;
> > > > > > >  		get_area = shmem_get_unmapped_area;
> > > > > > > -	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
> > > > > > > +	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && !in_32bit_syscall()) {
> > > > > > >  		/* Ensures that larger anonymous mappings are THP aligned. */
> > > > > > >  		get_area = thp_get_unmapped_area;
> > > > > > >  	}
> > > > > > >
> > > > > > >
> > > > > > > thp_get_unmapped_area() does not take care of the legacy stuff...
> > > > > >
> > > > > > This change also affects the entropy of allocations. With this patch
> > > > > > Android test [1] started failing and it requires only 8 bits of
> > > > > > entropy. The feedback from our security team:
> > > > > >
> > > > > > 8 bits of entropy is already embarrassingly low, but was necessary for
> > > > > > 32 bit ARM targets with low RAM at the time. It's definitely not
> > > > > > acceptable for 64 bit targets.
> > > > >
> > > > > Thanks for the report. Is it 32 bit only, or is 64 bit also impacted?
> > > > > If I understand the code correctly, it expects the address allocated
> > > > > by malloc() to be somewhat randomized, right?
> > > >
> > > > Yes, correct, the test expects a certain level of address randomization.
> > > > The test failure was reported while running kernel_virt_x86_64 target
> > > > (Android emulator on x86), so it does impact 64bit targets.
> > >
> > > IIUC this breaks the "expectation" for randomized addresses returned
> > > by malloc(), but it doesn't break any real Android application, right?
> > > So this is a security concern instead of a real regression.
> >
> > How is making a system more vulnerable not a real regression?
> >
> > >
> > > I think we can make this opt-in with a knob. Anyone who weighs other
> > > concerns over security could opt out of this feature. However, I'm
> > > wondering whether Android should implement a general address
> > > randomization mechanism instead of depending on "luck", if you do
> > > care about it.
> >
> > This is not depending on luck. This is checking for possible
> > vulnerabilities in the system.
> > I admit I'm not a security expert, so I'm looping in Jeff and Kees to chime in.
>
> Hi!
>
> Just to chime in, but a reduction in ASLR entropy is absolutely a
> regression. This is userspace visible (via /proc/sys/kernel/randomize_va_space,
> /proc/sys/vm/mmap_rnd*_bits) with the expectation that it works as
> advertised. So, while 32-bit might already be ASLR-weak, we don't want
> to make things worse, nor break ASLR in compat mode under 64-bit
> systems.
>
> Having an opt-in sounds reasonable, but we need to wire it into the ASLR
> sysctls in some way so nothing is lying about the ASLR entropy.

Thanks for the explanation. IIUC, randomize_va_space and mmap_rnd_bits
randomize mmap_base and start_brk for each exec() call, so the heap
allocation is randomized. But the formula doesn't seem to take huge page
alignment into account: with 4K pages, rounding a mapping up to a 2M
boundary discards up to log2(2M / 4K) = 9 low-order bits of the random
offset. ARM64 already adjusts mmap_rnd_bits based on page size.
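
For reference, the random offset is applied at page granularity; the
generic helper looks roughly like this (paraphrased from memory of
mm/util.c, details may vary by release). Nothing below PAGE_SHIFT is
ever randomized, and THP alignment then rounds away the next several
low bits on top of that:

unsigned long arch_mmap_rnd(void)
{
	unsigned long rnd;

#ifdef CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS
	if (is_compat_task())
		rnd = get_random_long() & ((1UL << mmap_rnd_compat_bits) - 1);
	else
#endif
		rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);

	/* page granular: the low PAGE_SHIFT bits are never random */
	return rnd << PAGE_SHIFT;
}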

I did a simple test which conceptually does the following (sketch after
the list):
1. call mmap() to allocate an 8M heap
2. print out the allocated address
3. run the program 1000 times (launch/exit/re-launch)
4. count how many unique addresses were returned
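
A minimal sketch of the allocating side (not my exact test program, but
conceptually equivalent; the binary name is arbitrary, and the 1000
launches plus the uniqueness count are driven by a shell loop around it,
e.g. for i in $(seq 1000); do ./thp_rnd; done | sort -u | wc -l):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 8UL << 20;	/* 8M, large enough to trigger THP alignment */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("%p\n", p);	/* one address per run; count unique lines */
	munmap(p, len);
	return 0;
}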

With the default config on my arm64 VM (mmap_rnd_bits is 18), I saw
134 unique addresses; without the patch, I saw 945. So I think this
test replicates what your test does.

When I increased mmap_rnd_bits to 24 (echo 24 >
/proc/sys/vm/mmap_rnd_bits), I saw 988 unique addresses with the patch
applied. x86_64 should have 28 bits by default, which should randomize
addresses quite well, so I don't know why you still saw the failure.
Or do you have a different mmap_rnd_bits setting?

I'm wondering whether we should take huge page alignment into account
for mmap_rnd_bits. And this is a common huge page problem rather than
specific to this patch: we align file mappings to huge page boundaries
as well.

32-bit is easy; I think I can just make thp_get_unmapped_area() a no-op
on 32-bit systems, along the lines of the sketch below.
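
Untested sketch of what I mean (not a real patch yet; using
in_compat_syscall() is my assumption for catching 32-bit tasks on a
64-bit kernel, and !IS_ENABLED(CONFIG_64BIT) covers native 32-bit
kernels):

unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
		unsigned long len, unsigned long pgoff, unsigned long flags)
{
	unsigned long ret;

	/* THP-aligned carving only makes sense in a 64-bit address space */
	if (IS_ENABLED(CONFIG_64BIT) && !in_compat_syscall()) {
		ret = __thp_get_unmapped_area(filp, addr, len,
				(loff_t)pgoff << PAGE_SHIFT, flags, PMD_SIZE);
		if (ret)
			return ret;
	}

	/* fall back to the regular, fully randomized search */
	return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
}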

>
> -Kees
>
> --
> Kees Cook