Re: [PATCH] mm: larger stack guard gap, between vmas

From: Ben Hutchings
Date: Wed Jul 05 2017 - 12:59:34 EST


On Wed, Jul 05, 2017 at 04:25:00PM +0100, Ben Hutchings wrote:
[...]
> Something I noticed is that Java doesn't immediately use MAP_FIXED.
> Look at os::pd_attempt_reserve_memory_at(). If the first, hinted,
> mmap() doesn't return the hinted address, it then attempts to allocate
> huge areas (I'm not sure how intentional this is) and unmaps the
> unwanted parts. Then os::workaround_expand_exec_shield_cs_limit() re-
> mmap()s the wanted part with MAP_FIXED. If this fails at any point it
> is not a fatal error.
>
> So if we change vm_start_gap() to take the stack limit into account
> (when it's finite) that should neutralise
> os::workaround_expand_exec_shield_cs_limit(). I'll try this.
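
For reference, the reservation pattern being described has roughly the
following shape (a sketch of the general idea only, not the actual
HotSpot code, and it leaves out the over-allocate-and-trim step):

#include <stddef.h>
#include <sys/mman.h>

/*
 * Try a plain hinted mmap() first and only fall back to MAP_FIXED if
 * the hint was not honoured; failure of the second attempt is not
 * treated as fatal by the caller.
 */
static void *reserve_at_hint(void *hint, size_t len)
{
	void *p = mmap(hint, len, PROT_NONE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

	if (p == MAP_FAILED)
		return NULL;
	if (p == hint)
		return p;

	/* Hint not honoured: give the range back and retry with
	 * MAP_FIXED, accepting that this may simply fail. */
	munmap(p, len);
	p = mmap(hint, len, PROT_NONE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE | MAP_FIXED,
		 -1, 0);
	return p == MAP_FAILED ? NULL : p;
}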

I ended up with the following two patches, which seem to deal with
both the Java and Rust regressions. These don't touch the
stack-grows-up paths at all because Rust doesn't run on those
architectures and the Java weirdness is i386-specific.

They definitely need longer commit messages and comments, but aside
from that do these look reasonable?

Ben.

Subject: [1/2] mmap: Skip a single VM_NONE mapping when checking the stack gap

Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
mm/mmap.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index a5e3dcd75e79..c7906ae1a7a1 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2323,11 +2323,16 @@ int expand_downwards(struct vm_area_struct *vma,
 	if (error)
 		return error;
 
-	/* Enforce stack_guard_gap */
+	/*
+	 * Enforce stack_guard_gap. Some applications allocate a VM_NONE
+	 * mapping just below the stack, which we can safely ignore.
+	 */
 	gap_addr = address - stack_guard_gap;
 	if (gap_addr > address)
 		return -ENOMEM;
 	prev = vma->vm_prev;
+	if (prev && !(prev->vm_flags & (VM_READ | VM_WRITE | VM_EXEC)))
+		prev = prev->vm_prev;
 	if (prev && prev->vm_end > gap_addr) {
 		if (!(prev->vm_flags & VM_GROWSDOWN))
 			return -ENOMEM;

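The kind of layout this has to tolerate can be mocked up from user
space along these lines (a hypothetical test sketch with made-up
offsets, not code taken from either runtime):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Find the bottom of the "[stack]" mapping in /proc/self/maps. */
static unsigned long stack_bottom(void)
{
	FILE *f = fopen("/proc/self/maps", "r");
	char line[256];
	unsigned long start = 0;

	while (f && fgets(line, sizeof(line), f)) {
		if (strstr(line, "[stack]")) {
			sscanf(line, "%lx", &start);
			break;
		}
	}
	if (f)
		fclose(f);
	return start;
}

/* Consume roughly depth * 4 KiB of stack, forcing it to grow. */
static int grow(int depth)
{
	volatile char pad[4096];

	pad[0] = (char)depth;
	return depth ? grow(depth - 1) + pad[0] : pad[0];
}

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	unsigned long bottom = stack_bottom();

	/* Park a PROT_NONE mapping ~2 MiB below the current stack bottom,
	 * roughly what a user-space guard page (e.g. the one Rust installs)
	 * looks like to the kernel. */
	if (bottom)
		mmap((void *)(bottom - (2UL << 20)), page, PROT_NONE,
		     MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

	/* Grow the stack by ~1.6 MiB.  With the plain guard-gap check this
	 * is expected to fault once the stack comes within stack_guard_gap
	 * of the PROT_NONE mapping; with the VM_NONE skip it should pass. */
	grow(400);
	puts("stack grew fine");
	return 0;
}
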
Subject: [2/2] mmap: Avoid mapping anywhere within the full stack extent if finite

Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
include/linux/mm.h | 9 ++++-----
mm/mmap.c | 19 +++++++++++++++++++
2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6f543a47fc92..2240a0505072 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2223,15 +2223,14 @@ static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * m
 	return vma;
 }
 
+unsigned long __vm_start_gap(struct vm_area_struct *vma);
+
 static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 {
 	unsigned long vm_start = vma->vm_start;
 
-	if (vma->vm_flags & VM_GROWSDOWN) {
-		vm_start -= stack_guard_gap;
-		if (vm_start > vma->vm_start)
-			vm_start = 0;
-	}
+	if (vma->vm_flags & VM_GROWSDOWN)
+		vm_start = __vm_start_gap(vma);
 	return vm_start;
 }

diff --git a/mm/mmap.c b/mm/mmap.c
index c7906ae1a7a1..f8131a94e56e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2307,6 +2307,25 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 }
 #endif /* CONFIG_STACK_GROWSUP || CONFIG_IA64 */
 
+unsigned long __vm_start_gap(struct vm_area_struct *vma)
+{
+	unsigned long stack_limit =
+		current->signal->rlim[RLIMIT_STACK].rlim_cur;
+	unsigned long vm_start;
+
+	if (stack_limit != RLIM_INFINITY &&
+	    vma->vm_end - vma->vm_start < stack_limit)
+		vm_start = vma->vm_end - PAGE_ALIGN(stack_limit);
+	else
+		vm_start = vma->vm_start;
+
+	vm_start -= stack_guard_gap;
+	if (vm_start > vma->vm_start)
+		vm_start = 0;
+
+	return vm_start;
+}
+
 /*
  * vma is the first one with address < vma->vm_start. Have to extend vma.
  */

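To illustrate what the new calculation does, here is the same
arithmetic repeated in user space with made-up example numbers (a
sketch, not kernel code):

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long stack_guard_gap = 256UL << 12;	/* default: 1 MiB */
	unsigned long stack_limit = 8UL << 20;		/* RLIMIT_STACK: 8 MiB */
	unsigned long vm_end = 0x7ffffffff000UL;	/* top of stack VMA */
	unsigned long vm_start = vm_end - (132UL << 10); /* 132 KiB mapped */
	unsigned long gap_start;

	/* Same logic as __vm_start_gap(): measure from the full extent the
	 * stack may still grow to, not just from what is mapped today. */
	if (vm_end - vm_start < stack_limit)
		gap_start = vm_end - PAGE_ALIGN(stack_limit);
	else
		gap_start = vm_start;

	gap_start -= stack_guard_gap;
	if (gap_start > vm_start)	/* subtraction wrapped: clamp to 0 */
		gap_start = 0;

	printf("other mappings must stay below %#lx,\n", gap_start);
	printf("about %lu MiB under the current stack start\n",
	       (vm_start - gap_start) >> 20);
	return 0;
}

So with a finite RLIMIT_STACK, new mappings are kept clear of the whole
extent the stack may grow to plus the guard gap, which is what should
stop the hinted mmap() in the Java case from landing inside the stack's
reach in the first place.
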
--
Ben Hutchings
For every complex problem
there is a solution that is simple, neat, and wrong.
