From: Greg KH
Date: 13 Aug 2010 17:50

2.6.27-stable review patch.  If anyone has any objections, please let us know.

------------------

From: Linus Torvalds <torvalds@linux-foundation.org>

commit 320b2b8de12698082609ebbc1a17165727f4c893 upstream.

This is a rather minimally invasive patch to solve the problem of the
user stack growing into a memory mapped area below it.  Whenever we
fill the first page of the stack segment, expand the segment down by
one page.

Now, admittedly some odd application might _want_ the stack to grow
down into the preceding memory mapping, and so we may at some point
need to make this a process tunable (some people might also want to
have more than a single page of guarding), but let's try the minimal
approach first.

Tested with trivial application that maps a single page just below
the stack, and then starts recursing.  Without this, we will get a
SIGSEGV _after_ the stack has smashed the mapping.  With this patch,
we'll get a nice SIGBUS just as the stack touches the page just above
the mapping.

Requested-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

---
 mm/memory.c |   23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2396,6 +2396,26 @@ out_nomap:
 }
 
 /*
+ * This is like a special single-page "expand_downwards()",
+ * except we must first make sure that 'address-PAGE_SIZE'
+ * doesn't hit another vma.
+ *
+ * The "find_vma()" will do the right thing even if we wrap
+ */
+static inline int check_stack_guard_page(struct vm_area_struct *vma, unsigned long address)
+{
+	address &= PAGE_MASK;
+	if ((vma->vm_flags & VM_GROWSDOWN) && address == vma->vm_start) {
+		address -= PAGE_SIZE;
+		if (find_vma(vma->vm_mm, address) != vma)
+			return -ENOMEM;
+
+		expand_stack(vma, address);
+	}
+	return 0;
+}
+
+/*
  * We enter with non-exclusive mmap_sem (to exclude vma changes,
  * but allow concurrent faults), and pte mapped but not yet locked.
  * We return with mmap_sem still held, but pte unmapped and unlocked.
@@ -2408,6 +2428,9 @@ static int do_anonymous_page(struct mm_s
 	spinlock_t *ptl;
 	pte_t entry;
 
+	if (check_stack_guard_page(vma, address) < 0)
+		return VM_FAULT_SIGBUS;
+
 	/* Allocate our own private page. */
 	pte_unmap(page_table);
 
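For reference, below is a minimal sketch of the kind of trivial test
program the changelog describes; it is a reconstruction under stated
assumptions, not the original program.  It maps one anonymous page an
arbitrary 64 pages below the current stack and then recurses, burning
roughly a page of stack per call.  The file name, the 64-page offset
and the frame size are all illustrative choices; build with -O0
(e.g. "gcc -O0 -o guardtest guardtest.c") so the recursion is not
optimized into a loop.  Without the patch the process dies with
SIGSEGV only after the stack has scribbled over the mapping; with it,
check_stack_guard_page() refuses to expand the stack onto the mapped
page and the process gets SIGBUS one page above it.

	/*
	 * guardtest.c -- illustrative reconstruction, not the original test.
	 * Build: gcc -O0 -o guardtest guardtest.c
	 */
	#define _GNU_SOURCE
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	/* Consume roughly one page of stack per call, forever. */
	static void recurse(void)
	{
		volatile char frame[4096];
		unsigned int i;

		for (i = 0; i < sizeof(frame); i++)
			frame[i] = 0xaa;
		recurse();
	}

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		char probe;
		/* The page the stack currently ends on, via a local variable. */
		unsigned long stack_page =
			(unsigned long)&probe & ~((unsigned long)page - 1);
		/*
		 * Place one page a short, arbitrary distance below the stack.
		 * MAP_FIXED clobbers whatever is there, so this assumes the
		 * address is otherwise unused.
		 */
		void *map = mmap((void *)(stack_page - 64 * page), page,
				 PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

		if (map == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		fprintf(stderr, "mapped page at %p, stack page at %#lx\n",
			map, stack_page);

		/* Never returns: SIGBUS with the patch, a late SIGSEGV without. */
		recurse();
		return 0;
	}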