Date: Thu, 16 Jun 2016 20:58:07 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Heiko Carstens <heiko.carstens@...ibm.com>
Cc: Andy Lutomirski <luto@...nel.org>, 
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, X86 ML <x86@...nel.org>, 
	Borislav Petkov <bp@...en8.de>, Nadav Amit <nadav.amit@...il.com>, Kees Cook <keescook@...omium.org>, 
	Brian Gerst <brgerst@...il.com>, 
	"kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>, 
	Linus Torvalds <torvalds@...ux-foundation.org>, Josh Poimboeuf <jpoimboe@...hat.com>
Subject: Re: [PATCH 00/13] Virtually mapped stacks with guard pages (x86, core)

On Wed, Jun 15, 2016 at 11:05 PM, Heiko Carstens
<heiko.carstens@...ibm.com> wrote:
> On Wed, Jun 15, 2016 at 05:28:22PM -0700, Andy Lutomirski wrote:
>> Since the dawn of time, a kernel stack overflow has been a real PITA
>> to debug, has caused nondeterministic crashes some time after the
>> actual overflow, and has generally been easy to exploit for root.
>>
>> With this series, arches can enable HAVE_ARCH_VMAP_STACK.  Arches
>> that enable it (just x86 for now) get virtually mapped stacks with
>> guard pages.  This causes reliable faults when the stack overflows.
>>
>> If the arch implements it well, we get a nice OOPS on stack overflow
>> (as opposed to panicking directly or otherwise exploding badly).  On
>> x86, the OOPS is nice, has a usable call trace, and the overflowing
>> task is killed cleanly.
>
> Do you have numbers which reflect the performance impact of this change?
>

It seems to add ~1.5µs per thread creation/join pair, which is around
15% overhead.  I *think* the major cost is that vmalloc calls
alloc_kmem_pages_node once per page rather than using a higher-order
block if available.
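
For reference, the per-page loop in __vmalloc_area_node() looks
roughly like this (paraphrased from mm/vmalloc.c of this era, not
verbatim; error handling elided):

	for (i = 0; i < area->nr_pages; i++) {
		struct page *page;

		/* One order-0 trip into the page allocator per stack
		 * page, so a 16K stack is four separate allocations. */
		if (node == NUMA_NO_NODE)
			page = alloc_kmem_pages(alloc_mask, 0);
		else
			page = alloc_kmem_pages_node(node, alloc_mask, 0);
		if (unlikely(!page))
			goto fail;
		area->pages[i] = page;
	}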

Anyway, if anyone wants this to become faster, I think the way to do
it would be to ask some friendly mm folks to see if they can speed up
vmalloc.  I don't really want to dig into the guts of the page
allocator.  My instinct would be to add a new interface
(GFP_SMALLER_OK?) that asks the page allocator for a high-order
allocation but, if a high-order block is not immediately available on
the freelist, falls back to a smaller allocation rather than working
hard to assemble a high-order one.  Then vmalloc could use this, and
vfree could free pages in blocks corresponding to whatever orders it
got them in, avoiding the need to merge all the pages back together.
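
To make the idea concrete, here's a rough sketch of what such an
interface might look like.  GFP_SMALLER_OK and
alloc_pages_smaller_ok() are made-up names, and the gfp twiddling is
just one way to express "don't try hard":

	/*
	 * Hypothetical: try progressively smaller orders, taking
	 * whatever is already sitting on the freelists, and only fall
	 * back to the full (possibly reclaiming) path at order 0.
	 * *got_order tells the caller what it actually received, so
	 * vfree() can later free in matching blocks.
	 */
	static struct page *alloc_pages_smaller_ok(gfp_t gfp,
						   unsigned int order,
						   unsigned int *got_order)
	{
		while (order > 0) {
			struct page *page;

			/* No direct reclaim, no retries: freelist
			 * hits only. */
			page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
					   __GFP_NORETRY | __GFP_NOWARN,
					   order);
			if (page) {
				*got_order = order;
				return page;
			}
			order--;
		}
		*got_order = 0;
		return alloc_pages(gfp, 0);
	}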

There's another speedup available: actually reuse allocations.  We
could keep a very small freelist of vmap_areas together with their
backing pages and hand those out for new stacks.  (We can't
efficiently reuse a vmap_area without its backing pages, because
replacing the pages under an existing mapping would require a TLB
flush in the middle.)
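
Something like this, as a per-CPU cache in the fork path (entirely
untested sketch; cached_stacks and NR_CACHED_STACKS are invented
names):

	#define NR_CACHED_STACKS 2
	static DEFINE_PER_CPU(void *, cached_stacks[NR_CACHED_STACKS]);

	static void *alloc_thread_stack(void)
	{
		int i;

		for (i = 0; i < NR_CACHED_STACKS; i++) {
			/* Reuse mapping *and* pages: no vmalloc,
			 * no TLB flush. */
			void *stack = this_cpu_xchg(cached_stacks[i], NULL);
			if (stack)
				return stack;
		}
		return __vmalloc(THREAD_SIZE, THREADINFO_GFP, PAGE_KERNEL);
	}

	static void free_thread_stack(void *stack)
	{
		int i;

		for (i = 0; i < NR_CACHED_STACKS; i++) {
			/* Park the stack for reuse instead of
			 * vfree()ing it. */
			if (this_cpu_cmpxchg(cached_stacks[i], NULL,
					     stack) == NULL)
				return;
		}
		vfree(stack);
	}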

--Andy
