Date: Fri, 27 Nov 2015 12:23:58 -0800
From: Kees Cook <keescook@...omium.org>
To: Quentin Casasnovas <quentin.casasnovas@...cle.com>
Cc: "kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>
Subject: Re: status: GRKERNSEC_KSTACKOVERFLOW

On Wed, Nov 25, 2015 at 3:45 PM, Quentin Casasnovas
<quentin.casasnovas@...cle.com> wrote:
> On Tue, Nov 24, 2015 at 11:10:09AM -0800, Kees Cook wrote:
>> Hi,
>>
>
> Hi Kees,
>
>> I just wanted to check in and see how progress was going on the stack
>> overflow feature. Anything we can help with?
>>
>
> Sorry for not following up on this; I've been busy and haven't had the time
> to finish it properly. I've pushed an initial WIP break-up of the
> KSTACK_OVERFLOW feature on my github:
>
> https://github.com/casasnovas/linux/tree/quentin-split-kstackoverflow

Great! Thanks for the update!

> This is far from complete, though, and hasn't been cleaned up at all. I
> didn't share it earlier because I don't think I fully understand it and
> haven't tested it yet. In "short", there's mention of guard pages in the
> Kconfig help:
>
>   If you say Y here, the kernel's process stacks will be allocated with
>   vmalloc instead of the kernel's default allocator. This introduces guard
>                                                                      ^^^^^
>   pages that in combination with the alloca checking of the STACKLEAK
>   ^^^^^
>   feature prevents all forms of kernel process stack overflow abuse. Note
>   that this is different from kernel stack buffer overflows.
>
> And I couldn't find anything about it in the code. Maybe it's just coming
> from a misinterpretation of the above text, but I was expecting this to
> mean there would be a PROT_NONE guard page after the end of the stack, so
> that reads/writes below it could be trapped. It could also be that I missed
> some parts in my initial break-up, or am simply missing something.
>
> It should also be noted that I did not find that the struct thread_info
> (which is stuffed at the end of the stack) was protected in any way either.
> So even if a read/write _below_ the stack could still be trapped when
> nothing is currently mapped there, it looks like deep stack usage could
> still overflow it and go unnoticed. Here again, I didn't spend a lot of
> time on this and it might just be that I'm missing something.
>
> In the very unlikely event that I didn't miss anything, and the struct
> thread_info can still be overflowed and there isn't any guard page, maybe
> we can improve on the current KSTACK_OVERFLOW feature by putting the struct
> thread_info on a different page than the kernel stack, and not vmap() it
> like the rest of the stack pages, but instead map a PROT_NONE page there.
> That would mean the struct thread_info could still be accessed through its
> lowmem mapping (i.e. legitimate usage from the kernel) but not by deep
> kernel stack usage. Maybe the cost of adding an extra page per kernel
> stack is too high, though.
>
> Finally, I'd like to find a real-life example where one could overflow the
> kernel stack, so it can be used to test the feature (once properly split)
> and show it can happen, for real, before sending real patches for review.
> I might have found such a case because of what appears to me to be a gcc
> bug; more on this in another follow-up :)

I think at least some of the past flaws have been related to having
dynamic stack allocations, where an attacker could control the size of the
object, and "alloca"-style overflow attacks.

I will see about getting an lkdtm test written too, though testing this
when there is no guard page seems like a bad idea. I'll have to get
creative about the overwrite. Hmm.

> tldr; Initial break-up done, not even compile-tested yet, and I am
> probably missing some bits. I might have found a code path that triggers
> a kernel stack overflow due to a gcc weirdness; I need to investigate it.

Yay bugs! :)

> Any comments appreciated :)
>
> Thanks,
> Quentin

-Kees

--
Kees Cook
Chrome OS & Brillo Security