Date: Fri, 08 Jul 2016 20:19:58 +1000
From: Michael Ellerman <mpe@...erman.id.au>
To: Kees Cook <keescook@...omium.org>, "kernel-hardening@lists.openwall.com" <kernel-hardening@...ts.openwall.com>
Cc: Jan Kara <jack@...e.cz>, Catalin Marinas <catalin.marinas@....com>, Will Deacon <will.deacon@....com>, Linux-MM <linux-mm@...ck.org>, sparclinux <sparclinux@...r.kernel.org>, linux-ia64@...r.kernel.org, Christoph Lameter <cl@...ux.com>, Andrea Arcangeli <aarcange@...hat.com>, linux-arch <linux-arch@...r.kernel.org>, "x86@kernel.org" <x86@...nel.org>, Russell King <linux@...linux.org.uk>, PaX Team <pageexec@...email.hu>, Borislav Petkov <bp@...e.de>, linux-arm-kernel@...ts.infradead.org, Mathias Krause <minipli@...glemail.com>, Fenghua Yu <fenghua.yu@...el.com>, Rik van Riel <riel@...hat.com>, David Rientjes <rientjes@...gle.com>, Tony Luck <tony.luck@...el.com>, Andy Lutomirski <luto@...nel.org>, Joonsoo Kim <iamjoonsoo.kim@....com>, Dmitry Vyukov <dvyukov@...gle.com>, Laura Abbott <labbott@...oraproject.org>, Brad Spengler <spender@...ecurity.net>, Ard Biesheuvel <ard.biesheuvel@...aro.org>, LKML <linux-kernel@...r.kernel.org>, Pekka Enberg <penberg@...nel.org>, Casey Schaufler <casey@...aufler-ca.com>, Andrew Morton <akpm@...ux-foundation.org>, "linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>, "David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH 9/9] mm: SLUB hardened usercopy support

Kees Cook <keescook@...omium.org> writes:
> On Thu, Jul 7, 2016 at 12:35 AM, Michael Ellerman <mpe@...erman.id.au> wrote:
>> I gave this a quick spin on powerpc, it blew up immediately :)
>
> Wheee :) This series is rather easy to test: blows up REALLY quickly
> if it's wrong. ;)

Better than subtle race conditions, which is the usual :)

>> diff --git a/mm/slub.c b/mm/slub.c
>> index 0c8ace04f075..66191ea4545a 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3630,6 +3630,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
>>         /* Find object. */
>>         s = page->slab_cache;
>>
>> +       /* Subtract red zone if enabled */
>> +       ptr = restore_red_left(s, ptr);
>> +
>
> Ah, interesting. Just to make sure: you've built with
> CONFIG_SLUB_DEBUG and either CONFIG_SLUB_DEBUG_ON, or booted with
> slub_debug or slub_debug=z?

Yeah, built with CONFIG_SLUB_DEBUG_ON, and booted both with and without
the slub_debug options.
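
For anyone else wanting to reproduce, the knobs involved are roughly as
follows (going by Documentation/vm/slub.txt -- double-check against your
tree):

	slub_debug	# default debug flags for all caches, red zoning included
	slub_debug=z	# red zoning only ('Z' in the docs; lowercase seems
			# to be accepted too)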

> Thanks for the slub fix!
>
> I wonder if this code should be using size_from_object() instead of s->size?

Hmm, not sure. Who's the SLUB maintainer? :)

I was modelling it on the logic in check_valid_pointer(), which also does
the restore_red_left() adjustment first, and then checks the offset
modulo s->size:

static inline int check_valid_pointer(struct kmem_cache *s,
				struct page *page, void *object)
{
	void *base;

	/* NULL is always considered valid here. */
	if (!object)
		return 1;

	base = page_address(page);
	/* Step back over the left red zone before the range/alignment checks. */
	object = restore_red_left(s, object);
	/* Reject pointers outside the slab, or not on an object boundary. */
	if (object < base || object >= base + page->objects * s->size ||
		(object - base) % s->size) {
		return 0;
	}

	return 1;
}
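
To make the layout concrete, here's a rough userspace sketch of that
check. The struct and red_left_pad handling are simplified stand-ins for
the real kmem_cache, not kernel code:

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-in for struct kmem_cache. */
struct fake_cache {
	size_t size;		/* slot stride, red zones included */
	size_t red_left_pad;	/* left red zone width, 0 if disabled */
};

/* Same idea as restore_red_left(): step back over the left red
 * zone so the pointer lands on a slot boundary. */
static void *fake_restore_red_left(const struct fake_cache *s, void *p)
{
	return (char *)p - s->red_left_pad;
}

static int fake_check_valid_pointer(const struct fake_cache *s,
				    char *base, unsigned int objects,
				    void *object)
{
	char *o = fake_restore_red_left(s, object);

	if (o < base || o >= base + objects * s->size)
		return 0;
	return (o - base) % s->size == 0;
}

int main(void)
{
	static char page[4096];
	struct fake_cache s = { .size = 128, .red_left_pad = 64 };
	/* Pointers handed out to callers sit after the left red zone. */
	void *obj = page + 2 * s.size + s.red_left_pad;

	/* With the adjustment the pointer validates... */
	printf("with restore: %d\n",
	       fake_check_valid_pointer(&s, page, 8, obj));
	/* ...without it the modulo test fails, which is the false
	 * positive the hunk above fixes in __check_heap_object(). */
	printf("without restore: %d\n",
	       ((char *)obj - page) % s.size == 0);
	return 0;
}

(With red zoning off, red_left_pad is just 0 and the adjustment is a
no-op, which is presumably why the unpatched code only blows up with
the debug options enabled.)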

cheers
