Follow @Openwall on Twitter for new release announcements and other news
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date: Wed, 20 Jun 2018 15:33:44 -0700
From: Kees Cook <keescook@...omium.org>
To: Rick Edgecombe <rick.p.edgecombe@...el.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, 
	"H. Peter Anvin" <hpa@...or.com>, X86 ML <x86@...nel.org>, LKML <linux-kernel@...r.kernel.org>, 
	Linux-MM <linux-mm@...ck.org>, Kernel Hardening <kernel-hardening@...ts.openwall.com>, 
	kristen Accardi <kristen.c.accardi@...el.com>, Dave Hansen <dave.hansen@...el.com>, 
	"Van De Ven, Arjan" <arjan.van.de.ven@...el.com>
Subject: Re: [PATCH 0/3] KASLR feature to randomize each loadable module

On Wed, Jun 20, 2018 at 3:09 PM, Rick Edgecombe
<rick.p.edgecombe@...el.com> wrote:
> This patch changes the module loading KASLR algorithm to randomize the position
> of each module text section allocation with at least 18 bits of entropy in the
> typical case. It used on x86_64 only for now.

Very cool! Thanks for sending the series. :)

> Today the RANDOMIZE_BASE feature randomizes the base address where the module
> allocations begin with 10 bits of entropy. From here, a highly deterministic
> algorithm allocates space for the modules as they are loaded and un-loaded. If
> an attacker can predict the order and identities for modules that will be
> loaded, then a single text address leak can give the attacker access to the

nit: "text address" -> "module text address"

> So the defensive strength of this algorithm in typical usage (<800 modules) for
> x86_64 should be at least 18 bits, even if an address from the random area
> leaks.

And most systems have <200 modules, really. I have 113 on a desktop
right now, 63 on a server. So this looks like a trivial win.

> As for fragmentation, this algorithm reduces the average number of modules that
> can be loaded without an allocation failure by about 6% (~17000 to ~16000)
> (p<0.05). It can also reduce the largest module executable section that can be
> loaded by half to ~500MB in the worst case.

Given that we only have 8312 tristate Kconfig items, I think 16000
will remain just fine. And even large modules (i915) are under 2MB...

> The new __vmalloc_node_try_addr function uses the existing function
> __vmalloc_node_range, in order to introduce this algorithm with the least
> invasive change. The side effect is that each time there is a collision when
> trying to allocate in the random area a TLB flush will be triggered. There is
> a more complex, more efficient implementation that can be used instead if
> there is interest in improving performance.

The only time when module loading speed is noticeable, I would think,
would be boot time. Have you done any boot time delta analysis? I
wouldn't expect it to change hardly at all, but it's probably a good
idea to actually test it. :)

Also: can this be generalized for use on other KASLRed architectures?
For example, I know the arm64 module randomization is pretty similar
to x86.

-Kees

-- 
Kees Cook
Pixel Security

Powered by blists - more mailing lists

Confused about mailing lists and their use? Read about mailing lists on Wikipedia and check out these guidelines on proper formatting of your messages.