Date: Wed, 7 Nov 2018 20:03:40 +0000
From: "Edgecombe, Rick P" <>
To: "" <>
CC: "Hansen, Dave" <>
Subject: Re: [PATCH v8 0/4] KASLR feature to randomize each loadable module

On Tue, 2018-11-06 at 13:04 -0800, Andrew Morton wrote:
> On Fri,  2 Nov 2018 12:25:16 -0700 Rick Edgecombe <>
> wrote:
> > This is V8 of the "KASLR feature to randomize each loadable module"
> > patchset.
> > The purpose is to increase the randomization and also to make the modules
> > randomized in relation to each other instead of just the base, so that if
> > one
> > module leaks the location of the others can't be inferred.
> I'm not seeing any info here which explains why we should add this to
> Linux.
> What is the end-user value?  What problems does it solve?  Are those
> problems real or theoretical?  What are the exploit scenarios and how
> realistic are they?  etcetera, etcetera.  How are we to decide to buy
> this thing if we aren't given a glossy brochure?
Hi Andrew,

Thanks for taking a look! The first version had a proper write up, but now the
details are spread out over 8 versions. I'll send out another version with it
all in one place.

The short version is that today the RANDOMIZE_BASE feature randomizes the base
address where the module allocations begin with 10 bits of entropy. From there,
a highly deterministic algorithm allocates space for the modules as they are
loaded and unloaded. If an attacker can predict the order and identities of the
modules that will be loaded, then a single text address leak can give the
attacker the locations of all the modules.
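To make that concrete, here is a toy sketch (my own illustration, with made-up sizes and layout, not the kernel's actual allocator) of why a deterministic layout after a single randomized base lets one leak reveal everything:

```python
import random

PAGE = 4096

def deterministic_layout(sizes):
    """Toy stand-in for a deterministic allocator: each module lands at
    the next page-aligned offset after the previous one."""
    offsets, cursor = [], 0
    for size in sizes:
        offsets.append(cursor)
        cursor += (size + PAGE - 1) // PAGE * PAGE  # round up to page size
    return offsets

sizes = [20480, 8192, 122880]           # pretend module text sizes
base = random.getrandbits(10) << 21     # 10 bits of entropy in the base
offsets = deterministic_layout(sizes)

# An attacker who leaks the address of the first module...
leaked = base + offsets[0]
# ...and can predict the load order/sizes can infer all the others,
# because every inter-module distance is deterministic.
inferred = [leaked - offsets[0] + off for off in offsets]
assert inferred == [base + off for off in offsets]
```

With per-module randomization, those inter-module distances are no longer predictable, so the single leak stops being enough.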

So this is trying to prevent the same class of attacks as the existing KASLR,
like control flow manipulation, and now also making it harder and slower to find
speculative execution gadgets. It increases the number of possible positions by
128X, and applies that amount of randomness per module instead of once for all
modules together.

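
Putting rough numbers on that (my own back-of-the-envelope arithmetic, not figures from the patch):

```python
# 128X more possible positions corresponds to 7 additional bits of
# entropy, since 128 == 2**7.
base_bits = 10                          # existing RANDOMIZE_BASE entropy
factor = 128                            # claimed increase in positions
extra_bits = factor.bit_length() - 1    # log2(128) == 7
assert 2 ** extra_bits == factor

# Today, guessing any module means guessing one shared 10-bit base.
# With the extra bits applied per module, each module's position is
# independently randomized, so leaking one no longer pins down the rest.
print(base_bits + extra_bits)  # ~17 bits per module in this rough model
```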
> > There is a small allocation performance degradation versus v7 as a
> > trade off, but it is still faster on average than the existing
> > algorithm until >7000 modules.
> lol.  How did you test 7000 modules?  Using the selftest code?

Yes, this is from simulations using the included kselftest code, with sizes
extracted from the in-tree x86_64 modules. Supporting 7000 kernel modules is not
the intention, though; rather, it's trying to accommodate 7000 allocations in
the module space, which is also used by the eBPF JIT, the classic BPF socket
filter JIT, kprobes, etc.
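
For a feel of the scale involved, here is a toy fill simulation (my own sketch with invented size distributions, not the kselftest code) of mixed allocations in a fixed-size module area:

```python
import random

PAGE = 4096
SPACE = 1 << 30   # 1 GiB module area, as on x86_64

def simulate(n, seed=0):
    """Count how many of n mixed-size, page-granular allocations fit
    before the space runs out. Sizes are invented: mostly small
    JIT/kprobe-style allocations, with occasional larger modules."""
    rng = random.Random(seed)
    used = placed = 0
    for _ in range(n):
        size = rng.choice([PAGE, PAGE, 2 * PAGE, 64 * PAGE])
        if used + size > SPACE:
            break   # stop at the first allocation that doesn't fit
        used += size
        placed += 1
    return placed

print(simulate(7000))
```

Even ignoring fragmentation, this shows why the allocator has to stay efficient at thousands of live allocations, not just at the hundreds of modules a typical system loads.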
