Date: Thu, 1 Mar 2018 16:52:12 +0300
From: Ilya Smith <blackzert@...il.com>
To: Kees Cook <keescook@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
 Dan Williams <dan.j.williams@...el.com>,
 Michal Hocko <mhocko@...e.com>,
 "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
 Jan Kara <jack@...e.cz>,
 Jerome Glisse <jglisse@...hat.com>,
 Hugh Dickins <hughd@...gle.com>,
 Matthew Wilcox <willy@...radead.org>,
 Helge Deller <deller@....de>,
 Andrea Arcangeli <aarcange@...hat.com>,
 Oleg Nesterov <oleg@...hat.com>,
 Linux-MM <linux-mm@...ck.org>,
 LKML <linux-kernel@...r.kernel.org>,
 Kernel Hardening <kernel-hardening@...ts.openwall.com>
Subject: Re: [RFC PATCH] Randomization of address chosen by mmap.


> On 28 Feb 2018, at 22:54, Kees Cook <keescook@...omium.org> wrote:
> 
> I was trying to understand the target entropy level, and I'm worried
> it's a bit biased. For example, if the first allocation lands at 1/4th
> of the memory space, the next allocation (IIUC) has a 50% chance of
> falling on either side of it. If it goes on the small side, it then
> has much less entropy than if it had gone on the other side. I think
> this may be less entropy than choosing a random address and just
> seeing if it fits or not. Dealing with collisions could be done either
> by pushing the address until it doesn't collide or picking another
> random address, etc. This is probably more expensive, though, since it
> would need to walk the vma tree repeatedly. Anyway, I was ultimately
> curious about your measured entropy and what alternatives you
> considered.

Let me start with the options we have here.
Suppose we need to choose a random address from the free memory pool, and
suppose we keep an array of gaps sorted by gap size in descending order.
First we find the highest index whose gap still satisfies the requested
length. For each suitable gap (every gap with a smaller index) we count how
many pages inside it could hold the request, and we compute the total count
of such pages across all suitable gaps. Then we take a random value modulo
that total. Walking the suitable gaps and subtracting each gap's count from
this value until it would go below zero gives us the chosen gap and an
offset inside it; adding the gap start to that offset yields a uniformly
random suitable address.
In this scheme we have to maintain the array of gaps: every time the
address space changes, the array has to be updated to stay consistent.
That is a very big overhead on any change.
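To make this concrete, here is a minimal userspace sketch of that weighted
choice (not kernel code: struct gap, the gaps[] array and the fixed
PAGE_SIZE are made up for illustration), assuming the array is already
sorted by gap length in descending order:

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL

/* Hypothetical gap descriptor: [start, start + length) is free. */
struct gap {
	unsigned long start;
	unsigned long length;	/* bytes, multiple of PAGE_SIZE */
};

/*
 * Pick a uniformly random page-aligned address that can hold 'len' bytes,
 * given 'gaps' sorted by length in descending order.  Returns 0 on failure.
 */
static unsigned long pick_random_addr(struct gap *gaps, int nr_gaps,
				      unsigned long len)
{
	unsigned long total = 0, r;
	int i;

	/* Count, for every suitable gap, how many start pages fit 'len'. */
	for (i = 0; i < nr_gaps && gaps[i].length >= len; i++)
		total += (gaps[i].length - len) / PAGE_SIZE + 1;

	if (!total)
		return 0;

	r = (unsigned long)rand() % total;	/* random modulo the total */

	/* Subtract per-gap counts until 'r' falls inside a gap. */
	for (i = 0; i < nr_gaps && gaps[i].length >= len; i++) {
		unsigned long pages = (gaps[i].length - len) / PAGE_SIZE + 1;

		if (r < pages)
			return gaps[i].start + r * PAGE_SIZE;
		r -= pages;
	}
	return 0;	/* not reached */
}

int main(void)
{
	struct gap gaps[] = {
		{ 0x7f0000000000UL, 64 * PAGE_SIZE },
		{ 0x7f2000000000UL, 16 * PAGE_SIZE },
		{ 0x7f4000000000UL,  4 * PAGE_SIZE },
	};

	printf("0x%lx\n", pick_random_addr(gaps, 3, 8 * PAGE_SIZE));
	return 0;
}

Every page-aligned start address that can hold the request is equally
likely here, which is exactly the "pure random" distribution discussed
above - but only as long as the gaps array is kept up to date.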

Pure random selection looks really expensive, so let's try to improve on it.

We can't just pick a random address and retry again and again until we find
a fit - this approach has non-deterministic behaviour and nobody knows when
it will stop. The same applies if we walk the tree in a random direction.

We can walk the tree, build an array of suitable gaps, and choose one of
them. In my current approach (a proof of concept) the array length is 1,
which is why the last gaps seen are chosen with higher probability - I
agree with that observation. It is possible to enlarge the array at the
cost of some memory. For example, struct mm could hold an array of 1024
gaps. We would do the same thing: walk the tree and randomly fill this
array (everything locked under the mm semaphore taken for write). Once the
array is full or the whole tree has been walked, we choose a gap at random.
What do you think about it?
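One way to "randomly fill" a fixed-size candidate array so that every
suitable gap seen during the walk has an equal chance of staying in it is
classic reservoir sampling. Below is a rough userspace sketch of that idea;
the tree walk is replaced by a plain loop, and struct gap, MAX_GAPS and
consider_gap() are hypothetical names, not the actual patch code:

#include <stdio.h>
#include <stdlib.h>

#define MAX_GAPS 1024	/* size of the per-mm candidate array (assumption) */

/* Hypothetical gap descriptor, as in the previous sketch. */
struct gap {
	unsigned long start;
	unsigned long length;
};

static struct gap reservoir[MAX_GAPS];
static int nr_candidates;
static unsigned long nr_seen;

/*
 * Called for every suitable gap found during the (simulated) tree walk.
 * Classic reservoir sampling: each suitable gap ends up in the array with
 * equal probability, no matter how many gaps the walk visits.
 */
static void consider_gap(struct gap *g)
{
	nr_seen++;
	if (nr_candidates < MAX_GAPS) {
		reservoir[nr_candidates++] = *g;
	} else {
		unsigned long slot = (unsigned long)rand() % nr_seen;

		if (slot < MAX_GAPS)
			reservoir[slot] = *g;
	}
}

int main(void)
{
	/* Simulated walk over suitable gaps; the real one would use the vma tree. */
	struct gap gaps[] = {
		{ 0x7f0000000000UL, 1 << 20 },
		{ 0x7f2000000000UL, 1 << 18 },
		{ 0x7f4000000000UL, 1 << 22 },
	};
	struct gap *chosen;
	int i;

	for (i = 0; i < 3; i++)
		consider_gap(&gaps[i]);

	/* After the walk, pick one candidate uniformly at random. */
	chosen = &reservoir[rand() % nr_candidates];
	printf("chosen gap at 0x%lx\n", chosen->start);
	return 0;
}

The reservoir only guarantees a uniform choice among the gaps the walk has
actually visited, so the walk still has to cover the whole tree (or be cut
off deliberately) before the final pick; the 1024-entry size is purely a
memory vs. entropy trade-off.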

Thanks,
Ilya


