Date: Mon, 5 Mar 2018 22:27:32 +0300
From: Ilya Smith <blackzert@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Daniel Micay <danielmicay@...il.com>,
 Kees Cook <keescook@...omium.org>,
 Andrew Morton <akpm@...ux-foundation.org>,
 Dan Williams <dan.j.williams@...el.com>,
 Michal Hocko <mhocko@...e.com>,
 "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
 Jan Kara <jack@...e.cz>,
 Jerome Glisse <jglisse@...hat.com>,
 Hugh Dickins <hughd@...gle.com>,
 Helge Deller <deller@....de>,
 Andrea Arcangeli <aarcange@...hat.com>,
 Oleg Nesterov <oleg@...hat.com>,
 Linux-MM <linux-mm@...ck.org>,
 LKML <linux-kernel@...r.kernel.org>,
 Kernel Hardening <kernel-hardening@...ts.openwall.com>
Subject: Re: [RFC PATCH] Randomization of address chosen by mmap.




> On 5 Mar 2018, at 19:23, Matthew Wilcox <willy@...radead.org> wrote:
> 
> On Mon, Mar 05, 2018 at 04:09:31PM +0300, Ilya Smith wrote:
>> 
>> I’m analysing that approach and see much more problems:
>> - each time you call mmap like this, you still  increase count of vmas as my 
>> patch did
> 
> Umm ... yes, each time you call mmap, you get a VMA.  I'm not sure why
> that's a problem with my patch.  I was trying to solve the problem Daniel
> pointed out, that mapping a guard region after each mmap cost twice as
> many VMAs, and it solves that problem.
> 
The issue Daniel mentioned was the VMA count: the more VMAs there are,
the more expensive the tree walk becomes. I think this is fine.
>> - the entropy you provide is like 16 bit, that is really not so hard to brute
> 
> It's 16 bits per mapping.  I think that'll make enough attacks harder
> to be worthwhile.

Well yes, it's ok, sorry. I would just like to have up to 32 bits of entropy some day :)

>> - in your patch you don’t use vm_guard at address searching, I see many roots 
>> of bugs here
> 
> Don't need to.  vm_end includes the guard pages.
> 
>> - if you unmap/remap one page inside region, field vma_guard will show head 
>> or tail pages for vma, not both; kernel don’t know how to handle it
> 
> There are no head pages.  The guard pages are only placed after the real end.
> 

Ok, we have MG where G = vm_guard, right? So when you do a VMA split,
you may end up in the situation m1g1m2G - how is that handled? I mean the
case where M is split by unmapping only one page inside the region.

>> - user mode now choose entropy with PROT_GUARD macro, where did he gets it? 
>> User mode shouldn’t be responsible for entropy at all
> 
> I can't agree with that.  The user has plenty of opportunities to get
> randomness; from /dev/random is the easiest, but you could also do timing
> attacks on your own cachelines, for example.

I think the usual case is to use randomization for every mmap in a process,
or not at all. So it would be nice to have a variable controlling this,
changeable with sysctl (root only) and ioctl (for greedy processes).

Well, let me summarize:
My approach chooses a random address inside the gap range with the following lines:

+	addr = get_random_long() % ((high - low) >> PAGE_SHIFT);
+	addr = low + (addr << PAGE_SHIFT);

This could be improved by limiting the maximum entropy used in this shift.
To prevent the situation where an attacker massages allocations and
predicts the chosen address, I also choose the memory region randomly. I still
like my idea, but I'm not going to push it anymore, since you have yours now.

Your idea just provides a random non-mappable and non-accessible offset
from the best-fit region. This consumes address space (a gap of almost
256MB on 4KB pages if the random value is 0xffff). But it works, should be
faster, and should resolve the issue.

My point was that the current implementation needs to change, and you
have your own approach for that. :)
Let's keep mine in mind until better times (or worse?) ;)
Will you finish your approach and upstream it?

Best regards,
Ilya
