Date: Wed, 28 Feb 2018 20:13:00 +0300
From: Ilya Smith <blackzert@...il.com>
To: Kees Cook <keescook@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
 Dan Williams <dan.j.williams@...el.com>,
 Michal Hocko <mhocko@...e.com>,
 "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
 Jan Kara <jack@...e.cz>,
 Jerome Glisse <jglisse@...hat.com>,
 Hugh Dickins <hughd@...gle.com>,
 Matthew Wilcox <willy@...radead.org>,
 Helge Deller <deller@....de>,
 Andrea Arcangeli <aarcange@...hat.com>,
 Oleg Nesterov <oleg@...hat.com>,
 Linux-MM <linux-mm@...ck.org>,
 LKML <linux-kernel@...r.kernel.org>,
 Kernel Hardening <kernel-hardening@...ts.openwall.com>
Subject: Re: [RFC PATCH] Randomization of address chosen by mmap.

Hello Kees,

Thanks for your time spent on that!

> On 27 Feb 2018, at 23:52, Kees Cook <keescook@...omium.org> wrote:
> 
> I'd like more details on the threat model here; if it's just a matter
> of .so loading order, I wonder if load order randomization would get a
> comparable level of uncertainty without the memory fragmentation,
> like:
> https://android-review.googlesource.com/c/platform/bionic/+/178130/2
> If glibc, for example, could do this too, it would go a long way to
> improving things. Obviously, it's not as extreme as loading stuff all
> over the place, but it seems like the effect for an attack would be
> similar. The search _area_ remains small, but the ordering wouldn't be
> deterministic any more.
> 

I'm afraid library load order randomization wouldn't help much; there are
several cases described in chapter 2 of
http://www.openwall.com/lists/oss-security/2018/02/27/5
where it is still possible to bypass ASLR.

I agree that library load order randomization is a good improvement, but with
my patch applied I don't think it adds much value. On my GitHub
https://github.com/blackzert/aslur I have provided tests and will combine them
into an 'all in one' chain later.
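
For reference, here is a minimal sketch of the load-order shuffle idea being
discussed (purely illustrative: this is not the bionic or glibc code, and
shuffle_needed() is a made-up helper; real loaders also have to respect
dependency ordering):

/*
 * Fisher-Yates shuffle of the list of needed DSOs before the dynamic
 * loader maps them, so their relative order is no longer deterministic.
 */
#include <stdlib.h>

static void shuffle_needed(const char **needed, size_t count)
{
	size_t i;

	if (count < 2)
		return;

	for (i = count - 1; i > 0; i--) {
		/* assume random() was seeded from a strong source */
		size_t j = (size_t)random() % (i + 1);
		const char *tmp = needed[i];

		needed[i] = needed[j];
		needed[j] = tmp;
	}
}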

> It would be worth spelling out the "not recommended" bit some more
> too: this fragments the mmap space, which has some serious issues on
> smaller address spaces if you get into a situation where you cannot
> allocate a hole large enough between the other allocations.
> 

I agree, that's exactly the point.

>> vm_unmapped_area(struct vm_unmapped_area_info *info)
>> {
>> +       if (current->flags & PF_RANDOMIZE)
>> +               return unmapped_area_random(info);
> 
> I think this will need a larger knob -- doing this by default is
> likely to break stuff, I'd imagine? Bikeshedding: I'm not sure if this
> should be setting "3" for /proc/sys/kernel/randomize_va_space, or a
> separate one like /proc/sys/mm/randomize_mmap_allocation.

I will improve it as you suggested; adding such a knob looks like the better option.
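
For example, a dedicated knob could look roughly like this (only a sketch of
the direction, not the final patch; the names sysctl_randomize_mmap_allocation
and /proc/sys/vm/randomize_mmap_allocation are illustrative):

static int zero;
static int one = 1;

int sysctl_randomize_mmap_allocation __read_mostly = 1;

static struct ctl_table randomize_mmap_table[] = {
	{
		.procname	= "randomize_mmap_allocation",
		.data		= &sysctl_randomize_mmap_allocation,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
	{ }
};

/* ... and in vm_unmapped_area(): */
	if (sysctl_randomize_mmap_allocation &&
	    (current->flags & PF_RANDOMIZE))
		return unmapped_area_random(info);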

>> +       // first lets find right border with unmapped_area_topdown
> 
> Nit: kernel comments are /* */. (It's a good idea to run patches
> through scripts/checkpatch.pl first.)
> 

Sorry, I will fix it. Thanks!


>> +                       if (!rb_parent(prev))
>> +                               return -ENOMEM;
>> +                       vma = rb_entry(rb_parent(prev),
>> +                                      struct vm_area_struct, vm_rb);
>> +                       if (prev == vma->vm_rb.rb_right) {
>> +                               gap_start = vma->vm_prev ?
>> +                                       vm_end_gap(vma->vm_prev) : 0;
>> +                               goto check_current_down;
>> +                       }
>> +               }
>> +       }
> 
> Everything from here up is identical to the existing
> unmapped_area_topdown(), yes? This likely needs to be refactored
> instead of copy/pasted, and adjust to handle both unmapped_area() and
> unmapped_area_topdown().
> 

This part also records 'right_vma' as the border. If it is OK for the combined
version to return that vma struct, I'll do the refactoring.
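
For example, the shared helper could take roughly this shape (hypothetical
name and signature, just to show the direction):

/*
 * Let the existing top-down walk also hand back the vma that bounds the
 * gap it found, so unmapped_area_random() can reuse it as 'right_vma'
 * instead of duplicating the walk.
 */
static unsigned long
unmapped_area_topdown_vma(struct vm_unmapped_area_info *info,
			  struct vm_area_struct **right_vma);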

>> +               /* Go back up the rbtree to find next candidate node */
>> +               while (true) {
>> +                       struct rb_node *prev = &vma->vm_rb;
>> +
>> +                       if (!rb_parent(prev))
>> +                               BUG(); // this should not happen
>> +                       vma = rb_entry(rb_parent(prev),
>> +                                      struct vm_area_struct, vm_rb);
>> +                       if (prev == vma->vm_rb.rb_left) {
>> +                               gap_start = vm_end_gap(vma->vm_prev);
>> +                               gap_end = vm_start_gap(vma);
>> +                               if (vma == right_vma)
> 
> mm/mmap.c: In function ‘unmapped_area_random’:
> mm/mmap.c:1939:8: warning: ‘vma’ may be used uninitialized in this
> function [-Wmaybe-uninitialized]
>     if (vma == right_vma)
>        ^

Thanks, fixed!

>> +                                       break;
>> +                               goto check_current_up;
>> +                       }
>> +               }
>> +       }
> 
> What are the two phases here? Could this second one get collapsed into
> the first?
> 

Let me explain:
1. We use the current implementation (unmapped_area_topdown()) to find a gap
   at the highest suitable address and remember its vma as 'right_vma'.
2. We walk the tree starting from mm->mmap, which is the lowest vma.
3. We check whether the gap of the current vma satisfies the length and the
   low/high constraints.
4. If so, we call random() to decide whether to pick it. This is how we
   randomly choose the vma and its gap.
5. We keep walking the tree from the lowest vma towards the highest one,
   skipping subtrees whose maximum gap is too small, until we reach
   'right_vma'.

Once a gap has been chosen, we randomly pick an address inside it (a
simplified sketch of the idea follows below).
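
A simplified sketch of that selection idea (illustrative only: a linear walk
over the vma list instead of the rbtree walk in the patch, a made-up helper
name, and the alignment and low/high clamping from vm_unmapped_area_info are
omitted):

static unsigned long pick_random_gap(struct mm_struct *mm,
				     struct vm_area_struct *right_vma,
				     unsigned long length)
{
	struct vm_area_struct *vma;
	unsigned long chosen_start = 0, chosen_end = 0;
	unsigned long nr_suitable = 0, slots;

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		unsigned long gap_start = vma->vm_prev ?
					  vm_end_gap(vma->vm_prev) : 0;
		unsigned long gap_end = vm_start_gap(vma);

		/*
		 * Keep the n-th suitable gap with probability 1/n, so every
		 * suitable gap ends up chosen with equal probability.
		 */
		if (gap_end >= gap_start + length &&
		    get_random_long() % ++nr_suitable == 0) {
			chosen_start = gap_start;
			chosen_end = gap_end;
		}
		if (vma == right_vma)
			break;
	}
	if (!nr_suitable)
		return -ENOMEM;

	/* then pick a random page-aligned address inside the chosen gap */
	slots = (chosen_end - chosen_start - length) >> PAGE_SHIFT;
	if (slots)
		chosen_start += (get_random_long() % slots) << PAGE_SHIFT;
	return chosen_start;
}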

>> +       addr = get_random_long() % ((high - low) >> PAGE_SHIFT);
>> +       addr = low + (addr << PAGE_SHIFT);
>> +       return addr;
>> 
> 
> How large are the gaps intended to be? Looking at the gaps on
> something like Xorg they differ a lot.

Sorry, I can't quite follow. What's the context here? Did you try the patch,
or which case do you have in mind?

Thanks,
Ilya


