Date: Sat, 3 Mar 2018 16:58:40 +0300
From: Ilya Smith <blackzert@...il.com>
To: Daniel Micay <danielmicay@...il.com>
Cc: Matthew Wilcox <willy@...radead.org>,
 Kees Cook <keescook@...omium.org>,
 Andrew Morton <akpm@...ux-foundation.org>,
 Dan Williams <dan.j.williams@...el.com>,
 Michal Hocko <mhocko@...e.com>,
 "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
 Jan Kara <jack@...e.cz>,
 Jerome Glisse <jglisse@...hat.com>,
 Hugh Dickins <hughd@...gle.com>,
 Helge Deller <deller@....de>,
 Andrea Arcangeli <aarcange@...hat.com>,
 Oleg Nesterov <oleg@...hat.com>,
 Linux-MM <linux-mm@...ck.org>,
 LKML <linux-kernel@...r.kernel.org>,
 Kernel Hardening <kernel-hardening@...ts.openwall.com>
Subject: Re: [RFC PATCH] Randomization of address chosen by mmap.

Hello Daniel, thanks for sharing your experience!

> On 1 Mar 2018, at 00:02, Daniel Micay <danielmicay@...il.com> wrote:
> 
> I don't think it makes sense for the kernel to attempt mitigations to
> hide libraries. The best way to do that is in userspace, by having the
> linker reserve a large PROT_NONE region for mapping libraries (both at
> initialization and for dlopen) including a random gap to act as a
> separate ASLR base.
Why is this the best way, and what limits the size of this large region?
Let's think outside the box.
What you describe is a separate memory region for libraries, built without 
changing the kernel. But the basic idea is that you get a separate region for 
libraries only. You probably also want separate regions for each thread stack, 
for mmapped files, for shared memory, etc. That would protect memory regions of 
different types from each other. It is impossible to implement this without 
keeping track of the whole memory map, and that map must be secure against any 
leak attack to prevent an ASLR bypass. The only way to do that is to implement 
it in the kernel and provide dedicated syscalls like uselib or allocstack, etc. 
That is really hard with the current kernel implementation.

My approach was to hide memory regions from attacker and from each other.

> If an attacker has library addresses, it's hard to
> see much point in hiding the other libraries from them.

In some cases an attacker has only one leak for the whole attack, and we should
do our best to make even that one leak useless.

> It does make
> sense to keep them from knowing the location of any executable code if
> they leak non-library addresses. An isolated library region + gap is a
> feature we implemented in CopperheadOS and it works well, although we
> haven't ported it to Android 7.x or 8.x.
This is interesting to know, and I would like to try to attack it, but it's out
of the scope of the current conversation.

> I don't think the kernel can
> bring much / anything to the table for it. It's inherently the
> responsibility of libc to randomize the lower bits for secondary
> stacks too.

I think every bit of a secondary stack address should be randomized, to give an
attacker as little information as we can.

> Fine-grained randomized mmap isn't going to be used if it causes
> unpredictable levels of fragmentation or has a high / unpredictable
> performance cost.

Let's pretend any chosen address is purely random and always satisfies the 
request. At some point we fail to mmap a new chunk of size N. What does this 
mean? It means that all chunks of size N are occupied and we cannot even find a 
place between them. Now let's count the memory already allocated, assuming each 
of these occupied chunks holds at least one page. The number of such pages is 
TASK_SIZE / N, so the minimum number of bytes already allocated is 
PAGE_SIZE * TASK_SIZE / N. Now we can calculate, with TASK_SIZE = 2^48 bytes 
and PAGE_SIZE = 4096. If N is 1 MB, the minimum allocated memory is 
1099511627776 bytes (1 TB), which is a very big number. OK, if N is 256 MB, we 
must already have consumed 4 GB of memory, and that is still acceptable. But if 
N is 1 GB, we have allocated only 1 GB, and that looks like a problem: after 
allocating 1 GB of memory we can no longer mmap a 1 GB chunk. Sounds scary, but 
this is the absolute worst case, where each 1 GB chunk holds only one page. In 
reality this number would be much bigger, and random according to this patch.

Let's stop here and think: what if we know the application is going to consume 
that much memory? Can we still protect it? An attacker would know he has a good 
probability of guessing an address with read permissions, so in this case ASLR 
may not work at all. For such applications we could turn off address 
randomization, or decrease the entropy level, since it will not help much 
anyway.

It would be good to know what performance costs you see here. Could you please
share them?

> I don't think it makes sense to approach it
> aggressively in a way that people can't use. The OpenBSD randomized
> mmap is a fairly conservative implementation to avoid causing
> excessive fragmentation. I think they do a bit more than adding random
> gaps by switching between different 'pivots' but that isn't very high
> benefit. The main benefit is having random bits of unmapped space all
> over the heap when combined with their hardened allocator which
> heavily uses small mmap mappings and has a fair bit of malloc-level
> randomization (it's a bitmap / hash table based slab allocator using
> 4k regions with a page span cache and we use a port of it to Android
> with added hardening features but we're missing the fine-grained mmap
> rand it's meant to have underneath what it does itself).
> 

So you think the OpenBSD implementation is even better? It seems you like it
after all.

> The default vm.max_map_count = 65530 is also a major problem for doing
> fine-grained mmap randomization of any kind and there's the 32-bit
> reference count overflow issue on high memory machines with
> max_map_count * pid_max which isn't resolved yet.

I’ve read the thread about it. That is something that should be fixed anyway.

Thanks,
Ilya
