Date: Tue, 18 Jul 2017 13:27:54 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: [PATCH] malloc/expand_heap.c: really try to use all
 available memory

On Tue, Jul 18, 2017 at 08:52:59AM -0400, Steven Walter wrote:
> Previously expand_heap would ask for increasingly larger mmap areas
> every time, in order to avoid allocating a bunch of small areas and
> fragmenting memory.  However, after a point, we may ask for an mmap area
> so large that it can't possibly succeed, even though there is still
> system memory available.  This is particularly likely on a 32-bit system
> with gigs of RAM, or else on a no-MMU system with pretty much any amount
> of RAM.  Without an MMU to make physically-discontiguous pages appear
> contiguous, the chance of any large mmap succeeding is very low.

This is partly intentional, since beyond this point fragmentation is
expected to be catastrophic to the point that it would prevent mmap
(and on nommu, bring down the whole system). But if it's seriously
capping how much memory you can get with malloc on a 32-bit system
(esp one with mmu), some tweaking is probably needed.

> To fix this, support decreasing mmap_step once we hit an allocation
> failure.  We'll try smaller and smaller amounts until we either ask for
> a single page or exactly as many pages as we currently need.  Only if
> that fails do we fail the overall request.
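As described, the proposed fallback behaves roughly like the sketch below. This is a simplified model, not musl's actual expand_heap: `try_map` and the fixed `budget` are hypothetical stand-ins for mmap and the available contiguous address space, and the step bookkeeping is condensed.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Hypothetical stand-in for mmap: the request "fails" once it
 * exceeds a fixed budget of contiguous address space. */
static size_t budget = 64 * PAGE_SIZE;

static int try_map(size_t len)
{
    return len <= budget;
}

/* Sketch of the patch's idea: start from the geometrically grown
 * request, and on failure retry with smaller and smaller sizes,
 * down to the larger of one page and what is actually needed. */
static size_t expand(size_t need, unsigned *mmap_step)
{
    size_t min = need > PAGE_SIZE ? need : PAGE_SIZE;
    size_t len = (size_t)PAGE_SIZE << *mmap_step;
    if (len < min) len = min;
    while (!try_map(len)) {
        if (len <= min) return 0; /* even the minimum failed */
        len /= 2;                 /* back off */
        if (len < min) len = min;
    }
    if (len > min) ++*mmap_step;  /* keep growing for next time */
    return len;
}
```

With, say, mmap_step at 10 (a 4 MiB request) against the 256 KiB budget above, the loop backs off until the 64-page request fits; a need larger than the budget still fails outright.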

This is probably a really bad idea, though I don't have data to back
that up; it's just an intuition about what happens. Would it work to
instead decrease the growth rate on requests? If you're at the point
where a single page (or a few pages) makes the difference between
success and failure, it's already highly unpredictable (due to aslr &
other memory layout factors) whether your job is going to succeed
either way. If you get in a situation where there are hundreds of megs
free but you can't use them, that's a real problem, but I think it's
one we can solve without dropping down to small requests.
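One way to picture the decreased growth rate: keep doubling the request size only up to a cap, then hold it flat, so the request never becomes implausible on a 32-bit address space. The cap here is purely illustrative, not a number musl uses.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096u

/* Hypothetical growth schedule: double the request up to a cap,
 * then stay at the cap instead of growing without bound. */
static size_t next_request(size_t prev)
{
    const size_t cap = 256 * PAGE_SIZE; /* 1 MiB: illustrative only */
    if (prev >= cap) return cap;
    size_t next = prev * 2;
    return next > cap ? cap : next;
}
```

Capping (or otherwise flattening) the schedule trades a few extra mappings for never issuing a request so large it can only fail, which is the failure mode the patch is reacting to.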

Note that there's also a bad fragmentation behavior from your proposal
if a large expand_heap is only temporarily blocked, e.g. by a
short-lived mmap. In that case, dropping down to smaller requests will
just fragment memory horribly so that subsequent requests (after the
mmap is released) will fail even worse.

Rich
