Date: Mon, 25 May 2020 17:45:33 +0200
From: Pirmin Walthert <pirmin.walthert@...om.ch>
To: musl@...ts.openwall.com
Subject: Re: mallocng progress and growth chart

On 18.05.20 at 20:53, Rich Felker wrote:

> On Sat, May 16, 2020 at 11:30:25PM -0400, Rich Felker wrote:
>> Another alternative for avoiding eager commit at low usage, which
>> works for all but nommu: when adding groups with nontrivial slot count
>> at low usage, don't activate all the slots right away. Reserve vm
>> space for 7 slots for a 7x4672, but only unprotect the first 2 pages,
>> and treat it as a group of just 1 slot until there are no slots free
>> and one is needed. Then, unprotect another page (or more if needed to
>> fit another slot, as would be needed at larger sizes) and adjust the
>> slot count to match. (Conceptually; implementation-wise, the slot
>> count would be fixed, and there would just be a limit on the number of
>> slots made available when transformed from "freed" to "available" for
>> activation.)
>>
>> Note that this is what happens anyway with physical memory as clean
>> anonymous pages are first touched, but (1) doing it without explicit
>> unprotect over-counts the not-yet-used slots for commit charge
>> purposes and breaks tightly-memory-constrained environments (global
>> commit limit or cgroup) and (2) when all slots are initially available
>> as they are now, repeated free/malloc cycles for the same size will
>> round-robin all the slots, touching them all.
>>
>> Here, property (2) is of course desirable for hardening at moderate to
>> high usage, but at low usage UAF tends to be less of a concern
>> (because you don't have complex data structures with complex lifetimes
>> if you hardly have any malloc).
>>
>> Note also that (2) could be solved without addressing (1) just by
>> skipping the protection aspect of this idea and only using the
>> available-slot-limiting part.
> One abstract way of thinking about the above is that it's just a
> per-size-class bump allocator, pre-reserving enough virtual address
> space to end sufficiently close to a page boundary that there's no
> significant memory waste. This is actually fairly elegant, and might
> obsolete some of the other measures taken to avoid overly eager
> allocation. So this might be a worthwhile direction to pursue.
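
(To make the quoted idea concrete: below is a minimal, standalone C sketch
of reserving address space for a whole group with PROT_NONE and then
unprotecting pages lazily as more slots are activated. All names here
(lazy_group, group_init, group_grow) are made up for illustration; this is
not mallocng's actual code, just one way the scheme could look.)

#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

struct lazy_group {
	unsigned char *base;   /* start of reserved (initially PROT_NONE) mapping */
	size_t slot_size;      /* size of each slot in bytes */
	size_t total_slots;    /* slots the reservation can ultimately hold */
	size_t active_slots;   /* slots whose pages are currently unprotected */
	size_t map_len;        /* total length of the reservation */
};

static size_t pagesize(void)
{
	return (size_t)sysconf(_SC_PAGESIZE);
}

static size_t page_round_up(size_t n)
{
	size_t p = pagesize();
	return (n + p - 1) & ~(p - 1);
}

/* Reserve address space for `total` slots but only make `initial` usable. */
static int group_init(struct lazy_group *g, size_t slot_size,
                      size_t total, size_t initial)
{
	g->slot_size = slot_size;
	g->total_slots = total;
	g->map_len = page_round_up(slot_size * total);
	g->base = mmap(0, g->map_len, PROT_NONE,
	               MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
	if (g->base == MAP_FAILED) return -1;
	/* Unprotect just enough pages to cover the first `initial` slots. */
	size_t want = page_round_up(slot_size * initial);
	if (mprotect(g->base, want, PROT_READ|PROT_WRITE)) return -1;
	g->active_slots = want / slot_size;
	if (g->active_slots > total) g->active_slots = total;
	return 0;
}

/* Called when all currently active slots are in use and one more is
 * needed: unprotect enough additional page(s) to cover one more slot,
 * then set the active count to everything those pages now cover. */
static int group_grow(struct lazy_group *g)
{
	if (g->active_slots >= g->total_slots) return -1; /* group is full */
	size_t have = page_round_up(g->slot_size * g->active_slots);
	size_t want = page_round_up(g->slot_size * (g->active_slots + 1));
	if (want > g->map_len) want = g->map_len;
	if (mprotect(g->base + have, want - have, PROT_READ|PROT_WRITE))
		return -1;
	g->active_slots = want / g->slot_size;
	if (g->active_slots > g->total_slots) g->active_slots = g->total_slots;
	return 0;
}

int main(void)
{
	struct lazy_group g;
	/* the 7x4672 example from the mail: reserve 7 slots, activate 1 */
	if (group_init(&g, 4672, 7, 1)) return 1;
	printf("active after init: %zu of %zu\n", g.active_slots, g.total_slots);
	memset(g.base, 0, g.slot_size * g.active_slots); /* touch active slots */
	while (group_grow(&g) == 0)
		printf("active after grow: %zu\n", g.active_slots);
	munmap(g.base, g.map_len);
	return 0;
}

With the 7x4672 numbers above, the sketch unprotects two pages up front
(one slot active) and then grows page by page until all seven slots are
active, at which point group_grow() reports the group as full.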

Dear Rich,

Currently we use mallocng in production for most applications in our
"embedded like" virtualised system setups; it has even helped us find
some bugs (for example in asterisk), as mallocng is less forgiving than
the old malloc implementation. So if you're interested in real-world
feedback: everything seems to be running quite smoothly so far. Thanks
for this great work.

Currently we use the git version from April 24th, i.e. the version from
before you merged the big optimization changes. Since you mentioned in
your "brainstorming mails" (if I understood them correctly) that you
might rethink a few of these changes, I'd like to ask: do you think it
would be better to use the current git master rather than the April 24th
version (we are not THAT memory constrained, so stability is the most
important thing), or would it be better to stick with the old version
and wait for the next changes to be merged?

Best regards,

Pirmin
