Date: Mon, 30 Jul 2018 20:47:28 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: malloc implementation survey: omalloc

On Sun, Jul 29, 2018 at 09:26:18PM +0200, Markus Wichmann wrote:
> Hi all,
> 
> we discussed rewriting malloc() a while back because, as I recall, Rich
> wasn't satisfied with the internal storage the current system uses
> (i.e. metadata is stored alongside the returned pointer), nor with some
> corner cases of lock contention, although fine-grained locking is a nice
> feature in itself.
> 
> I therefore had a look at existing malloc() algorithms, the rationale
> being that I considered malloc() a solved problem, so we would only have
> to find the right solution.
> 
> As it turns out, it appears Doug Lea was simply too successful: Many
> allocators follow his pattern in one way or another. But some systems
> buck the trend.
> 
> So today I found omalloc, the allocator OpenBSD uses, in this nice
> repository:
> 
> https://github.com/emeryberger/Malloc-Implementations

I haven't looked at it in a couple years, but last I did, the OpenBSD
allocator was not practical. It used individual mmaps for even
moderately large allocations, and used guard pages with each, which,
with the default Linux VMA limits, puts an upper bound of 32768 on the
number of non-tiny (roughly, larger-than-page IIRC) allocations.

It does have much stronger hardening against overflows than musl's
current malloc or any other allocator, but it seemed inferior in all
other ways.

I'll read the rest of your description later and see if there's
anything new that's interesting.

Rich
