Date: Mon, 19 Sep 2022 23:28:06 -0400
From: Rich Felker <dalias@...c.org>
To: baiyang <baiyang@...il.com>
Cc: musl <musl@...ts.openwall.com>
Subject: Re: Re: The heap memory performance (malloc/free/realloc) is
 significantly degraded in musl 1.2 (compared to 1.1)

On Tue, Sep 20, 2022 at 10:35:02AM +0800, baiyang wrote:
> > You seem to think that if the group stride was 8100, calling realloc might memcpy up to 8100 bytes. This is not the case.
> 
> Yes, I already understood that mallocng would only memcpy 6600 bytes,
> once I was told that malloc_usable_size returns the size requested by
> the user.
> 
> But AFAIK, many other malloc implementations don't keep track of the
> 6600-byte requested size, so they're actually going to memcpy the full
> 8100 bytes.

You have made up a problem that does not exist. Specifically, at least
as far as I can tell, no such implementation exists.

The ones that return some value larger than the requested size are
returning "the requested size, rounded up to a multiple of 16" or
similar. Not "the requested size plus 1500 bytes". For the
dlmalloc-style allocators, this is because they do not have uniform
slots for size classes; they have arbitrary splits, and only care
about keeping each chunk's start/end properly aligned. And for more
slab-like allocators, they all keep track of at least a coarse "actual
size" specifically so that realloc does not gratuitously copy large
amounts of unnecessary data (among other reasons).
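
(Rough sketch of what keeping even a coarse per-allocation size buys
realloc, with hypothetical helper names, not any real allocator's code:
the copy is bounded by the recorded size, never by the slot stride.)

    #include <string.h>
    #include <stddef.h>

    struct alloc_meta {
        size_t recorded_size;   /* requested size, or rounded to 16 */
        size_t slot_stride;     /* e.g. the 8100-byte size class */
    };

    struct alloc_meta *lookup_meta(void *p);   /* hypothetical */
    void *alloc_slot(size_t n);                /* hypothetical */
    void free_slot(void *p);                   /* hypothetical */

    void *sketch_realloc(void *p, size_t n)
    {
        struct alloc_meta *m = lookup_meta(p);
        if (n <= m->slot_stride) {
            m->recorded_size = n;    /* still fits: no copy at all */
            return p;
        }
        void *q = alloc_slot(n);
        if (!q) return 0;
        size_t copy = m->recorded_size < n ? m->recorded_size : n;
        memcpy(q, p, copy);          /* ~6600 bytes, not the 8100 stride */
        free_slot(p);
        return q;
    }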

> > You also seem to be under the impression that the work to determine
> > that the size was 6600 and not 8100 is where most (or at least a
> > significant portion of) the time is spent.  This is also not the case.
> > The majority of the metadata processing time is chasing pointers back
> > to the out-of-band metadata, validating it, validating that it
> > round-trips back, and validating various other things. Some of these
> > could in principle be omitted at the cost of loss-of-hardening.
> 
> Yes, according to my previous understanding (which seems wrong now),
> other malloc_usable_size implementations that directly return 8100
> (the actual allocated size-class length)

They don't return 8100. They return something like 6608 or 6624.
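
(For instance, 6600 rounded up to a 16-byte granule is 6608, and to a
32-byte granule 6624:)

    /* illustrative rounding only, not a particular allocator's formula */
    (6600 + 15) & ~(size_t)15    /* == 6608 */
    (6600 + 31) & ~(size_t)31    /* == 6624 */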

> such as tcmalloc are all very fast, so I could only conclude that
> mallocng is so much slower than them because it has to return 6600,
> not 8100.

This does not follow at all. tcmalloc is fast because it does not have
global consistency, does not have any notable hardening, and (see the
name) keeps large numbers of freed slots *cached* to reuse, thereby
using lots of extra memory. Its malloc_usable_size is not fast because
of returning the wrong value, if it even does return the wrong value
(I have no idea). It's fast because tcmalloc stores the size in-band
right next to the allocated memory and trusts that it's valid, rather than
computing it from out-of-band metadata that is not subject to
falsification unless the attacker already has nearly full control of
execution.
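
(A simplified illustration of that difference, with made-up types and
function names rather than either allocator's real layout:)

    #include <stddef.h>
    #include <assert.h>

    /* In-band: the size word sits right next to the allocation, so
     * reading it is one trusted load -- and an overflow from the
     * neighboring object can rewrite it. */
    static size_t inband_size(void *p)
    {
        return ((size_t *)p)[-1];
    }

    /* Out-of-band: chase a pointer to separate metadata and sanity-check
     * that it really describes this slot before believing anything in it.
     * meta_for, meta_is_sane and slot_size are hypothetical stand-ins
     * for that validation work, not mallocng's actual interfaces. */
    struct meta;
    struct meta *meta_for(void *p);
    int meta_is_sane(const struct meta *m, void *p);
    size_t slot_size(const struct meta *m, void *p);

    static size_t oob_size(void *p)
    {
        struct meta *m = meta_for(p);
        assert(meta_is_sane(m, p));   /* round-trip/consistency checks */
        return slot_size(m, p);
    }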

> Apart from this difference, there is no reason for it to be slower
> than other implementations of malloc_usable_size, as I understand it.

Comparing the speed of malloc_usable_size is utterly pointless. That
is not where the time is spent in any real-world load that's not
gratuitously doing stupid things. I imagine tcmalloc's is faster, for
the reasons explained above (which have nothing to do with whether the
value returned is the exact size you requested). But this has nothing
to do with why the overall performance is higher.

> If this is not the main reason, can we speed up this algorithm with
> the help of a fast lookup-table mechanism like tcmalloc's? As I said
> before, this would not only greatly improve the performance of
> malloc_usable_size, but also that of realloc and free.

No, because the fundamental difference is where you're storing the
data, and the *whole point* of mallocng is that we're storing it in a
place where it's not subject to attack via overflows from other
objects, UAF/DF, etc. This makes it somewhat costlier (but still very
reasonable) to obtain.

If you have a particular application where you actually want to trade
safety and low memory usage for performance, you're perfectly free to
link that application to whatever malloc implementation you like. musl
has supported doing that since 1.1.20. It makes no sense to do this
system-wide. You would just increase memory usage and attack surface
drastically with nothing to show for it, since the vast majority of
programs *don't care* how long malloc takes.
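
(A deliberately minimal sketch of what that looks like, with a
hypothetical "xalloc" allocator standing in for whatever you actually
link; the replacement works by providing the malloc-family symbols
yourself:)

    /* my_malloc.c: replacing the allocator for one application built
     * against musl >= 1.1.20.  Real replacements (jemalloc, mimalloc,
     * ...) already export these symbols, so linking them in directly is
     * usually enough. */
    #include <stddef.h>
    #include <string.h>

    void *xalloc_alloc(size_t n);              /* hypothetical */
    void *xalloc_realloc(void *p, size_t n);   /* hypothetical */
    void  xalloc_free(void *p);                /* hypothetical */

    void *malloc(size_t n)           { return xalloc_alloc(n); }
    void *realloc(void *p, size_t n) { return xalloc_realloc(p, n); }
    void  free(void *p)              { xalloc_free(p); }

    void *calloc(size_t m, size_t n)
    {
        if (n && m > (size_t)-1 / n) return 0;   /* overflow check */
        void *p = xalloc_alloc(m * n);
        return p ? memset(p, 0, m * n) : p;
    }

    /* A complete replacement should also cover aligned_alloc, memalign,
     * posix_memalign, and malloc_usable_size. */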

Rich
