Message-ID: <20180704145629.GQ1392@brightrain.aerifal.cx>
Date: Wed, 4 Jul 2018 10:56:29 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: Changing MMAP_THRESHOLD in malloc() implementation.

On Wed, Jul 04, 2018 at 12:35:02PM +0530, ritesh sonawane wrote:
> Sir, the above statistics are with a 64MB page size. We are using a
> system which has two page sizes: 2MB (large page) and 64MB (huge
> page).

If this is a custom architecture, have you considered using a variable
page size like x86 and others do? Unconditionally using large/huge
pages for everything seems like a really, really bad idea. Aside from
wasting memory, it makes COW latency spikes really high (memcpy of a
whole 2MB or 64MB on fault), and it's even worse for paging in mmapped
files from a block-device-backed filesystem.
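
For a rough feel of the cost, here is a minimal sketch (plain POSIX C;
the region size and timing approach are ours, not from this thread)
that measures the latency of the first write to a COW page in a forked
child. On an ordinary 4KB-page system the kernel copies 4KB on that
fault; on the system described above it would copy a whole 2MB or 64MB
page, so the spike scales with the page size:

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void)
{
	size_t len = (size_t)64 << 20;  /* 64MB of anonymous memory */
	char *p = mmap(0, len, PROT_READ|PROT_WRITE,
	               MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) return 1;
	memset(p, 1, len);              /* fault everything in before forking */
	pid_t pid = fork();
	if (!pid) {
		struct timespec t0, t1;
		clock_gettime(CLOCK_MONOTONIC, &t0);
		*(volatile char *)p = 2;    /* first write: COW copies one page */
		clock_gettime(CLOCK_MONOTONIC, &t1);
		long ns = (long)(t1.tv_sec - t0.tv_sec) * 1000000000L
		          + (t1.tv_nsec - t0.tv_nsec);
		printf("COW fault: %ld ns\n", ns);
		_exit(0);
	}
	wait(0);
	return 0;
}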

Rich

> On Wed, Jul 4, 2018 at 10:54 AM, ritesh sonawane <rdssonawane2317@...il.com>
> wrote:
> 
> > Thank you very much for the instant reply.
> >
> > Yes, sir, it is wasting memory for each shared library. But the memory
> > wastage is even worse when a program requests memory in sizes above
> > 224KB (the threshold value).
> >
> > -> If a program requests 1GB per request, it can use 45GB at most.
> > -> If a program requests 512MB per request, it can use 41.5GB at most.
> > -> If a program requests 225KB per request, it can use only about 167MB
> > at most (see the rough check below).
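> >
> > (A rough consistency check of the last figure, assuming each
> > over-threshold request consumes a whole 64MB huge page and roughly
> > 48GB of memory is available: 48GB / 64MB = 768 mappings, and
> > 768 x 225KB = about 169MB usable, close to the 167MB observed.)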
> >
> > As we ported musl-1.1.14 to our architecture, we are bound to make our
> > changes in that same code base. We have increased MMAP_THRESHOLD to 1GB
> > and also changed the calculation of the bin index, and after that we
> > observed an improvement in memory utilization: for a request size of
> > 225KB, the usable memory is now 47.6GB.
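> >
> > For reference, the kind of change involved, as we read the original
> > definition in musl-1.1.14's src/malloc/malloc.c (SIZE_ALIGN is
> > 4*sizeof(size_t), i.e. 32 bytes on a 64-bit target, which is where
> > the 224KB default comes from):
> >
> > #define MMAP_THRESHOLD (0x1c00*SIZE_ALIGN)  /* 0x1c00 * 32 = 224KB */
> >
> > The same 0x1c00 scale appears in the bin-index calculation, which is
> > why raising the threshold also required changing how bin indices are
> > computed.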
> >
> > But now we are facing a problem in multi-threaded applications. We
> > haven't changed the function pretrim(), because it uses some hard-coded
> > values like '40' and '3', and we are unable to understand how these
> > values were decided:
> >
> > static int pretrim(struct chunk *self, size_t n, int i, int j)
> > {
> >         size_t n1;
> >         struct chunk *next, *split;
> >
> >         /* We cannot pretrim if it would require re-binning. */
> >         if (j < 40) return 0;
> >         if (j < i+3) {
> >                 if (j != 63) return 0;
> >                 n1 = CHUNK_SIZE(self);
> >                 if (n1-n <= MMAP_THRESHOLD) return 0;
> >         } else {
> >                 n1 = CHUNK_SIZE(self);
> >         }
> >  .....
> >  .....
> > ......
> > }
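> >
> > Our tentative reading (assuming bins below index 40 are fine-grained,
> > one size class per bin, and higher bins are geometric, each spanning a
> > range of sizes): splitting a chunk from a fine-grained bin always moves
> > the remainder to a lower bin, so pretrim is refused for j < 40; for a
> > wide bin, a remainder several size classes above the request (j >= i+3)
> > can stay in bin j; and bin 63 has no upper bound, so a remainder still
> > larger than MMAP_THRESHOLD stays in bin 63. As a worked example, if
> > bin k covered [2^k, 2^(k+1)) bytes, a 96KB chunk in the 64KB bin split
> > by a 4KB request would leave 92KB, still in the same bin, while a 5KB
> > chunk split the same way would leave 1KB and require re-binning.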
> >
> > If we could get any clue as to how these values were decided, it would
> > be very helpful for us.
> >
> > Best Regards,
> > Ritesh Sonawane
> >
> > On Tue, Jul 3, 2018 at 8:13 PM, Rich Felker <dalias@...c.org> wrote:
> >
> >> On Tue, Jul 03, 2018 at 12:58:04PM +0530, ritesh sonawane wrote:
> >> > Hi All,
> >> >
> >> > We are using the musl-1.1.14 version for our architecture. It has a
> >> > page size of 2MB.
> >> > Due to the low threshold value there is more memory wastage, so we
> >> > want to change the value of MMAP_THRESHOLD.
> >> >
> >> > Can anyone please guide us on which factors we need to consider to
> >> > change this threshold value?
> >>
> >> It's not a parameter that can be changed freely; it's tied to the scale
> >> of the bin sizes. There is no framework to track and reuse freed chunks
> >> which are larger than MMAP_THRESHOLD, so you'd be replacing recoverable
> >> waste from page granularity with unrecoverable waste from the inability
> >> to reuse these larger freed chunks, except by breaking them up into
> >> pieces to satisfy smaller requests.
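> >>
> >> The cutover at the threshold is easy to observe; a minimal check (run
> >> it under strace; the exact threshold depends on the target's
> >> SIZE_ALIGN):
> >>
> >> #include <stdlib.h>
> >>
> >> int main(void)
> >> {
> >>         void *p = malloc(1 << 20);  /* 1MB, well above the 224KB default */
> >>         free(p);
> >>         return 0;
> >> }
> >>
> >> Above the threshold the allocation shows up as a private anonymous
> >> mmap and the free as an immediate munmap; below it, the memory stays
> >> in the heap's bins for reuse.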
> >>
> >> I may look into handling this better when replacing musl's malloc at
> >> some point, but if your system forces you to use ridiculously large
> >> pages like 2MB, you've basically committed to wasting huge amounts of
> >> memory anyway (at least 2MB for each shared library in each
> >> process)...
> >>
> >> With musl git-master and future releases, you have the option to link
> >> a malloc replacement library that might be a decent solution to your
> >> problem.
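> >>
> >> To sketch what a replacement minimally looks like (purely illustrative:
> >> a bump allocator that never reclaims memory, with names like ARENA_SIZE
> >> being ours; a real deployment would link something like jemalloc, and a
> >> complete replacement must also provide calloc, realloc, and the
> >> aligned-allocation functions):
> >>
> >> #include <stddef.h>
> >> #include <sys/mman.h>
> >>
> >> #define ARENA_SIZE ((size_t)64 << 20)   /* one fixed 64MB arena */
> >>
> >> static char *cur, *end;
> >>
> >> void *malloc(size_t n)
> >> {
> >>         if (n > ARENA_SIZE) return 0;   /* also guards the rounding below */
> >>         n = (n + 15) & ~(size_t)15;     /* 16-byte alignment */
> >>         if (!cur) {
> >>                 cur = mmap(0, ARENA_SIZE, PROT_READ|PROT_WRITE,
> >>                            MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
> >>                 if (cur == MAP_FAILED) { cur = 0; return 0; }
> >>                 end = cur + ARENA_SIZE;
> >>         }
> >>         if ((size_t)(end - cur) < n) return 0;
> >>         void *p = cur;
> >>         cur += n;
> >>         return p;
> >> }
> >>
> >> void free(void *p)
> >> {
> >>         (void)p;                        /* sketch only: never frees */
> >> }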
> >>
> >> Rich
> >>
> >
> >
