Date: Thu, 5 Jul 2018 12:32:24 +0530
From: ritesh sonawane <rdssonawane2317@...il.com>
To: musl@...ts.openwall.com
Subject: Re: Changing MMAP_THRESHOLD in malloc() implementation.

>If this is a custom architecture, have you considered using variable
>pagesize like x86 and others? Unconditionally using large/huge pages
>for everything seems like a really, really bad idea. Aside from
>wasting memory, it makes COW latency spikes really high (memcpy a
>whole 2MB or 64MB on fault) and even worse for paging in mmapped files
>from a block-device-backed filesystem.

Currently our architecture does not support COW or demand paging. The
page size is decided at compile time only; it cannot be passed to the
mmap() syscall, as on x86, to change the page size at run time for new
memory requests.


On Thu, Jul 5, 2018 at 12:22 PM, ritesh sonawane <rdssonawane2317@...il.com>
wrote:

> > How does this happen? The behavior you should see is just rounding up
> > of the request to a multiple of the page size, not scaling of the
> > request. Maybe I don't understand what you're saying here.
>
> In our case, if the threshold value is 224KB, then all requests larger
> than 224KB will allocate memory using mmap() only. In that case the
> size will be aligned to the page boundary (64MB). So if an application
> requests 225KB multiple times, then for every 225KB request one full
> page (64MB) will be allocated. That means if a user requests 225KB five
> times, according to the user (225KB x 5 = 1125KB) is consumed, but the
> actual memory consumed is (64MB x 5 = 320MB).
>
>
> On Wed, Jul 4, 2018 at 8:26 PM, Rich Felker <dalias@...c.org> wrote:
>
>> On Wed, Jul 04, 2018 at 12:35:02PM +0530, ritesh sonawane wrote:
>> > sir, the above statistics are with a 64MB page size. We are using a
>> > system which has 2MB (large) and 64MB (huge) pages.
>>
>> If this is a custom architecture, have you considered using variable
>> pagesize like x86 and others? Unconditionally using large/huge pages
>> for everything seems like a really, really bad idea. Aside from
>> wasting memory, it makes COW latency spikes really high (memcpy a
>> whole 2MB or 64MB on fault) and even worse for paging in mmapped files
>> from a block-device-backed filesystem.
>>
>> Rich
>>
>> > On Wed, Jul 4, 2018 at 10:54 AM, ritesh sonawane <
>> rdssonawane2317@...il.com>
>> > wrote:
>> >
>> > > Thank you very much for instant reply..
>> > >
>> > > Yes sir, it is wasting memory for each shared library. But the
>> > > memory wastage is even worse when a program requests memory with a
>> > > size larger than 224KB (the threshold value).
>> > >
>> > > ->If a program requests 1GB per request, it can use 45GB at most.
>> > > ->If a program requests 512MB per request, it can use 41.5GB at most.
>> > > ->If a program requests 225KB per request, it can use about 167MB at most.
>> > >
>> > > As we ported musl-1.1.14 to our architecture, we are bound to make
>> > > changes in the same base code. We have increased MMAP_THRESHOLD to
>> > > 1GB and also changed the calculation for the bin index. After that
>> > > we observed an improvement in memory utilization, i.e. for a 225KB
>> > > request size the usable memory is 47.6GB.
>> > >
>> > > But now we are facing a problem in multi-threaded applications, as
>> > > we haven't changed the function pretrim(): there are some
>> > > hard-coded values like '40' and '3' used, and we are unable to
>> > > understand how these values were decided.
>> > >
>> > > static int pretrim(struct chunk *self, size_t n, int i, int j)
>> > > {
>> > >         size_t n1;
>> > >         struct chunk *next, *split;
>> > >
>> > >         /* We cannot pretrim if it would require re-binning. */
>> > >         if (j < 40) return 0;
>> > >         if (j < i+3) {
>> > >                 if (j != 63) return 0;
>> > >                 n1 = CHUNK_SIZE(self);
>> > >                 if (n1-n <= MMAP_THRESHOLD) return 0;
>> > >         } else {
>> > >                 n1 = CHUNK_SIZE(self);
>> > >         }
>> > >  .....
>> > >  .....
>> > > ......
>> > > }
>> > >
>> > > Can we get any clue as to how these values were decided? It would
>> > > be very helpful for us.
>> > >
>> > > Best Regard,
>> > > Ritesh Sonawane
>> > >
>> > > On Tue, Jul 3, 2018 at 8:13 PM, Rich Felker <dalias@...c.org> wrote:
>> > >
>> > >> On Tue, Jul 03, 2018 at 12:58:04PM +0530, ritesh sonawane wrote:
>> > >> > Hi All,
>> > >> >
>> > >> > We are using the musl-1.1.14 version for our architecture. It
>> > >> > has a page size of 2MB.
>> > >> > Due to the low threshold value there is more memory wastage, so
>> > >> > we want to change the value of MMAP_THRESHOLD.
>> > >> >
>> > >> > Can anyone please guide us on which factors need to be
>> > >> > considered when changing this threshold value?
>> > >>
>> > >> It's not a parameter that can be changed freely; it is linked to
>> > >> the scale of bin sizes. There is no framework to track and reuse
>> > >> freed chunks which are larger than MMAP_THRESHOLD, so you'd be
>> > >> replacing recoverable waste from page granularity with
>> > >> unrecoverable waste from the inability to reuse these larger freed
>> > >> chunks, except by breaking them up into pieces to satisfy smaller
>> > >> requests.
>> > >>
>> > >> I may look into handling this better when replacing musl's malloc at
>> > >> some point, but if your system forces you to use ridiculously large
>> > >> pages like 2MB, you've basically committed to wasting huge amounts of
>> > >> memory anyway (at least 2MB for each shared library in each
>> > >> process)...
>> > >>
>> > >> With musl git-master and future releases, you have the option to link
>> > >> a malloc replacement library that might be a decent solution to your
>> > >> problem.
>> > >>
>> > >> Rich
>> > >>
>> > >
>> > >
>>
>
>
