Date: Wed, 10 Aug 2016 16:36:02 -0400
From: Rich Felker <dalias@...ifal.cx>
To: Rob Landley <rob@...dley.net>
Cc: musl@...ts.openwall.com
Subject: Re: sysconf(_SC_ARG_MAX) broken in musl.

On Wed, Aug 10, 2016 at 02:28:22PM -0500, Rob Landley wrote:
> I draw your attention to the end of:
> 
> http://lists.landley.net/pipermail/toybox-landley.net/2016-August/008592.html
> 
> tl;dr: in 2007 linux switched ARG_MAX to 1/4 the stack size ulimit (with
> a minimum of 131072, and no, I dunno what happens if you try to launch a
> program with lots of argument data when stack size is < 131072).
> 
> Current glibc sysconf() checks the getrlimit() value and returns 1/4 of
> it (or 131072). Musl is returning the hardwired 131072 value from before
> 2007.

I'm aware of that change in the kernel that allows larger
argument/environment sizes contingent on resource limits, but I'm not
clear that there's any good reason for ARG_MAX/sysconf(_SC_ARG_MAX) to
reflect this. My understanding of the normal usage pattern for ARG_MAX
is that programs (e.g. xargs, find) that might be passing a large
number of arguments to external programs use it to ensure that the
exec will not fail due to sending too many at a time. For this purpose
using the fixed 128k limit works perfectly well and reduces memory
usage and the risk of failure to exec due to ENOMEM.

Has anyone else looked into the issue enough to have a good opinion on
it, or at least additional information that would add to the discussion?

Rich
