Date: Sat, 23 Mar 2013 22:57:51 -0400
From: Zvi Gilboa <zg7s@...rvices.virginia.edu>
To: <musl@...ts.openwall.com>
Subject: Re: Difficulty emulating F_DUPFD_CLOEXEC

On 03/23/2013 10:33 PM, Rich Felker wrote:
> On Sat, Mar 23, 2013 at 10:27:14PM -0400, Zvi Gilboa wrote:
>> On 03/23/2013 10:17 PM, Rich Felker wrote:
>>> On Sat, Mar 23, 2013 at 10:10:10PM -0400, Zvi Gilboa wrote:
>>>>> This uglifies fcntl.c a bit more, but I think it works. Does the above
>>>>> reasoning make sense? Any other ideas?
>>>> In the hope that this matches the project's spirit... how about
>>>> running these tests during the build, and having a script (or a simple
>>>> test program) #define whether the target architecture supports
>>>> F_DUPFD_CLOEXEC or not?  Potentially, this test could be added at
>>>> the very end of alltypes.h.sh
>>> It's not a matter of the architecture. It's a matter of the kernel
>>> version. F_DUPFD_CLOEXEC was not available until late in the 2.6
>>> series, and musl aims to "mostly work" even with early 2.6, and to
>>> "partly work" (at least for single-threaded programs that don't use
>>> modern features) even on 2.4. For dynamic linking, it could make sense
>>> to have a slimmer version of libc.so that only supports up-to-date
>>> kernels, but for static linking, it's really frustrating to have
>>> binaries that break on older kernels, even if it is the broken
>>> kernel's fault.
>>>
>>> If the lack of these features were just breaking _apps_ that use them,
>>> it would be one thing, but several of the very-new atomic
>>> close-on-exec interfaces are needed internally in musl for some core
>>> functionality -- things like dns lookups, popen, system, etc. Thus
>>> failure to emulate them when the kernel doesn't have working versions
>>> could badly break even "simple" apps that would otherwise be expected
>>> to work even on old kernels.
>> .... thanks, that makes perfect sense.  As a second-best try: would
>> it make sense to run the long feature test just once, during
>> startup (and save its result to some global variable), instead of
>> inside fcntl.c?
> The long test only runs when the fcntl syscall returns -EINVAL. So it
> should never happen at all on a modern kernel with correct usage. The
> tradeoff for saving the result of the test is that, by doing so, you
> would get slightly improved performance on old kernels that lack
> F_DUPFD_CLOEXEC, or when making a stupid fcntl call that would return
> EINVAL even on a new kernel, but you would add more global data.
>
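
(For concreteness, the fallback being described looks roughly like the
sketch below.  This is only an illustration, not the actual fcntl.c
code: the helper name is made up, and the real implementation also has
to tell an EINVAL meaning "old kernel without F_DUPFD_CLOEXEC" apart
from an EINVAL caused by bad arguments, i.e. the "long test" mentioned
above, which this sketch omits.)

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Illustrative helper, not musl code.  Try the atomic F_DUPFD_CLOEXEC
 * first; if the kernel rejects the command with EINVAL, as old kernels
 * do for unknown fcntl commands, fall back to F_DUPFD followed by
 * F_SETFD.  The fallback leaves a window in which the new descriptor
 * is not close-on-exec. */
static int dupfd_cloexec(int fd, int minfd)
{
	int newfd = fcntl(fd, F_DUPFD_CLOEXEC, minfd);
	if (newfd >= 0 || errno != EINVAL)
		return newfd;

	newfd = fcntl(fd, F_DUPFD, minfd);
	if (newfd < 0)
		return -1;
	if (fcntl(newfd, F_SETFD, FD_CLOEXEC) < 0) {
		close(newfd);
		return -1;
	}
	return newfd;
}
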
> One byte of data should not really matter, but in a sense it does
> because you have to worry that the linking order might put it in the
> middle of an otherwise-unused page and thus increase the memory usage
> of the whole process by a page. This is actually an issue we need to
> address -- libc.so's memory usage is a lot higher than it should be
> right now, because the global vars that get touched by EVERY process
> at startup time are spread out across several pages. Ordering things
> so that they all end up in the same page would fix it, but I'm still
> looking for the cleanest way to do that..

... could a structure holding all the global variables be considered a 
clean way to achieve this?  That would obviously require some code 
changes (adding an extern declaration of the structure to the relevant 
modules, and referring to global_vars.var_one instead of just var_one), 
but it would at least "guarantee" that they are all kept together on 
the same page (or two or three)...  Roughly along the lines of the 
sketch below.
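
(Minimal sketch only; global_vars and var_one are just the placeholder
names from the paragraph above, not anything taken from the musl tree.)

/* global_vars.h */
struct global_vars {
	int var_one;
	int var_two;
	/* ... every other global that gets touched by every process
	 * at startup ... */
};
extern struct global_vars global_vars;

/* global_vars.c */
#include "global_vars.h"
struct global_vars global_vars;

/* Call sites then use global_vars.var_one instead of a standalone
 * object var_one, so the linker places all of these fields
 * contiguously and they end up on a single page (or very few). */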

>
> Anyway, I think it's best not to save the results of fallback tests
> like this. There's also an issue of making sure the saved result is
> properly synchronized, which I haven't even touched on yet..
>
> Rich
