Date: Tue, 27 May 2014 15:01:34 -0400
From: Rich Felker <>
Subject: Re: [UGLY PATCH v3] Support for no-legacy-syscalls archs

On Tue, May 27, 2014 at 01:29:54PM +0200, Szabolcs Nagy wrote:
> * Rich Felker <> [2014-05-27 01:26:25 -0400]:
> > In the case of epoll, etc. I reworked the code to always prefer the
> > new syscalls and only fallback to the old ones if (1) they exist, and
> > (2) the new one returned ENOSYS. signalfd already has fallback code
> > that handles FD_CLOEXEC as best it can when the new syscall is not
> > available, but the others seem to be lacking the proper fallback code
> > so I should perhaps add that too.
> SOCK_CLOEXEC and SOCK_NONBLOCK fallbacks for socketpair are missing too

I thought I'd already added them, but you're right.
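For reference, a minimal sketch of such a socketpair fallback, assuming the kernel signals unsupported type flags with EINVAL (hypothetical code, not what musl ships, and note it has exactly the fork/exec race discussed below, since the flags are applied after the fds already exist):

```c
#include <sys/socket.h>
#include <fcntl.h>
#include <errno.h>

/* Hypothetical fallback: try the flags directly; if the kernel is too
 * old and rejects them with EINVAL, retry without them and emulate
 * SOCK_CLOEXEC/SOCK_NONBLOCK via fcntl on each fd afterwards. */
int socketpair_fallback(int domain, int type, int proto, int sv[2])
{
	int flags = type & (SOCK_CLOEXEC | SOCK_NONBLOCK);
	int r = socketpair(domain, type, proto, sv);
	if (r < 0 && errno == EINVAL && flags) {
		r = socketpair(domain, type & ~flags, proto, sv);
		if (r < 0) return r;
		for (int i = 0; i < 2; i++) {
			if (flags & SOCK_CLOEXEC)
				fcntl(sv[i], F_SETFD, FD_CLOEXEC);
			if (flags & SOCK_NONBLOCK)
				fcntl(sv[i], F_SETFL, O_NONBLOCK);
		}
	}
	return r;
}
```

On a modern kernel the first call succeeds and the EINVAL path is never taken; either way the resulting fds carry the requested flags.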

Perhaps, if we're emulating the fallback cases, we should go ahead and
make them race-free. This can be done with a combination of locking
and signal masking that prevents fork, posix_spawn, and exec* during
the interval where the close-on-exec flag is not yet set on the new
fd. It would basically look like:

[do fallback work here]

This would not work if the fallback work were supposed to be
interruptible by signals; I'm not sure if any such cases would need to
be considered.

Our fork and posix_spawn implementations already run with all signals
blocked, so all that would need to be added to them is the locking.
Adding the code to exec* would incur some additional cost, and I'm
actually not sure how to do it right without the new program
inheriting the all-blocked signal mask.

> > +#ifdef SYS_pausex
> > +		__syscall(SYS_pause);
> > +#else
> > +		__syscall(SYS_futex, &lock, FUTEX_WAIT, 1, 0);
> > +#endif
> pausex typo

Actually it was my cheap test of the fallback case that I forgot to
remove. :) Fixed. BTW, do you have any better ideas for a fallback
here? Maybe some sleep variant? I just want something that will avoid
spinning and consuming CPU while we wait for exit to finish.

