Date: Thu, 14 Feb 2019 17:32:24 -0500
From: Rich Felker <dalias@...c.org>
To: Alexey Izbyshev <izbyshev@...ras.ru>
Cc: musl@...ts.openwall.com
Subject: Re: Draft outline of thread-list design

On Fri, Feb 15, 2019 at 12:16:39AM +0300, Alexey Izbyshev wrote:
> On 2019-02-12 21:26, Rich Felker wrote:
> >pthread_join:
> >
> >A joiner can no longer see the exit of the individual kernel thread
> >via the exit futex (detach_state), so after seeing it in an exiting
> >state, it must instead use the thread list to confirm completion of
> >exit. The obvious way to do this is by taking a lock on the list and
> >immediately releasing it, but the actual taking of the lock can be
> >elided by simply doing a futex wait on the lock owner being equal to
> >the tid (or an exit sequence number if we prefer that) of the exiting
> >thread. In the case of tid reuse collisions, at worst this reverts to
> >the cost of waiting for the lock to be released.
> >
> Since the kernel wakes only a single thread waiting on ctid address,
> wouldn't the joiner still need to do a futex wake to unblock other
> potential waiters even if it doesn't actually take the lock by
> changing *ctid?

I'm not sure. If the kernel performs just a single wake rather than a
broadcast, then yes, but only if the joiner actually waited. If it
observed the lock word already unequal to the exiting thread's tid
without ever performing a futex wait, it doesn't need to do a futex
wake.
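
To make that concrete, here's a rough sketch of the join-side step,
with hypothetical names (thread_list_lock holding the owner's tid,
raw futex syscalls), barriers omitted, and assuming the exiting
thread's ctid points at the lock word so the kernel's exit-time clear
doubles as the unlock:

    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    static volatile int thread_list_lock;  /* owner's tid, 0 = unlocked */

    /* The exiting thread holds the list lock until the kernel clears
     * its ctid and wakes one waiter, so waiting for the lock word to
     * move off its tid is equivalent to taking and releasing the lock
     * without any lock traffic. */
    static void join_sync(int exiting_tid)
    {
        if (thread_list_lock != exiting_tid) return; /* no wait, no wake */
        while (thread_list_lock == exiting_tid)
            syscall(SYS_futex, &thread_list_lock, FUTEX_WAIT_PRIVATE,
                    exiting_tid, 0);
        /* We may have consumed the kernel's single wake; chain it so
         * other waiters on the lock word aren't left stuck. If a new
         * thread reused the tid and holds the lock, the loop above
         * just degenerates to waiting for that lock release. */
        syscall(SYS_futex, &thread_list_lock, FUTEX_WAKE_PRIVATE, 1);
    }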

> 
> >__synccall:
> >
> >Take thread list lock. Signal each thread individually with tkill.
> >Signaled threads no longer need to enqueue themselves on a list; they
> >only need to wait until the signaling thread tells them to run the
> >callback, and report back when they have finished it, which can be
> >done via a single futex indicating whose turn it is to run.
> >(Conceptually, this should not even be needed, since the signaling
> >thread can just signal in sequence, but the intent is to be robust
> >against spurious signals arriving from outside sources.) The idea is,
> >for each thread: (1) set futex value to its tid, (2) send signal, (3)
> >wait on futex to become 0 again. The signal handler simply returns if
> >the futex value != its tid; otherwise it runs the callback, then zeros
> >the futex and performs a futex wake. Code should be tiny compared to
> >now, and
> >need not pull in any dependency on semaphores, PI futexes, etc.
> 
> Wouldn't the handler also need to wait until *all* threads run the
> callback? Otherwise, a thread might continue execution while its uid
> still differs from uids of some other threads.

Yes, that's correct. We actually do need the current approach: first
capture all the threads in a signal handler, then make them run the
callback, then release them to return, in three rounds. No
application code should be able to run while the process is in a
partially-mutated state.
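
For illustration, a minimal sketch of the per-thread handshake from
the outline, just to pin down the futex protocol. All names here are
hypothetical, SIGRTMIN stands in for whatever reserved signal would
actually be used, the handler is assumed to be installed already, and
the three-round capture/run/release structure is reduced to a comment:

    #include <signal.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    static volatile int turn;           /* tid whose turn it is; 0 = done */
    static void (*callback)(void *);
    static void *cb_arg;

    static void handler(int sig)
    {
        (void)sig;
        if (turn != syscall(SYS_gettid)) return; /* spurious signal */
        /* Full design: block here until every thread is captured,
         * run the callback, then block again until all are released. */
        callback(cb_arg);
        turn = 0;                                /* our turn is over */
        syscall(SYS_futex, &turn, FUTEX_WAKE_PRIVATE, 1);
    }

    /* Signaling thread, holding the thread list lock, per thread: */
    static void run_on(int tid)
    {
        turn = tid;                              /* (1) mark whose turn */
        syscall(SYS_tkill, tid, SIGRTMIN);       /* (2) signal it */
        while (turn == tid)                      /* (3) wait for 0 */
            syscall(SYS_futex, &turn, FUTEX_WAIT_PRIVATE, tid, 0);
    }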

> In general, as far as my limited expertise goes, the design looks
> simple and clean. I'm not sure whether it's worth optimizing to
> reduce serialization pressure on pthread_create()/pthread_exit(),
> because creating a large number of short-lived threads doesn't look
> like a good idea anyway.

Yes. One thing I did notice is that the window where pthread_create
has to hold a lock to prevent new dlopen from happening is a lot
larger than the window where the thread list needs to be locked, and
it contains mmap/mprotect. I think we should add a new "DTLS lock"
here that's held for the whole time, with the protocol that if you
need both the DTLS lock and the thread list lock, you take them in
that order (dlopen would also need them both). This reduces the
thread list lock window to just the __clone call and the list update.
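
In sketch form, with hypothetical lock names and a trivial
test-and-set spinlock standing in for the real primitives, the
ordering protocol would look like:

    /* Only the ordering matters: whoever needs both locks takes the
     * DTLS lock first. Names here are illustrative, not final. */
    static volatile int dtls_lock, tl_lock;

    static void lock(volatile int *l)   { while (__sync_lock_test_and_set(l, 1)); }
    static void unlock(volatile int *l) { __sync_lock_release(l); }

    void create_thread_sketch(void)
    {
        lock(&dtls_lock);   /* whole window: excludes dlopen; covers
                             * mmap/mprotect and new-thread DTLS setup */
        /* ... map stack, copy TLS/DTLS images ... */
        lock(&tl_lock);     /* narrow window: just clone + list insert */
        /* ... __clone(...), insert the new thread into the list ... */
        unlock(&tl_lock);
        unlock(&dtls_lock);
    }

    void dlopen_sketch(void)
    {
        lock(&dtls_lock);   /* same order as above, so no deadlock */
        lock(&tl_lock);
        /* ... install the new module's DTLS for every listed thread ... */
        unlock(&tl_lock);
        unlock(&dtls_lock);
    }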

Rich
