Date: Fri, 15 Feb 2019 00:16:39 +0300
From: Alexey Izbyshev <izbyshev@...ras.ru>
To: musl@...ts.openwall.com
Subject: Re: Draft outline of thread-list design

On 2019-02-12 21:26, Rich Felker wrote:
> pthread_join:
> 
> A joiner can no longer see the exit of the individual kernel thread
> via the exit futex (detach_state), so after seeing it in an exiting
> state, it must instead use the thread list to confirm completion of
> exit. The obvious way to do this is by taking a lock on the list and
> immediately releasing it, but the actual taking of the lock can be
> elided by simply doing a futex wait on the lock owner being equal to
> the tid (or an exit sequence number if we prefer that) of the exiting
> thread. In the case of tid reuse collisions, at worst this reverts to
> the cost of waiting for the lock to be released.
> 
Since the kernel wakes only a single thread waiting on the ctid address, 
wouldn't the joiner still need to do a futex wake to unblock other 
potential waiters, even though it doesn't actually take the lock by 
changing *ctid?

> __synccall:
> 
> Take thread list lock. Signal each thread individually with tkill.
> Signaled threads no longer need to enqueue themselves on a list; they
> only need to wait until the signaling thread tells them to run the
> callback, and report back when they have finished it, which can be
> done via a single futex indicating whose turn it is to run.
> (Conceptually, this should not even be needed, since the signaling
> thread can just signal in sequence, but the intent is to be robust
> against spurious signals arriving from outside sources.) The idea is,
> for each thread: (1) set futex value to its tid, (2) send signal, (3)
> wait on futex to become 0 again. Signal handler simply returns if
> futex value != its tid, then runs the callback, then zeros the futex
> and performs a futex wake. Code should be tiny compared to now, and
> need not pull in any dependency on semaphores, PI futexes, etc.

Wouldn't the handler also need to wait until *all* threads have run the 
callback? Otherwise, a thread might continue execution while its uid 
still differs from the uids of some other threads.

In general, within my limited expertise, the design looks simple and 
clean. I'm not sure it's worth optimizing to reduce serialization 
pressure on pthread_create()/pthread_exit(), because creating a large 
number of short-lived threads doesn't look like a good idea anyway.

Alexey
