Date: Tue, 12 Feb 2019 13:26:25 -0500
From: Rich Felker <>
Subject: Draft outline of thread-list design

Here's a draft of the thread-list design, proposed previously as a
better way to do dynamic TLS installation, and now as a solution to
the problem of __synccall's use of /proc/self/task being (apparently
hopelessly) broken:

Goal of simplicity and correctness, not micro-optimizing.

List lock is fully AS-safe. Taking the lock requires that signals be
blocked. It could be an rwlock, where only thread creation and exit
require the write lock, but this is not necessary for correctness --
only a possible optimization, if other highly concurrent operations
needing access would benefit from shared locking.


pthread_create:

Take lock, create new thread, on success add to list, unlock. New
thread has new responsibility of unblocking signals, since it inherits
a fully-blocked signal mask from the parent holding the lock. New
thread should be created with its tid address equal to the thread list
lock's address, so that set_tid_address never needs to be called
later. This simplifies logic that previously had to be aware of detach
state and adjust the exit futex address accordingly to be safe against
clobbering freed memory.


pthread_exit:

Take lock. If this is the last thread, unlock and call exit(0).
Otherwise, do cleanup work, set state to exiting, remove self from
list. List will be unlocked when the kernel task exits. Unfortunately
there can be a nontrivial (non-constant) amount of cleanup work to do
if the thread left locks held, but since this should not happen in
correct code, it probably doesn't matter.

pthread_kill, pthread_[gs]etsched(param|prio):

These could remain as they are (which would require keeping the kill
lock separate in pthread_exit, not described above), or could be
modified to use the global thread list lock. The former optimizes
these functions slightly; the latter optimizes thread exit (by
reducing the number of locks involved).


pthread_join:

A joiner can no longer see the exit of the individual kernel thread
via the exit futex (detach_state), so after seeing it in an exiting
state, it must instead use the thread list to confirm completion of
exit. The obvious way to do this is by taking a lock on the list and
immediately releasing it, but the actual taking of the lock can be
elided by simply doing a futex wait on the lock owner being equal to
the tid (or an exit sequence number if we prefer that) of the exiting
thread. In the case of tid reuse collisions, at worst this reverts to
the cost of waiting for the lock to be released.


dlopen:

Take thread list lock in place of __inhibit_ptc. Thread list can
subsequently be used to install new DTLS in all existing threads, and
__tls_get_addr/tlsdesc functions can be streamlined.


__synccall:

Take thread list lock. Signal each thread individually with tkill.
Signaled threads no longer need to enqueue themselves on a list; they
only need to wait until the signaling thread tells them to run the
callback, and report back when they have finished it, which can be
done via a single futex indicating whose turn it is to run.
(Conceptually, this should not even be needed, since the signaling
thread can just signal in sequence, but the intent is to be robust
against spurious signals arriving from outside sources.) The idea is,
for each thread: (1) set futex value to its tid, (2) send signal, (3)
wait on the futex to become 0 again. The signal handler simply returns
if the futex value != its tid; otherwise it runs the callback, then
zeros the futex and performs a futex wake. The code should be tiny
compared to now, and
need not pull in any dependency on semaphores, PI futexes, etc.
