Date: Sun, 03 Feb 2019 00:40:39 +0300
From: Alexey Izbyshev <izbyshev@...ras.ru>
To: musl@...ts.openwall.com
Subject: __synccall: deadlock and reliance on racy /proc/self/task

Hello!

I've discovered that setuid() deadlocks on a simple stress test 
(attached: test-setuid.c) that creates threads concurrently with 
setuid(). (Tested with musl 1.1.21 on x86_64, kernels 4.15.x and 
4.4.x.) The gdb output:

(gdb) info thr
   Id   Target Id         Frame
* 1    LWP 23555 "a.out" __synccall (func=func@entry=0x402a7d <do_setxid>, ctx=ctx@entry=0x7fffea85b17c)
     at ../../musl/src/thread/synccall.c:144
   2    LWP 23566 "a.out" __syscall () at ../../musl/src/internal/x86_64/syscall.s:13
(gdb) bt
#0  __synccall (func=func@entry=0x402a7d <do_setxid>, ctx=ctx@entry=0x7fffea85b17c)
     at ../../musl/src/thread/synccall.c:144
#1  0x0000000000402af9 in __setxid (nr=<optimized out>, id=<optimized out>, eid=<optimized out>, sid=<optimized out>)
     at ../../musl/src/unistd/setxid.c:33
#2  0x00000000004001c8 in main ()
(gdb) thr 2
(gdb) bt
#0  __syscall () at ../../musl/src/internal/x86_64/syscall.s:13
#1  0x00000000004046b7 in __timedwait_cp (addr=addr@entry=0x7fe99023475c, val=val@entry=-1, clk=clk@entry=0, at=at@entry=0x0, priv=<optimized out>)
     at ../../musl/src/thread/__timedwait.c:31
#2  0x0000000000404591 in sem_timedwait (sem=sem@entry=0x7fe99023475c, at=at@entry=0x0)
     at ../../musl/src/thread/sem_timedwait.c:23
#3  0x00000000004044e1 in sem_wait (sem=sem@entry=0x7fe99023475c)
     at ../../musl/src/thread/sem_wait.c:5
#4  0x00000000004037ae in handler (sig=<optimized out>)
     at ../../musl/src/thread/synccall.c:43
#5  <signal handler called>
#6  __clone () at ../../musl/src/thread/x86_64/clone.s:17
#7  0x00000000004028ec in __pthread_create (res=0x7fe990234eb8, attrp=0x606260 <attr>, entry=0x400300 <thr>, arg=0x0)
     at ../../musl/src/thread/pthread_create.c:286
#8  0x0000000000400323 in thr ()
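
For reference, the stress test has roughly the following shape, 
reconstructed from the backtrace above (a hedged sketch; the attached 
test-setuid.c is the authoritative version):

    #include <pthread.h>
    #include <stdlib.h>
    #include <unistd.h>

    static pthread_attr_t attr;

    /* Each thread spawns a detached successor and exits immediately,
     * so threads are constantly being created and torn down while the
     * main thread loops on setuid(). */
    static void *thr(void *arg)
    {
        pthread_t t;
        pthread_create(&t, &attr, thr, 0);
        return 0;
    }

    int main(void)
    {
        pthread_t t;
        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        pthread_create(&t, &attr, thr, 0);
        for (;;)
            if (setuid(getuid()))  /* uid unchanged, but still runs __synccall */
                abort();
    }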

The main thread spins in __synccall with futex() always returning 
ETIMEDOUT (line 139) while "head" is NULL, and handler() in the second 
thread is blocked on sem_wait() (line 40). So it looks like handler() 
updated the linked list, but the main thread doesn't see the update.

For some reason __synccall accesses the list without a barrier (line 
120), though I don't see why one wouldn't be necessary for correct 
observability of head->next. However, I'm testing on x86_64, where 
plain loads and stores already have acquire/release semantics, so a 
missing barrier can't explain the deadlock.
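
For illustration, the pairing I'd expect in C11 terms is a release 
publish in handler() matched by an acquire load in __synccall 
(hypothetical notation only; musl uses its own atomics, e.g. a_cas_p, 
and the acquire side of the read is what I don't see):

    #include <stdatomic.h>

    struct chain { struct chain *next; /* plus tid and two semaphores */ };

    /* handler() side: the release CAS publishes the fully
     * initialized node. */
    static void push(_Atomic(struct chain *) *head, struct chain *node)
    {
        struct chain *old = atomic_load_explicit(head, memory_order_relaxed);
        do node->next = old;
        while (!atomic_compare_exchange_weak_explicit(head, &old, node,
                memory_order_release, memory_order_relaxed));
    }

    /* __synccall side: the acquire load pairs with the release CAS,
     * making node->next and the node's other fields visible here. */
    static struct chain *snapshot(_Atomic(struct chain *) *head)
    {
        return atomic_load_explicit(head, memory_order_acquire);
    }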

I thought that a possible explanation is that handler() got blocked in a 
*previous* setuid() call, but we didn't notice its list entry at that 
time and then overwrote "head" with NULL on the current call to 
setuid(). This seems to be possible because of the following.

1) There is a "presignalling" phase, where we may send a signal to *any* 
thread. Moreover, the number of signals we send may be *greater* than 
the number of threads because some threads may exit while we're in the 
loop. As a result, SIGSYNCCALL may still be pending after this loop.

     /* Initially send one signal per counted thread. But since we can't
      * synchronize with thread creation/exit here, there could be too
      * few signals. This initial signaling is just an optimization, not
      * part of the logic. */
     for (i=libc.threads_minus_1; i; i--)
         __syscall(SYS_kill, pid, SIGSYNCCALL);

2) __synccall relies on /proc/self/task to get the list of *all* 
threads. However, new threads can be created concurrently while we 
read /proc (if some threads were already past the __block_new_threads 
check in pthread_create() when we set it to 1), so I suspected that 
/proc may miss some threads (that's actually why I started this whole 
exercise in the first place).
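
For context, the /proc scan amounts to a directory walk like the 
following (a simplified single-pass sketch using readdir(); musl's 
actual loop uses raw syscalls and is more involved, but the visibility 
problem is the same). A thread whose clone() completes only after 
readdir() has passed the relevant entry is simply never seen:

    #define _GNU_SOURCE
    #include <dirent.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/syscall.h>

    /* Signal every thread currently listed in /proc/self/task. A
     * thread created after its entry would have been read is missed. */
    static void signal_all_threads(int sig)
    {
        DIR *d = opendir("/proc/self/task");
        struct dirent *de;
        if (!d) return;
        while ((de = readdir(d))) {
            if (de->d_name[0] == '.') continue; /* skip "." and ".." */
            pid_t tid = atoi(de->d_name);
            syscall(SYS_tgkill, getpid(), tid, sig);
        }
        closedir(d);
    }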

So, if we miss a thread in (2) but it's created and signalled with a 
pending SIGSYNCCALL shortly after we exit the /proc loop (but before we 
reset the signal handler), handler() will run in that thread 
concurrently with us, and we may miss its list entry if the timing is 
right.

I've checked that if I remove the "presignalling" loop, the deadlock 
disappears (at least, I could run the test for several minutes without 
any problem).

Of course, the larger problem remains: if we may miss some threads 
because of /proc, we may fail to run the setuid() syscall in those 
threads. And that indeed happens easily in my second test (attached: 
test-setuid-mismatch.c; expected to be run as a suid binary; note that 
I tested both with and without "presignalling").
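
The second test is roughly of the following shape (a hedged sketch of 
the idea only; the attached test-setuid-mismatch.c is the 
authoritative version, and it needs to run as a suid binary so that 
the real and effective uid differ initially):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static volatile sig_atomic_t dropped; /* set once setuid() returns */
    static uid_t target;

    /* Checker threads are created continuously, so some are inside
     * pthread_create() exactly when __synccall scans /proc/self/task.
     * Once setuid() has returned, the kernel-side euid of every task
     * must equal the target; a stale value means we were missed. */
    static void *checker(void *arg)
    {
        if (dropped && (uid_t)syscall(SYS_geteuid) != target) {
            fprintf(stderr, "missed thread: euid=%ld\n",
                    syscall(SYS_geteuid));
            _exit(1);
        }
        return 0;
    }

    static void *creator(void *arg)
    {
        pthread_attr_t a;
        pthread_attr_init(&a);
        pthread_attr_setdetachstate(&a, PTHREAD_CREATE_DETACHED);
        for (;;) {
            pthread_t t;
            pthread_create(&t, &a, checker, 0);
        }
    }

    int main(void)
    {
        pthread_t c;
        target = getuid();           /* real uid; differs from euid under suid */
        pthread_create(&c, 0, creator, 0);
        if (setuid(target)) exit(2); /* must change euid in *all* threads */
        dropped = 1;
        sleep(1);                    /* give late checkers time to verify */
        return 0;
    }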

Both tests run on glibc (2.27) without any problem. Would it be possible 
to fix __synccall in musl? Thanks!

(Please CC me when answering; I'm not subscribed to the list.)

Alexey
Attachment: test-setuid.c (text/x-c, 656 bytes)
Attachment: test-setuid-mismatch.c (text/x-c, 1162 bytes)
