Date: Thu, 7 Feb 2019 13:36:26 -0500
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Cc: Alexey Izbyshev <izbyshev@...ras.ru>
Subject: Re: __synccall: deadlock and reliance on racy /proc/self/task

On Sun, Feb 03, 2019 at 12:40:39AM +0300, Alexey Izbyshev wrote:
> Hello!
> 
> I've discovered that setuid() deadlocks on a simple stress test
> (attached: test-setuid.c) that creates threads concurrently with
> setuid(). (Tested on 1.1.21 on x86_64, kernel 4.15.x and 4.4.x). The
> gdb output:
> 
> (gdb) info thr
>   Id   Target Id         Frame
> * 1    LWP 23555 "a.out" __synccall (func=func@...ry=0x402a7d
> <do_setxid>, ctx=ctx@...ry=0x7fffea85b17c)
>     at ../../musl/src/thread/synccall.c:144
>   2    LWP 23566 "a.out" __syscall () at
> .../../musl/src/internal/x86_64/syscall.s:13
> (gdb) bt
> #0  __synccall (func=func@...ry=0x402a7d <do_setxid>,
> ctx=ctx@...ry=0x7fffea85b17c) at
> ../../musl/src/thread/synccall.c:144
> #1  0x0000000000402af9 in __setxid (nr=<optimized out>,
> id=<optimized out>, eid=<optimized out>, sid=<optimized out>)
>     at ../../musl/src/unistd/setxid.c:33
> #2  0x00000000004001c8 in main ()
> (gdb) thr 2
> (gdb) bt
> #0  __syscall () at ../../musl/src/internal/x86_64/syscall.s:13
> #1  0x00000000004046b7 in __timedwait_cp
> (addr=addr@...ry=0x7fe99023475c, val=val@...ry=-1, clk=clk@...ry=0,
> at=at@...ry=0x0,
>     priv=<optimized out>) at ../../musl/src/thread/__timedwait.c:31
> #2  0x0000000000404591 in sem_timedwait
> (sem=sem@...ry=0x7fe99023475c, at=at@...ry=0x0) at
> ../../musl/src/thread/sem_timedwait.c:23
> #3  0x00000000004044e1 in sem_wait (sem=sem@...ry=0x7fe99023475c) at
> .../../musl/src/thread/sem_wait.c:5
> #4  0x00000000004037ae in handler (sig=<optimized out>) at
> .../../musl/src/thread/synccall.c:43
> #5  <signal handler called>
> #6  __clone () at ../../musl/src/thread/x86_64/clone.s:17
> #7  0x00000000004028ec in __pthread_create (res=0x7fe990234eb8,
> attrp=0x606260 <attr>, entry=0x400300 <thr>, arg=0x0)
>     at ../../musl/src/thread/pthread_create.c:286
> #8  0x0000000000400323 in thr ()
> 
> The main thread spins in __synccall with futex() always returning
> ETIMEDOUT (line 139) and "head" is NULL, while handler() in the
> second thread is blocked on sem_wait() (line 40). So it looks like
> handler() updated the linked list, but the main thread doesn't see
> the update.

I don't understand how this state is reachable. Even if it were
reached once by failing to see the update to head, the update should
eventually be seen when retrying after timeout.
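
For reference, the waiting side has roughly this shape (a simplified
sketch with made-up names and timeout, not the actual synccall.c):
the caller re-checks the shared state after every futex timeout, so
an update that was missed once should only cost an extra iteration,
not a deadlock.

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <time.h>
    #include <unistd.h>

    /* Hypothetical shared state that the signal handlers would update. */
    static volatile int done_count;

    static void wait_for_handlers(int expected)
    {
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 10000000 }; /* 10ms */
        for (;;) {
            if (done_count == expected) break;
            /* Sleep only while done_count is unchanged; a timeout
             * (ETIMEDOUT) just means "go around and re-check". */
            syscall(SYS_futex, &done_count, FUTEX_WAIT, done_count,
                    &ts, 0, 0);
        }
    }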

> For some reason __synccall accesses the list without a barrier (line
> 120), though I don't see why one wouldn't be necessary for correct
> observability of head->next. However, I'm testing on x86_64, so
> acquire/release semantics works without barriers.

The formal intent in musl is that all a_* are full seq_cst barriers.
On x86[_64] this used to not be the case; we just used a normal store,
but that turned out to be broken because in some places (and
apparently here in __synccall) there was code that depended on a_store
having acquire semantics too. See commits
3c43c0761e1725fd5f89a9c028cbf43250abb913 and
5a9c8c05a5a0cdced4122589184fd795b761bb4a.

If not for this fix, I could see this being related (but again, it
should see it after timeout anyway). But since the barrier is there
now, it shouldn't happen.
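
To make the distinction concrete, here is the general x86_64 idiom (a
sketch only, not a copy of musl's atomic.h): a plain store may be
reordered against a later load, so a primitive that is supposed to be
a full barrier has to pair the store with a fence or use a
lock-prefixed instruction.

    /* Plain store: a later load in this thread may be satisfied before
     * the store becomes visible to other threads (x86 permits
     * store->load reordering). */
    static inline void plain_store(volatile int *p, int v)
    {
        __asm__ __volatile__( "mov %1, %0" : "=m"(*p) : "r"(v) : "memory" );
    }

    /* Store followed by a full barrier: one common way to get the
     * seq_cst behavior the a_* primitives are meant to have. */
    static inline void barrier_store(volatile int *p, int v)
    {
        __asm__ __volatile__( "mov %1, %0 ; mfence" : "=m"(*p) : "r"(v) : "memory" );
    }

    /* With a plain store, a sequence like
     *     plain_store(&flag, 1);
     *     x = shared;
     * can have the load of shared satisfied before the store of flag is
     * visible to other threads -- exactly the store->load reordering at
     * issue here. */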

> I thought that a possible explanation is that handler() got blocked
> in a *previous* setuid() call, but we didn't notice its list entry
> at that time and then overwrote "head" with NULL on the current call
> to setuid(). This seems to be possible because of the following.
> 
> 1) There is a "presignalling" phase, where we may send a signal to
> *any* thread. Moreover, the number of signals we sent may be *more*
> than the number of threads because some threads may exit while we're
> in the loop. As a result, SIGSYNCCALL may be pending after this
> loop.
> 
>     /* Initially send one signal per counted thread. But since we can't
>      * synchronize with thread creation/exit here, there could be too
>      * few signals. This initial signaling is just an optimization, not
>      * part of the logic. */
>     for (i=libc.threads_minus_1; i; i--)
>         __syscall(SYS_kill, pid, SIGSYNCCALL);
> 
> 2) __synccall relies on /proc/self/task to get the list of *all*
> threads. However, since new threads can be created concurrently
> while we read /proc (if some threads were in pthread_create() after
> the __block_new_threads check when we set it to 1), I thought that
> /proc may miss some threads (that's actually why I started the whole
> exercise in the first place).
> 
> So, if we miss a thread in (2) but it's created and receives the
> pending SIGSYNCCALL shortly after we exit the /proc loop (but before
> we reset the signal handler), handler() will run in that thread
> concurrently with us, and we may miss its list entry if the timing
> is right.

This seems plausible.
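
For concreteness, the /proc scan being described is roughly the
following (a sketch, not the actual __synccall code; the signal
number is a placeholder). The directory listing is only a snapshot,
so a thread created while the loop runs may never be signalled from
here, yet can still pick up a SIGSYNCCALL left pending by the
presignalling loop:

    #include <dirent.h>
    #include <signal.h>
    #include <stdlib.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define SIGSYNCCALL SIGRTMIN   /* placeholder number for this sketch */

    static void signal_listed_tasks(void)
    {
        DIR *d = opendir("/proc/self/task");
        if (!d) return;
        int self = syscall(SYS_gettid);
        struct dirent *de;
        while ((de = readdir(d))) {
            if (de->d_name[0] == '.') continue;
            int tid = atoi(de->d_name);
            if (tid == self) continue;
            /* A thread created after this snapshot was taken may not
             * show up in the listing at all, so it is never signalled
             * from this loop -- but it can still have the pending
             * process-directed SIGSYNCCALL delivered to it. */
            syscall(SYS_tgkill, getpid(), tid, SIGSYNCCALL);
        }
        closedir(d);
    }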

> I've checked that if I remove the "presignalling" loop, the deadlock
> disappears (at least, I could run the test for several minutes
> without any problem).
> 
> Of course, the larger problem remains: if we may miss some threads
> because of /proc, we may fail to run the setuid() syscall in those
> threads. And that indeed happens easily in my second test (attached:
> test-setuid-mismatch.c; expected to be run as a suid binary; note
> that I tested both with and without "presignalling").

Does it work if we force two iterations of the readdir loop with no
tasks missed, rather than just one, to catch the case of missed
concurrent additions? I'm not sure. But all this makes me really
uncomfortable with the current approach.
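
Concretely, something along these lines (a loose sketch; the tid set
is a toy and the signal number is a placeholder): rescan
/proc/self/task, signal anything not seen before, and stop only after
two consecutive passes that add nothing. Whether that is actually
sufficient given the kernel's readdir semantics is the part I'm not
sure about.

    #include <dirent.h>
    #include <signal.h>
    #include <stdlib.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define SIGSYNCCALL SIGRTMIN      /* placeholder, as above */

    static int seen[4096], nseen;     /* toy tid set, not the real thing */

    static int newly_seen(int tid)
    {
        for (int i = 0; i < nseen; i++)
            if (seen[i] == tid) return 0;
        if (nseen < 4096) seen[nseen++] = tid;
        return 1;
    }

    static void scan_until_stable(void)
    {
        int self = syscall(SYS_gettid), clean = 0;
        while (clean < 2) {
            int added = 0;
            DIR *d = opendir("/proc/self/task");
            if (!d) break;
            struct dirent *de;
            while ((de = readdir(d))) {
                if (de->d_name[0] == '.') continue;
                int tid = atoi(de->d_name);
                if (tid == self || !newly_seen(tid)) continue;
                syscall(SYS_tgkill, getpid(), tid, SIGSYNCCALL);
                added = 1;
            }
            closedir(d);
            /* require two consecutive passes that found nothing new */
            clean = added ? 0 : clean + 1;
        }
    }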

> Both tests run on glibc (2.27) without any problem.

I think glibc has a different problem: there's a window at thread exit
where setxid can return success without having run the id change in
the exiting thread. In this case, assuming an attacker has code
execution in the process after dropping root, they can mmap malicious
code over top of the thread exit code and obtain code execution as
root.

> Would it be
> possible to fix __synccall in musl? Thanks!

Yes, but I don't think it's easy.

I think we might really need to adopt the design I proposed a while
back with a global thread list that's unlocked atomically with respect
to thread exit, using the exit futex. This is somewhat
expensive/synchronizing, but it makes a lot of things easier/safer,
and optimizes access to dynamic TLS.
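
Roughly, the lock word protecting the list doubles as the exiting
thread's clear-tid address (sketch below with invented names,
glossing over the real exit path): the kernel zeroes the word and
issues the futex wake only once the thread can no longer run user
code, so unlinking from the list and thread death look like a single
event to anyone taking the lock.

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static volatile int thread_list_lock;   /* 0 = free, else holder's tid */

    static void tl_lock(int self_tid)
    {
        int v;
        while ((v = __sync_val_compare_and_swap(&thread_list_lock, 0, self_tid)))
            syscall(SYS_futex, &thread_list_lock, FUTEX_WAIT, v, 0, 0, 0);
    }

    static void tl_unlock(void)
    {
        __sync_lock_release(&thread_list_lock);            /* store 0 */
        syscall(SYS_futex, &thread_list_lock, FUTEX_WAKE, 1, 0, 0, 0);
    }

    /* Conceptual exit path: instead of calling tl_unlock(), a dying
     * thread would do
     *
     *     tl_lock(self_tid);
     *     ... unlink itself from the list ...
     *     syscall(SYS_set_tid_address, &thread_list_lock);
     *     syscall(SYS_exit, 0);
     *
     * so the kernel itself clears the lock word and performs the futex
     * wake only after the thread is done running user code. */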

Rich
