Date: Thu, 29 Oct 2020 10:43:00 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: More thoughts on wrapping signal handling

On Thu, Oct 29, 2020 at 03:21:17PM +0100, Szabolcs Nagy wrote:
> * Rich Felker <dalias@...c.org> [2020-10-29 02:34:50 -0400]:
> > In "Re: [musl] Re: [PATCH] Make abort() AS-safe (Bug 26275)."
> > (20201010002612.GC17637@...ghtrain.aerifal.cx,
> > https://www.openwall.com/lists/musl/2020/10/10/1) I raised the
> > longstanding thought of having libc wrap signal handling. This is a
> > little bit of a big hammer for what it was proposed for -- fixing an
> > extremely-rare race between abort and execve -- but today I had a
> > thought about another use of it that's really compelling.
> > 
> > What I noted before was that, by wrapping signal handlers, libc could
> > implement a sort of "rollback" to restart a critical section that was
> > interrupted. However this really only has any use when the critical
> > section has no side effects aside from its final completion, and
> > except for execve where replacement of the process gives the atomic
> > cutoff for rollback, it requires __cp_end-like asm label of the end of
> > the critical section. So it's of limited utility.
> > 
> > However, what's more interesting than restarting the critical section
> > when a signal is received is *allowing it to complete* before handling
> > the signal. This can be implemented by having the wrapper, upon seeing
> > that it interrupted a critical section, save the siginfo_t in TLS and
> > immediately return, leaving signals blocked, without executing the
> > application-installed signal handler. Then, when leaving the critical
> > section, the unlock function can see the saved siginfo_t and call the
> > application's signal handler. Effectively, it's as if the signal were
> > just blocked until the end of the critical section.
> 
> this probably does not work with SIGSEGV and SIGBUS:
> execution likely cannot continue to leave the critical
> section, but the handlers must be invoked.

For async delivery of these signals (via kill, etc.) it's no problem.
For synchronous delivery, it means code inside libc faulted, and then
you can really do whatever you want, since the caller has invoked UB.
There's no contract for the application's SIGSEGV handler to run when
you pass an invalid pointer to a libc function or otherwise invoke
"segfaulty" UB. But of course we could just execute such handlers
synchronously in this case, rather than deferring them, since the
process is in an undefined, unrecoverable state anyway.

> > What is the value in this?
> > 
> > 1. It eliminates the need for syscalls to mask and unmask signals
> >    around all existing AS-safe locks and critical sections that can't
> >    safely be interrupted by application code.
> > 
> > 2. It makes it so we can make almost any function that was AS-unsafe
> >    due to locking AS-safe, without any added cost. Even malloc can be
> >    AS-safe.
> 
> i guess this can introduce delay into signal handling,
> depending on how long libc internal locks are held.

Yes, I meant to mention that. It basically trades a small, controllable
delay in invocation of signal handlers (note: some such delay already
exists due to backing out of a blocking syscall in progress;
essentially the kernel is doing the same thing I talked about here,
but in kernelspace) to get the below:

> > 3. It makes it so a signal handler that fails to return promptly in
> >    one thread can't arbitrarily delay other threads waiting for
> >    libc-internal locks, because application code never interrupts our
> >    internal critical sections.
> > 
> > This last property, #3, is the really exciting one -- it means that,
> > short of swapping etc. (e.g. with mlockall and other realtime measures
> > taken) most libc locks can be considered as held only for very small
> > bounded time, rather than potentially-unbounded due to interruption by
> > signal.
> 
> sounds interesting.

Also note: while priority-inheritance locks are probably expensive
enough that this wouldn't be a good default, for hard-realtime use one
could patch musl to use them here.

One thing I glossed over is that I do think you need owner-tracking
locks for this to work. That's because the signal handler wrapper has
to be able to distinguish between the "waiting for lock" condition (in
which the signal must not be deferred) and the "lock already acquired"
condition. Technically this could be done with asm introspection like
__cp_end, but I'd really prefer to avoid that; with lock ownership
tracked, the signal handler wrapper just needs to check whether
(*tls->pending_lock & 0x7fffffff) == tls->tid.
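In code, that check might look roughly like this (field and constant
names are invented; the assumption is a lock word holding the owner's
tid in the low 31 bits, with the high bit as a waiters flag):

```c
struct thread_state {
    int tid;
    volatile int *pending_lock;  /* lock this thread last touched, or 0 */
};

/* Nonzero iff the thread actually holds the lock, i.e. the signal must
   be deferred; a thread merely *waiting* on the lock fails this test
   and gets its handler run immediately. */
static int must_defer(const struct thread_state *self)
{
    return self->pending_lock
        && (*self->pending_lock & 0x7fffffff) == self->tid;
}
```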

Rich
