Date: Sat, 31 Oct 2020 20:33:01 +0300
From: Alexey Izbyshev <izbyshev@...ras.ru>
To: musl@...ts.openwall.com
Subject: Re: More thoughts on wrapping signal handling

On 2020-10-29 16:38, Rich Felker wrote:
> On Thu, Oct 29, 2020 at 02:45:34PM +0300, Alexey Izbyshev wrote:
>> On 2020-10-29 09:34, Rich Felker wrote:
>> >In "Re: [musl] Re: [PATCH] Make abort() AS-safe (Bug 26275)."
>> >(20201010002612.GC17637@...ghtrain.aerifal.cx,
>> >https://www.openwall.com/lists/musl/2020/10/10/1) I raised the
>> >longstanding thought of having libc wrap signal handling. This is a
>> >little bit of a big hammer for what it was proposed for -- fixing an
>> >extremely-rare race between abort and execve -- but today I had a
>> >thought about another use of it that's really compelling.
>> >
>> >What I noted before was that, by wrapping signal handlers, libc could
>> >implement a sort of "rollback" to restart a critical section that was
>> >interrupted. However this really only has any use when the critical
>> >section has no side effects aside from its final completion, and
>> >except for execve where replacement of the process gives the atomic
>> >cutoff for rollback, it requires a __cp_end-like asm label at the end of
>> >the critical section. So it's of limited utility.
>> >
>> >However, what's more interesting than restarting the critical section
>> >when a signal is received is *allowing it to complete* before handling
>> >the signal. This can be implemented by having the wrapper, upon seeing
>> >that it interrupted a critical section, save the siginfo_t in TLS and
>> >immediately return, leaving signals blocked, without executing the
>> >application-installed signal handler. Then, when leaving the critical
>> >section, the unlock function can see the saved siginfo_t and call the
>> >application's signal handler. Effectively, it's as if the signal were
>> >just blocked until the end of the critical section.
>> >
>> As described, that would call the application's signal handler on
>> the wrong stack in case SA_ONSTACK was used.
>> 
>> And what happens if the application wants to modify ucontext via the
>> third argument of the signal handler?
> 
> Yes, I kinda hand-waved over this with the word "call", which I
> thought about annotating with (*). In the case of SA_ONSTACK you need
> a primitive to "call on new stack", and while the ucontext is mostly
> not meaningful/inspectable to the signal handler (because it's
> interrupting libc code), the saved signal mask is. You can have the
> caller restore it (in place of SYS_[rt_]sigreturn), but the natural
> common solution to all of these needs is having a sort of makecontext.
> 
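To make sure we're picturing the same scheme, here is roughly the shape 
I have in mind for such a wrapper (a sketch only, all names invented, 
not actual musl code):

#define _POSIX_C_SOURCE 200809L
#include <signal.h>

#define SIG_MAX 65  /* enough for Linux signal numbers */

static _Thread_local volatile sig_atomic_t in_critical;
static _Thread_local volatile sig_atomic_t have_deferred;
static _Thread_local siginfo_t deferred_si;

/* Application handlers as recorded by a sigaction() wrapper. */
static void (*app_handler[SIG_MAX])(int, siginfo_t *, void *);

static void libc_wrapper(int sig, siginfo_t *si, void *ctx)
{
    if (in_critical) {
        /* Interrupted a critical section: stash the signal and
         * return with it still blocked (one slot for brevity). */
        deferred_si = *si;
        have_deferred = 1;
        return;
    }
    app_handler[sig](sig, si, ctx);
}

/* Run from the "unlock" path when leaving the critical section. */
static void crit_exit(void)
{
    in_critical = 0;
    if (have_deferred) {
        have_deferred = 0;
        /* The problematic "call": on which stack (SA_ONSTACK)?
         * with what ucontext? who restores the signal mask? */
        app_handler[deferred_si.si_signo](deferred_si.si_signo,
                                          &deferred_si, 0);
    }
}
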
Such "sigcall/sigreturn" shims would have to emulate kernel behavior 
precisely. If a new feature is added into the kernel, and the 
application detects that it's supported based on what the *kernel* tells 
it, subtle breakage might occur due to imprecise emulation (as a random 
example, consider SS_AUTODISARM flag of sigaltstack()). So you'd have to 
intercept feature tests as well, and it starts to look messy IMO.
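
For illustration, an application probing for SS_AUTODISARM typically 
just asks the kernel and trusts the answer, something like:

#define _GNU_SOURCE
#include <errno.h>
#include <signal.h>
#include <stdio.h>

#ifndef SS_AUTODISARM
#define SS_AUTODISARM (1U << 31)   /* value from linux/signal.h */
#endif

int main(void)
{
    static char stk[16384];  /* fixed size; SIGSTKSZ need not be constant */
    stack_t ss = { .ss_sp = stk, .ss_size = sizeof stk,
                   .ss_flags = SS_AUTODISARM };

    if (sigaltstack(&ss, 0) == 0)
        puts("kernel honors SS_AUTODISARM");
    else if (errno == EINVAL)
        puts("kernel rejects SS_AUTODISARM");
    else
        perror("sigaltstack");
    return 0;
}

If the kernel says yes, the application expects the alternate stack to 
be disarmed for the duration of the handler; a libc that delivers some 
signals from its unlock path without reproducing that quietly breaks 
the expectation.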

Re-raising the signal would avoid most of that emulation, but it appears 
to be broken at least because of the signal ordering issues mentioned in 
https://www.openwall.com/lists/musl/2020/10/29/12.
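
By re-raising I mean something like the following fragment (hypothetical 
names again): the wrapper records only the signal number, and the unlock 
path sends it back to the current thread so that the kernel itself 
performs the delivery (stack switch, ucontext, sigreturn):

#define _GNU_SOURCE
#include <signal.h>
#include <sys/syscall.h>
#include <unistd.h>

static _Thread_local volatile sig_atomic_t pending_sig;

/* Run from the "unlock" path; the signal is still blocked here, so it
 * is delivered once the caller restores the signal mask. */
static void crit_exit_reraise(void)
{
    int sig = pending_sig;
    if (sig) {
        pending_sig = 0;
        syscall(SYS_tgkill, (int)getpid(), (int)syscall(SYS_gettid), sig);
        /* Lost in translation: the original siginfo and the ordering
         * guarantees, which is the breakage referred to above. */
    }
}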

Alexey
