Message-ID: <2236FBA76BA1254E88B949DDB74E612B41BFF727@IRSMSX102.ger.corp.intel.com>
Date: Fri, 11 Nov 2016 09:32:45 +0000
From: "Reshetova, Elena" <elena.reshetova@...el.com>
To: Peter Zijlstra <peterz@...radead.org>, Kees Cook <keescook@...omium.org>
CC: "kernel-hardening@...ts.openwall.com"
	<kernel-hardening@...ts.openwall.com>, Arnd Bergmann <arnd@...db.de>, Thomas
 Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, "Anvin, H
 Peter" <h.peter.anvin@...el.com>, Will Deacon <will.deacon@....com>, Hans
 Liljestrand <ishkamiel@...il.com>, "David Windsor" <dwindsor@...il.com>
Subject: RE: [RFC v4 PATCH 12/13] x86: implementation for HARDENED_ATOMIC

>On Thu, Nov 10, 2016 at 03:07:37PM -0800, Kees Cook wrote:
> On Thu, Nov 10, 2016 at 2:50 PM, Peter Zijlstra <peterz@...radead.org> wrote:
> > On Thu, Nov 10, 2016 at 09:40:46PM +0100, Peter Zijlstra wrote:
> >> On Thu, Nov 10, 2016 at 10:24:47PM +0200, Elena Reshetova wrote:
> >> >  static __always_inline void atomic_add(int i, atomic_t *v)  {
> >> > +   asm volatile(LOCK_PREFIX "addl %1,%0\n"
> >> > +
> >> > +#ifdef CONFIG_HARDENED_ATOMIC
> >> > +                "jno 0f\n"
> >> > +                LOCK_PREFIX "subl %1,%0\n"
> >> > +                "int $4\n0:\n"
> >> > +                _ASM_EXTABLE(0b, 0b)
> >>
> >>
> >> This is unreadable gunk.
> >
> > Worse, this is fundamentally racy and therefore not a proper atomic 
> > at all.
> 
> The race was discussed earlier as well

>Which is useless, since nobody was listening etc..

> (and some of that should
> already have been covered in the commit message)

>it does not.

It is covered in the documentation file (Documentation/security/
hardened-atomic.txt) included in the first patch.
There are so many details that including them all would produce
pages-long commit messages, so some things are covered only in
the documentation.

> , but basically this
> is a race in the overflow protection, so it's not operationally a 
> problem. The process that caused the overflow still gets killed, and 
> if the system isn't already set up to panic on oops, this becomes a 
> resource leak instead of an exploitable condition.

>Now is this harmless? If you have two increments racing like:

>	  inc
>	  jno 1 // overflow
>
>			  inc
>			  jno 1 // !overflow
>
>	  dec
>	1:		1:

>The second thread will still affect your wrap and not BUG.
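To make the window concrete, here is a rough C-level sketch of what
the asm above boils down to (illustration only, not the kernel code;
hardened_add_sketch() is a made-up name, GCC __atomic builtins stand
in for the kernel's atomic primitives, and only positive i is checked
to keep it short):

#include <limits.h>

/* Illustration-only stand-in for the kernel's atomic_t. */
typedef struct { int counter; } sketch_atomic_t;

static void hardened_add_sketch(int i, sketch_atomic_t *v)
{
	/* Step 1: the add itself is atomic (the "lock addl"). */
	int old = __atomic_fetch_add(&v->counter, i, __ATOMIC_SEQ_CST);

	/* Step 2: check for signed overflow (the "jno"), after the add. */
	if (i > 0 && old > INT_MAX - i) {
		/*
		 * Step 3: undo and trap (the "lock subl" + "int $4").
		 * Between step 1 and step 3 the counter is already
		 * wrapped, so a concurrent add on another CPU operates
		 * on the wrapped value and sees no overflow itself,
		 * which is exactly the window in the diagram above.
		 */
		__atomic_fetch_sub(&v->counter, i, __ATOMIC_SEQ_CST);
		__builtin_trap();
	}
}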

> 
> > The only way to do this is a cmpxchg loop and not issue the result 
> > on overflow. Of course, that would completely suck performance wise, 
> > but having a non atomic atomic blows more.
> 
> There was some work to examine this performance impact, and it didn't 
> look terrible, but this race-on-overflow isn't viewed as a meaningful 
> risk in the refcount situation.
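For reference, a checked add along the lines Peter describes might
look roughly like the sketch below (again illustration only;
checked_add_sketch() is a hypothetical name and GCC __atomic builtins
stand in for the kernel primitives). The new value is only published
when it does not wrap, which closes the window shown above, at the
cost of a compare-and-exchange that can fail and retry under
contention:

#include <limits.h>
#include <stdbool.h>

typedef struct { int counter; } sketch_atomic_t;

/* Returns true if the add was performed, false if it would have wrapped. */
static bool checked_add_sketch(int i, sketch_atomic_t *v)
{
	int old = __atomic_load_n(&v->counter, __ATOMIC_RELAXED);

	do {
		/* Refuse to publish a result that would overflow (positive i). */
		if (i > 0 && old > INT_MAX - i)
			return false;
		/*
		 * Try to install old + i; on failure 'old' is updated
		 * with the current value and the loop re-checks and retries.
		 */
	} while (!__atomic_compare_exchange_n(&v->counter, &old, old + i,
					      true, __ATOMIC_SEQ_CST,
					      __ATOMIC_RELAXED));

	return true;
}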

>I have a benchmark somewhere, I can run numbers tomorrow, but it really shows once you get a bit of contention going. Once you hit 4 nodes contending on a variable its completely out there IIRC.
Getting more numbers on this would certainly help, thank you.
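In case it is useful for reproducing such numbers quickly in user
space, a trivial pthread microbenchmark along these lines (purely
hypothetical harness, not what we used for our measurements) can
compare a plain fetch_add against a checked cmpxchg loop under
contention; NTHREADS and ITERS are arbitrary knobs:

#include <limits.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS 4
#define ITERS    10000000L

static int plain_counter;
static int checked_counter;

/* Same checked cmpxchg loop as in the sketch above. */
static void checked_add(int i, int *counter)
{
	int old = __atomic_load_n(counter, __ATOMIC_RELAXED);

	do {
		if (i > 0 && old > INT_MAX - i)
			return;
	} while (!__atomic_compare_exchange_n(counter, &old, old + i,
					      true, __ATOMIC_SEQ_CST,
					      __ATOMIC_RELAXED));
}

static void *plain_worker(void *arg)
{
	for (long n = 0; n < ITERS; n++)
		__atomic_fetch_add(&plain_counter, 1, __ATOMIC_SEQ_CST);
	return NULL;
}

static void *checked_worker(void *arg)
{
	for (long n = 0; n < ITERS; n++)
		checked_add(1, &checked_counter);
	return NULL;
}

static double run(void *(*fn)(void *))
{
	pthread_t tid[NTHREADS];
	struct timespec a, b;
	int t;

	clock_gettime(CLOCK_MONOTONIC, &a);
	for (t = 0; t < NTHREADS; t++)
		pthread_create(&tid[t], NULL, fn, NULL);
	for (t = 0; t < NTHREADS; t++)
		pthread_join(tid[t], NULL);
	clock_gettime(CLOCK_MONOTONIC, &b);

	return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
	printf("unconditional fetch_add: %.2fs\n", run(plain_worker));
	printf("checked cmpxchg loop:    %.2fs\n", run(checked_worker));
	return 0;
}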

>The unconditional atomic ops really are loads faster than cmpxchg loops.

Yes, and this is what we saw in our performance measurements. As a result we went with the faster method, which we believed still carries a low risk of the race actually being hit.
If your numbers prove otherwise, then we will have to reconsider.
