Date: Mon, 3 Oct 2016 12:27:01 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: kernel-hardening@...ts.openwall.com
Cc: keescook@...omium.org, Elena Reshetova <elena.reshetova@...el.com>,
 Hans Liljestrand <ishkamiel@...il.com>, David Windsor <dwindsor@...il.com>
Subject: Re: [RFC PATCH 12/13] x86: x86 implementation for
 HARDENED_ATOMIC

On 10/02/2016 11:41 PM, Elena Reshetova wrote:
>  static __always_inline void atomic_add(int i, atomic_t *v)
>  {
> -	asm volatile(LOCK_PREFIX "addl %1,%0"
> +	asm volatile(LOCK_PREFIX "addl %1,%0\n"
> +
> +#ifdef CONFIG_HARDENED_ATOMIC
> +		     "jno 0f\n"
> +		     LOCK_PREFIX "subl %1,%0\n"
> +		     "int $4\n0:\n"
> +		     _ASM_EXTABLE(0b, 0b)
> +#endif
> +
> +		     : "+m" (v->counter)
> +		     : "ir" (i));
> +}
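
If I'm reading the asm right, in rough C terms the hunk above does
something like the sketch below.  This is just to show the ordering,
not the real code: the asm tests the CPU's overflow flag, which I'm
approximating with __builtin_add_overflow() here, and the name is
made up.

/*
 * C-ish sketch of the hardened asm above, NOT the real
 * implementation.
 */
static __always_inline void atomic_add_sketch(int i, atomic_t *v)
{
	int old = v->counter;
	int sum;

	v->counter += i;				/* LOCK addl */
	if (__builtin_add_overflow(old, i, &sum)) {	/* jno 0f    */
		v->counter -= i;			/* LOCK subl */
		/* int $4: trap to the overflow handler */
	}
}

Note the window between the addl and the fixup subl where the
overflowed value sits in memory for everyone to see.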

Rather than doing all this assembly and exception stuff, could we just do:

static __always_inline void atomic_add(int i, atomic_t *v)
{
	if (!atomic_add_unless(v, i, INT_MAX))	/* refused: already saturated */
		BUG_ON_OVERFLOW_FOO()...
}

That way, there's also no transient state in which somebody could
observe the overflowed value before it is fixed up.  Granted, this
cmpxchg-based operation _is_ more expensive than the fast-path locked addl.
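
For reference, atomic_add_unless() boils down to a cmpxchg loop along
these lines (a sketch of the generic pattern with a made-up name, not
the exact x86 implementation):

/*
 * The new value is computed privately and only published by the
 * cmpxchg if v hasn't changed under us -- an overflowed value never
 * reaches memory, so no other CPU can observe it.
 */
static inline int add_unless_sketch(atomic_t *v, int a, int u)
{
	int c = atomic_read(v);

	while (c != u) {
		int old = atomic_cmpxchg(v, c, c + a);

		if (old == c)
			return 1;	/* we did the add */
		c = old;		/* lost a race, retry */
	}
	return 0;			/* v == u, add refused */
}

Each iteration is a plain load plus a locked cmpxchg, which is where
the extra cost over a single locked addl comes from.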
