Date: Mon, 3 Oct 2016 11:47:53 +0200
From: Jann Horn <jann@...jh.net>
To: Elena Reshetova <elena.reshetova@...el.com>
Cc: kernel-hardening@...ts.openwall.com, keescook@...omium.org,
	Hans Liljestrand <ishkamiel@...il.com>,
	David Windsor <dwindsor@...il.com>
Subject: Re: [RFC PATCH 12/13] x86: x86 implementation for HARDENED_ATOMIC

On Mon, Oct 03, 2016 at 09:41:25AM +0300, Elena Reshetova wrote:
> This adds x86-specific code in order to support the
> HARDENED_ATOMIC feature. When an overflow is detected
> in the atomic_t or atomic_long_t types, the counter is
> decremented back by one (to keep it at INT_MAX or
> LONG_MAX) and the issue is reported using BUG().
> The side effect is that the counter cannot wrap in
> either legitimate or non-legitimate cases.
> 
> Signed-off-by: Elena Reshetova <elena.reshetova@...el.com>
> Signed-off-by: Hans Liljestrand <ishkamiel@...il.com>
> Signed-off-by: David Windsor <dwindsor@...il.com>
> ---
[...]
>  static __always_inline void atomic_add(int i, atomic_t *v)
>  {
> -	asm volatile(LOCK_PREFIX "addl %1,%0"
> +	asm volatile(LOCK_PREFIX "addl %1,%0\n"
> +
> +#ifdef CONFIG_HARDENED_ATOMIC
> +		     "jno 0f\n"
> +		     LOCK_PREFIX "subl %1,%0\n"
> +		     "int $4\n0:\n"
> +		     _ASM_EXTABLE(0b, 0b)
> +#endif
> +
> +		     : "+m" (v->counter)
> +		     : "ir" (i));
> +}

It might make sense to point out in the Kconfig entry
that on x86, this can only be relied on if
kernel.panic_on_oops==1: otherwise, depending on the bug,
a worst-case attacker can get past 0x7fffffff within
seconds using multiple racing processes.
(See https://bugs.chromium.org/p/project-zero/issues/detail?id=856 .)
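
To make the failure mode concrete, here is a sketch of one bad
interleaving (illustrative only, not the exact trace from the bug
report), starting from a counter at INT_MAX:

  counter == 0x7fffffff

  CPU 0                            CPU 1
  1. lock addl $1 -> 0x80000000,
     OF set, jno not taken
                                   2. lock addl $1 -> 0x80000001,
                                      OF clear (no signed overflow
                                      from INT_MIN), jno taken:
                                      no fixup for this thread
  3. lock subl $1 -> 0x80000000
  4. int $4 -> BUG(); with
     panic_on_oops==0, only the
     current task dies

The stored value is now INT_MIN, and further increments don't set
OF again until the counter has wrapped all the way past zero and
back up to INT_MAX - so racing processes can drive it through the
whole range unhindered.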


An additional idea for future development:

One way to work around that would be to interpret the
stored value 2^30 as zero, and interpret other values
accordingly. Like this:

/* The stored value 2^30 represents logical zero; everything else
 * is biased by the same amount. */
#define SIGNED_ATOMIC_BASE 0x40000000U

static __always_inline int atomic_read(const atomic_t *v)
{
	/* Remove the bias when converting back to the logical value. */
	return READ_ONCE((v)->counter) - SIGNED_ATOMIC_BASE;
}

static __always_inline void atomic_set(atomic_t *v, int i)
{
	/* Apply the bias when storing. */
	WRITE_ONCE(v->counter, i + SIGNED_ATOMIC_BASE);
}

static __always_inline int atomic_add_return(int i, atomic_t *v)
{
	/* xadd_check_overflow() returns the old (biased) value. */
	return i + xadd_check_overflow(&v->counter, i) - SIGNED_ATOMIC_BASE;
}

With this change, atomic_t could still be used as a signed integer
with half the range of an int, but its stored value would only
become negative on overflow. Then, the "jno" instruction in the
hardening code could be replaced with "jns" to reliably block
overflows.
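
For illustration, atomic_add() might then look something like
this (an untested sketch in the style of the quoted patch; the
only functional change is jno -> jns):

static __always_inline void atomic_add(int i, atomic_t *v)
{
	asm volatile(LOCK_PREFIX "addl %1,%0\n"

#ifdef CONFIG_HARDENED_ATOMIC
		     /* SF set => the stored (biased) value went
		      * negative, i.e. the counter left its valid
		      * range */
		     "jns 0f\n"
		     LOCK_PREFIX "subl %1,%0\n"
		     "int $4\n0:\n"
		     _ASM_EXTABLE(0b, 0b)
#endif

		     : "+m" (v->counter)
		     : "ir" (i));
}

Unlike the OF-based check, this also catches racing increments:
once the stored value is negative, every subsequent addl leaves
SF set, so no thread can sneak past the check and march the
counter back toward zero.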

The downsides of this approach would be:
 - One extra increment or decrement every time an atomic_t is read
   or written. This should be relatively cheap - it operates on a
   register - but it's still not ideal. atomic_t users could
   perhaps opt out with something like atomic_unsigned_t.
 - Implicit atomic_t initialization to zero by zeroing memory
   would stop working (explicit initializers could be adjusted,
   see the ATOMIC_INIT sketch below). This would probably be the
   biggest issue with this approach.
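
For explicit initializers, the change would be mechanical;
something like this (hypothetical sketch, reusing the
SIGNED_ATOMIC_BASE definition above; ATOMIC_INIT is the existing
kernel macro, example_count is a made-up name):

/* The static initializer has to apply the bias as well. */
#define ATOMIC_INIT(i)	{ (i) + SIGNED_ATOMIC_BASE }

static atomic_t example_count = ATOMIC_INIT(0); /* stored 0x40000000 */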

Unfortunately, I think there are a large number of atomic_t
users that don't explicitly initialize the atomic_t and instead
rely on implicit initialization to zero, and changing that would
cause a lot of code churn. So while this would IMO improve the
mitigation, the series should be merged without it and instead
carry a small warning in the Kconfig entry or so.
