Date: Sat, 12 Nov 2016 00:07:50 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Mark Rutland <mark.rutland@....com>
Cc: kernel-hardening@...ts.openwall.com, Kees Cook <keescook@...omium.org>,
	Greg KH <gregkh@...uxfoundation.org>,
	Will Deacon <will.deacon@....com>,
	Elena Reshetova <elena.reshetova@...el.com>,
	Arnd Bergmann <arnd@...db.de>, Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <h.peter.anvin@...el.com>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: Re: [RFC v4 PATCH 00/13] HARDENED_ATOMIC

On Fri, Nov 11, 2016 at 02:00:34PM +0100, Peter Zijlstra wrote:
> +static inline bool refcount_sub_and_test(int i, refcount_t *r)
> +{
> +	unsigned int old, new, val = atomic_read(&r->refs);
> +
> +	for (;;) {

regardless of the sub_and_test vs inc_and_test issue, this should
probably also have:

		if (val == UINT_MAX)
			return false;

such that we stay saturated. If for some reason someone can trigger more
dec's than inc's, we'd be hosed.

> +		new = val - i;
> +		if (new > val)
> +			BUG(); /* underflow */
> +
> +		old = atomic_cmpxchg_release(&r->refs, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +
> +	return !new;
> +}
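
For reference, a minimal sketch of what the loop looks like with that check
folded in. This assumes the refcount_t wrapper (an atomic_t inside a struct)
and the atomic_*() helpers from the quoted patch; it is only illustrative,
not the eventual upstream implementation:

#include <linux/atomic.h>
#include <linux/bug.h>
#include <linux/kernel.h>	/* UINT_MAX */
#include <linux/types.h>

/* Assumed to match the refcount_t definition from the quoted patch. */
typedef struct refcount_struct {
	atomic_t refs;
} refcount_t;

static inline bool refcount_sub_and_test(int i, refcount_t *r)
{
	unsigned int old, new, val = atomic_read(&r->refs);

	for (;;) {
		/*
		 * Stay saturated: once the counter has hit UINT_MAX it
		 * must never move again, even if someone manages to
		 * trigger more dec's than inc's.
		 */
		if (val == UINT_MAX)
			return false;

		new = val - i;
		if (new > val)
			BUG(); /* underflow */

		/* Retry if the counter changed under us. */
		old = atomic_cmpxchg_release(&r->refs, val, new);
		if (old == val)
			break;

		val = old;
	}

	return !new;
}

With the early return, a counter that was pushed to UINT_MAX by an overflow
can never be decremented back down and freed, so a potential use-after-free
degrades into a bounded leak of the saturated object.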
