Date: Fri, 11 Nov 2016 14:00:34 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Mark Rutland <mark.rutland@....com>
Cc: kernel-hardening@...ts.openwall.com, Kees Cook <keescook@...omium.org>,
	Greg KH <gregkh@...uxfoundation.org>,
	Will Deacon <will.deacon@....com>,
	Elena Reshetova <elena.reshetova@...el.com>,
	Arnd Bergmann <arnd@...db.de>, Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <h.peter.anvin@...el.com>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: Re: [RFC v4 PATCH 00/13] HARDENED_ATOMIC

On Fri, Nov 11, 2016 at 01:47:55PM +0100, Peter Zijlstra wrote:
> On Fri, Nov 11, 2016 at 12:41:27PM +0000, Mark Rutland wrote:

> > Regardless of atomic_t semantics, a refcount_t would be far more obvious
> > to developers than atomic_t and/or kref, and better documents the intent
> > of code using it.
> > 
> > We'd still see abuse of atomic_t (and so this won't solve the problems
> > Kees mentioned), but even as something orthogonal I think that would
> > make sense to have.
> 
> Furthermore, you could implement that refcount_t stuff using
> atomic_cmpxchg() in generic code. While that is sub-optimal for ll/sc
> architectures, you at least get generic code that works as a starting point.
> 
> Also, I suspect that if your refcounts are heavily contended, you'll
> have other problems than the performance of these primitives.
> 
> Code for refcount_inc(), refcount_inc_not_zero() and
> refcount_sub_and_test() can be copy-pasted from the kref patch I sent
> yesterday.

A wee bit like so...

---
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
new file mode 100644
index 000000000000..d1eae0d2345e
--- /dev/null
+++ b/include/linux/refcount.h
@@ -0,0 +1,75 @@
+#ifndef _LINUX_REFCOUNT_H
+#define _LINUX_REFCOUNT_H
+
+#include <linux/atomic.h>
+
+typedef struct refcount_struct {
+	atomic_t refs;
+} refcount_t;
+
+static inline void refcount_inc(refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		WARN_ON_ONCE(!val);
+
+		new = val + 1;
+		if (new < val)
+			BUG(); /* overflow */
+
+		old = atomic_cmpxchg_relaxed(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+}
+
+static inline bool refcount_inc_not_zero(refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		if (!val)
+			return false;
+
+		new = val + 1;
+		if (new < val)
+			BUG(); /* overflow */
+
+		old = atomic_cmpxchg_relaxed(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	return true;
+}
+
+static inline bool refcount_sub_and_test(int i, refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		new = val - i;
+		if (new > val)
+			BUG(); /* underflow */
+
+		old = atomic_cmpxchg_release(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	return !new;
+}
+
+static inline bool refcount_dec_and_test(refcount_t *r)
+{
+	return refcount_sub_and_test(1, r);
+}
+
+#endif /* _LINUX_REFCOUNT_H */
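
To make the intent-documentation point above concrete, a minimal usage sketch
of the API in the patch could look like the following. Note that struct foo,
foo_get(), foo_put() and foo_find_get() are hypothetical names, and the patch
does not (yet) provide an initialiser, so setting the count to 1 at allocation
time is left out:

#include <linux/refcount.h>
#include <linux/slab.h>

/* hypothetical object whose lifetime is managed by a refcount_t */
struct foo {
	refcount_t refs;
	/* ... payload ... */
};

/* take an additional reference; warns/BUGs per the semantics above */
static void foo_get(struct foo *f)
{
	refcount_inc(&f->refs);
}

/* drop a reference and free the object when the last one goes away */
static void foo_put(struct foo *f)
{
	if (refcount_dec_and_test(&f->refs))
		kfree(f);
}

/* lookup-style path: only take a reference if the object is still live */
static struct foo *foo_find_get(struct foo *f)
{
	if (f && refcount_inc_not_zero(&f->refs))
		return f;
	return NULL;
}

Compared to open-coded atomic_inc()/atomic_dec_and_test(), the dedicated type
makes the refcounting intent explicit, and with the code above an increment
that would wrap hits BUG() instead of silently overflowing.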
