Date: Mon, 19 Dec 2016 11:12:43 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: "Reshetova, Elena" <elena.reshetova@...el.com>
Cc: Liljestrand Hans <ishkamiel@...il.com>,
	"kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>,
	Greg KH <gregkh@...uxfoundation.org>,
	Kees Cook <keescook@...omium.org>,
	"will.deacon@....com" <will.deacon@....com>,
	Boqun Feng <boqun.feng@...il.com>,
	David Windsor <dwindsor@...il.com>, "aik@...abs.ru" <aik@...abs.ru>,
	"david@...son.dropbear.id.au" <david@...son.dropbear.id.au>
Subject: Re: Conversion from atomic_t to refcount_t: summary of issues

On Mon, Dec 19, 2016 at 07:55:15AM +0000, Reshetova, Elena wrote:
> Well, again, you are right in theory, but in practice for example for struct sched_group { atomic_t ref; ... }:
> 
> http://lxr.free-electrons.com/source/kernel/sched/core.c#L6178
> 
> To me this is a refcounter that needs the protection.

Only if you have more than UINT_MAX CPUs or something like that.

And if you really really want to use refcount_t there, you could "+1"
the scheme -- bias the counter by one so it never sits at zero -- and
it'd work again.
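A minimal sketch of that bias, using a plain unsigned counter to stand in
for refcount_t (names and helpers here are illustrative, not kernel API):
the counter stores real references plus one, so it never reaches zero on a
live object and refcount_t's inc-from-zero check would never trip. The
"first real reference" test that sched_group does with
atomic_inc_return() == 1 becomes a test for the counter reaching 2.

```c
#include <assert.h>

/* Hypothetical object with a biased refcount: counter == refs + 1. */
struct obj {
	unsigned int ref;	/* stands in for refcount_t */
};

static void obj_init(struct obj *o)
{
	o->ref = 1;		/* zero real references, biased to 1 */
}

/* Take a reference; returns 1 if this was the first real reference. */
static int obj_get(struct obj *o)
{
	return ++o->ref == 2;	/* increment is always from >= 1 */
}

/* Drop a reference; returns 1 when the last real reference is gone. */
static int obj_put(struct obj *o)
{
	return --o->ref == 1;	/* back at the bias: safe to free */
}
```

(Single-threaded sketch only; real code would use the refcount_t ops and
their memory ordering, not bare increments.)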

One could also split the refcount and initialized state and avoid the
problem that way.
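The split could look something like this (again an illustrative,
single-threaded sketch, not the sched_group code): the counter is an
ordinary refcount that starts at 1 for the creator, and a separate flag
records whether the object has been initialized, instead of overloading
the count-was-zero case for that.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical object with refcount and init state kept apart. */
struct obj {
	unsigned int ref;	/* plain refcount, 1 == creator's reference */
	bool initialized;	/* tracked separately from the count */
};

/* Take a reference; returns true if the caller must initialize. */
static bool obj_get(struct obj *o)
{
	o->ref++;
	if (!o->initialized) {
		o->initialized = true;	/* claimed exactly once */
		return true;
	}
	return false;
}
```

In real code the flag check/set would of course need a lock or cmpxchg to
stay race-free; the point is only that the counter itself no longer
doubles as a state machine.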

