Date: Tue, 24 May 2016 14:17:34 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Thomas Garnier <thgarnie@...gle.com>
Cc: Christoph Lameter <cl@...ux.com>, Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
	Pranith Kumar <bobby.prani@...il.com>,
	David Howells <dhowells@...hat.com>, Tejun Heo <tj@...nel.org>,
	Johannes Weiner <hannes@...xchg.org>,
	David Woodhouse <David.Woodhouse@...el.com>,
	Petr Mladek <pmladek@...e.com>, Kees Cook <keescook@...omium.org>,
	Linux-MM <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
	Greg Thelen <gthelen@...gle.com>,
	kernel-hardening@...ts.openwall.com
Subject: Re: [RFC v1 2/2] mm: SLUB Freelist randomization

On Fri, May 20, 2016 at 09:24:35AM -0700, Thomas Garnier wrote:
> On Thu, May 19, 2016 at 7:15 PM, Joonsoo Kim <js1304@...il.com> wrote:
> > 2016-05-20 5:20 GMT+09:00 Thomas Garnier <thgarnie@...gle.com>:
> >> I ran the test given by Joonsoo and it gave me these minimum cycles
> >> per size across 20 runs:
> >
> > I can't understand what you did here. Maybe it's due to my poor English.
> > Please explain more. Did you run a single-threaded test? Why minimum cycles
> > rather than average?
> >
> 
> I used your version of slab_test and ran it 20 times for each
> version, comparing
> the minimum number of cycles as the optimal case. As you said,
> slab_test results can be unreliable; comparing the average across multiple runs
> always gave odd results.

Hmm... With my version, slab_test results seem reliable to me, so you
can use the average in this case. Anyway, your minimum results look
odd even if my version is used. The larger sizes would hit the slowpath
more frequently, so they should be slower.

> 
> >> size,before,after
> >> 8,63.00,64.50 (102.38%)
> >> 16,64.50,65.00 (100.78%)
> >> 32,65.00,65.00 (100.00%)
> >> 64,66.00,65.00 (98.48%)
> >> 128,66.00,65.00 (98.48%)
> >> 256,64.00,64.00 (100.00%)
> >> 512,65.00,66.00 (101.54%)
> >> 1024,68.00,64.00 (94.12%)
> >> 2048,66.00,65.00 (98.48%)
> >> 4096,66.00,66.00 (100.00%)
> >
> > It looks like the performance of all size classes is the same?
> >
> >> I assume the difference is bigger if you don't have RDRAND support.
> >
> > What does RDRAND mean? Is it a Kconfig option? How can I check whether I have RDRAND?
> >
> 
> Sorry, I was referring to the use of get_random_bytes_arch(), which will
> be faster if the test machine supports specific instructions (like RDRAND).

Thanks! I checked /proc/cpuinfo, and my test bed (QEMU) doesn't support
rdrand.

> >> Christoph, Joonsoo: Do you think it would be valuable to add a CONFIG
> >> to disable the additional randomization per new page? It would remove
> >> some entropy but increase performance for machines without arch-specific
> >> randomization instructions.
> >
> > I don't think it deserves another CONFIG. If performance is a concern,
> > I think removing the additional entropy is better, until it is proven that
> > the lost entropy is a problem.
> >
> 
> I will do more testing before the next RFC to decide the best approach.

Okay.

Thanks.
