Date: Thu, 19 May 2016 11:07:22 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Thomas Garnier <thgarnie@...gle.com>
Cc: Christoph Lameter <cl@...ux.com>, Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
	Pranith Kumar <bobby.prani@...il.com>,
	David Howells <dhowells@...hat.com>, Tejun Heo <tj@...nel.org>,
	Johannes Weiner <hannes@...xchg.org>,
	David Woodhouse <David.Woodhouse@...el.com>,
	Petr Mladek <pmladek@...e.com>, Kees Cook <keescook@...omium.org>,
	Linux-MM <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
	Greg Thelen <gthelen@...gle.com>,
	kernel-hardening@...ts.openwall.com
Subject: Re: [RFC v1 2/2] mm: SLUB Freelist randomization

On Wed, May 18, 2016 at 12:12:13PM -0700, Thomas Garnier wrote:
> I thought the mix of slab_test & kernbench would show a diverse
> picture on perf data. Is there another test that you think would be
> useful?

Single-thread testing with slab_test would be meaningful because it also
touches the slowpath. The problem is just that slab_test results are unstable.

You can get a more stable result from slab_test if you repeat the same test
several times and take the average.

Please use the following slab_test. It performs each operation 100000
times and repeats the whole run 50 times.

https://github.com/JoonsooKim/linux/blob/slab_test_robust-next-20160509/mm/slab_test.c
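
For reference, the structure of that test is roughly the following. This is
only a userspace sketch of the repeat-and-average idea, not the actual
mm/slab_test.c (which is a kernel module timing kmalloc()/kfree() with
get_cycles()); malloc()/free() and __rdtsc() stand in for the kernel
interfaces, and the object size 64 is just an example.

/*
 * Sketch: time ITERATIONS allocations per run, repeat REPEATS times,
 * and report the average cycles per allocation. x86-only (__rdtsc()).
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <x86intrin.h>          /* __rdtsc() */

#define ITERATIONS 100000       /* allocations per run  */
#define REPEATS    50           /* runs to average over */

int main(void)
{
	static void *objs[ITERATIONS];
	uint64_t total = 0;

	for (int rep = 0; rep < REPEATS; rep++) {
		uint64_t start = __rdtsc();
		for (int i = 0; i < ITERATIONS; i++)
			objs[i] = malloc(64);   /* stand-in for kmalloc(64) */
		total += __rdtsc() - start;

		for (int i = 0; i < ITERATIONS; i++)
			free(objs[i]);          /* stand-in for kfree() */
	}

	/* average cycles per allocation across all repeats */
	printf("alloc(64): %llu cycles\n",
	       (unsigned long long)(total / ((uint64_t)REPEATS * ITERATIONS)));
	return 0;
}

Averaging over many repeats like this is what smooths out the run-to-run
noise mentioned above.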

I did a quick test with this patchset and got the following results.

- Before (with the patch applied but randomization disabled in the config)

Single thread testing
=====================
1. Kmalloc: Repeatedly allocate then free test
100000 times kmalloc(8) -> 42 cycles kfree -> 67 cycles
100000 times kmalloc(16) -> 43 cycles kfree -> 68 cycles
100000 times kmalloc(32) -> 47 cycles kfree -> 72 cycles
100000 times kmalloc(64) -> 54 cycles kfree -> 78 cycles
100000 times kmalloc(128) -> 75 cycles kfree -> 87 cycles
100000 times kmalloc(256) -> 84 cycles kfree -> 111 cycles
100000 times kmalloc(512) -> 82 cycles kfree -> 112 cycles
100000 times kmalloc(1024) -> 86 cycles kfree -> 113 cycles
100000 times kmalloc(2048) -> 113 cycles kfree -> 127 cycles
100000 times kmalloc(4096) -> 151 cycles kfree -> 154 cycles

- After (with the patch applied and randomization enabled in the config)

Single thread testing
=====================
1. Kmalloc: Repeatedly allocate then free test
100000 times kmalloc(8) -> 51 cycles kfree -> 68 cycles
100000 times kmalloc(16) -> 57 cycles kfree -> 70 cycles
100000 times kmalloc(32) -> 70 cycles kfree -> 75 cycles
100000 times kmalloc(64) -> 95 cycles kfree -> 84 cycles
100000 times kmalloc(128) -> 142 cycles kfree -> 97 cycles
100000 times kmalloc(256) -> 150 cycles kfree -> 107 cycles
100000 times kmalloc(512) -> 151 cycles kfree -> 107 cycles
100000 times kmalloc(1024) -> 154 cycles kfree -> 110 cycles
100000 times kmalloc(2048) -> 230 cycles kfree -> 124 cycles
100000 times kmalloc(4096) -> 423 cycles kfree -> 165 cycles

It seems that performance decreases a lot, but I don't mind much since it
is a security feature and I don't have a better idea.
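
To put rough numbers on that (computed from the averages above): the
allocation-side cost grows from about 42 -> 51 cycles (~20%) for kmalloc(8)
up to 151 -> 423 cycles (~2.8x) for kmalloc(4096), while kfree is barely
affected (67 -> 68 and 154 -> 165 cycles at the two extremes).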

Thanks.
