Date: Thu, 24 Aug 2017 14:13:38 -0700
From: Thomas Garnier <thgarnie@...gle.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Herbert Xu <herbert@...dor.apana.org.au>, "David S . Miller" <davem@...emloft.net>, 
	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, "H . Peter Anvin" <hpa@...or.com>, 
	Peter Zijlstra <peterz@...radead.org>, Josh Poimboeuf <jpoimboe@...hat.com>, 
	Arnd Bergmann <arnd@...db.de>, Matthias Kaehlcke <mka@...omium.org>, 
	Boris Ostrovsky <boris.ostrovsky@...cle.com>, Juergen Gross <jgross@...e.com>, 
	Paolo Bonzini <pbonzini@...hat.com>, Radim Krčmář <rkrcmar@...hat.com>, 
	Joerg Roedel <joro@...tes.org>, Tom Lendacky <thomas.lendacky@....com>, 
	Andy Lutomirski <luto@...nel.org>, Borislav Petkov <bp@...e.de>, Brian Gerst <brgerst@...il.com>, 
	"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>, "Rafael J . Wysocki" <rjw@...ysocki.net>, 
	Len Brown <len.brown@...el.com>, Pavel Machek <pavel@....cz>, Tejun Heo <tj@...nel.org>, 
	Christoph Lameter <cl@...ux.com>, Paul Gortmaker <paul.gortmaker@...driver.com>, 
	Chris Metcalf <cmetcalf@...lanox.com>, Andrew Morton <akpm@...ux-foundation.org>, 
	"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>, Nicolas Pitre <nicolas.pitre@...aro.org>, 
	Christopher Li <sparse@...isli.org>, "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>, 
	Lukas Wunner <lukas@...ner.de>, Mika Westerberg <mika.westerberg@...ux.intel.com>, 
	Dou Liyang <douly.fnst@...fujitsu.com>, Daniel Borkmann <daniel@...earbox.net>, 
	Alexei Starovoitov <ast@...nel.org>, Masahiro Yamada <yamada.masahiro@...ionext.com>, 
	Markus Trippelsdorf <markus@...ppelsdorf.de>, Steven Rostedt <rostedt@...dmis.org>, 
	Kees Cook <keescook@...omium.org>, Rik van Riel <riel@...hat.com>, 
	David Howells <dhowells@...hat.com>, Waiman Long <longman@...hat.com>, Kyle Huey <me@...ehuey.com>, 
	Peter Foley <pefoley2@...oley.com>, Tim Chen <tim.c.chen@...ux.intel.com>, 
	Catalin Marinas <catalin.marinas@....com>, Ard Biesheuvel <ard.biesheuvel@...aro.org>, 
	Michal Hocko <mhocko@...e.com>, Matthew Wilcox <mawilcox@...rosoft.com>, 
	"H . J . Lu" <hjl.tools@...il.com>, Paul Bolle <pebolle@...cali.nl>, Rob Landley <rob@...dley.net>, 
	Baoquan He <bhe@...hat.com>, Daniel Micay <danielmicay@...il.com>, 
	"the arch/x86 maintainers" <x86@...nel.org>, Linux Crypto Mailing List <linux-crypto@...r.kernel.org>, 
	LKML <linux-kernel@...r.kernel.org>, xen-devel@...ts.xenproject.org, 
	kvm list <kvm@...r.kernel.org>, Linux PM list <linux-pm@...r.kernel.org>, 
	linux-arch <linux-arch@...r.kernel.org>, linux-sparse@...r.kernel.org, 
	Kernel Hardening <kernel-hardening@...ts.openwall.com>, 
	Linus Torvalds <torvalds@...ux-foundation.org>, Peter Zijlstra <a.p.zijlstra@...llo.nl>, 
	Borislav Petkov <bp@...en8.de>
Subject: Re: x86: PIE support and option to extend KASLR randomization

On Thu, Aug 17, 2017 at 7:10 AM, Thomas Garnier <thgarnie@...gle.com> wrote:
>
> On Thu, Aug 17, 2017 at 1:09 AM, Ingo Molnar <mingo@...nel.org> wrote:
> >
> >
> > * Thomas Garnier <thgarnie@...gle.com> wrote:
> >
> > > > > -mcmodel=small/medium assume you are in the low 32 bits. They generate
> > > > > instructions where the high 32 bits of the virtual addresses are zero.
> > > >
> > > > How are these assumptions hardcoded by GCC? Most of the instructions should be
> > > > relocatable straight away, as most call/jump/branch instructions are
> > > > RIP-relative.
> > >
> > > I think PIE is capable of using relative instructions well. mcmodel=large assumes
> > > symbols can be anywhere.
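
A minimal illustration of the difference (assumed, simplified compiler
output for loading a symbol's address; not taken from an actual kernel
build):

        movl   $sym, %eax                 # -mcmodel=small: 32-bit absolute, symbol in the low 32 bits
        movabs $sym, %rax                 # -mcmodel=large: 64-bit absolute, symbol can be anywhere
        leaq   sym(%rip), %rax            # -fpie, local/hidden symbol: RIP-relative, no absolute address
        movq   sym@GOTPCREL(%rip), %rax   # -fpie, default-visibility global: address loaded from the GOT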
> >
> > So if the numbers in your changelog and Kconfig text cannot be trusted, there's
> > this description of the size impact which I suspect is less susceptible to
> > measurement error:
> >
> > +         The kernel and modules will generate slightly more assembly (1 to 2%
> > +         increase on the .text sections). The vmlinux binary will be
> > +         significantly smaller due to less relocations.
> >
> > ... but describing a 1-2% kernel text size increase as "slightly more assembly"
> > shows a gratuitous disregard for kernel code generation quality! In reality that's
> > a huge size increase that in most cases will almost directly translate into a 1-2%
> > slowdown for kernel-intensive workloads.
> >
> >
> > Where does that size increase come from, if PIE is capable of using relative
> > instructions well? Does it come from the loss of a general-purpose register and the
> > resulting increase in register pressure, stack spills, etc.?
>
> I will try to gather more information on the size increase. The size
> increase might be smaller with gcc 4.9 given performance was much
> better.

Coming back to this thread, as I have identified the root cause of the
performance issue.

My original performance testing was done with an Ubuntu generic
configuration. This configuration has the CONFIG_FUNCTION_TRACER
option enabled, which was incompatible with PIE. The tracer failed to
replace the __fentry__ call with a NOP slide on each traceable function
because the instruction was not the one expected. If PIE is enabled, gcc
generates a different call instruction based on the GOT without
checking the visibility options (basically call *__fentry__@GOTPCREL).
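
A rough sketch of the two call sequences (the exact bytes and flags are
assumptions, not taken from the failing build):

        # func() compiled with -pg/-mfentry, non-PIE: 5-byte direct call that
        # ftrace expects and can patch to a NOP at boot:
        callq  __fentry__                     # e8 xx xx xx xx

        # same function built with -fpie and default visibility: 6-byte indirect
        # call through the GOT, which does not match the pattern ftrace looks for:
        callq  *__fentry__@GOTPCREL(%rip)     # ff 15 xx xx xx xx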

With the fix for function tracing, the hackbench results show an
average overhead of +0.8% to +1.4% (down from +8% to +10% before). With
a default configuration, the numbers are closer to +0.8%.
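
For reference, numbers of this kind are typically gathered with
something like the following (the exact parameters used here are not
stated, so this is only an assumed invocation):

        # run the scheduler/IPC benchmark repeatedly and average the runtimes
        perf stat -r 10 -- hackbench -g 20 -l 1000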

On .text size, with gcc 4.9 I see +0.8% with the default configuration
and +1.18% with the Ubuntu configuration.
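
The .text comparison can be reproduced along these lines (assumed
commands and file names, not the exact method used):

        # compare .text sizes of a baseline and a PIE-enabled build
        size -A vmlinux.baseline | awk '$1 == ".text" { print $2 }'
        size -A vmlinux.pie      | awk '$1 == ".text" { print $2 }'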

The next iteration should have an updated set of performance metrics (I
will try to use gcc 6.0 or higher) and incorporate the fix for function
tracing.

Let me know if you have questions or feedback.

>
> >
> > So I'm still unhappy about this all, and about the attitude surrounding it.
> >
> > Thanks,
> >
> >         Ingo
>
>
>
>
> --
> Thomas




-- 
Thomas
