Date: Mon, 22 Jan 2018 10:04:29 -0800
From: Andy Lutomirski <>
To:	LKML <>
Cc: Linus Torvalds <>,
	Greg Kroah-Hartman <>,
	Alan Cox <>,
	Jann Horn <>,
	Samuel Neves <>,
	Dan Williams <>,
	Kernel Hardening <>,
	Borislav Petkov <>,
	Andy Lutomirski <>
Subject: [PATCH] x86/retpoline/entry: Disable the entire SYSCALL64 fast path with retpolines on

The existing retpoline code carefully and awkwardly retpolinifies
the SYSCALL64 fast path.  This stops the fast path from being
particularly fast, and it's IMO rather messy.

Instead, just bypass the fast path entirely if retpolines are on.
This seems to be a speedup on a "minimal" retpoline kernel, mainly
because do_syscall_64() ends up calling the syscall handler without
using a slow retpoline thunk.
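
(For reference, the thunk being avoided is the usual retpoline
construct; a sketch, not a quote of the actual arch/x86/lib/retpoline.S
code:

	__x86_indirect_thunk_rax:
		call	1f		/* pushes the address of 2: as the return address */
	2:	pause			/* speculation from the ret below is trapped here */
		lfence
		jmp	2b
	1:	mov	%rax, (%rsp)	/* overwrite the return address with the real target */
		ret			/* architecturally "returns" to *%rax */

so every fast-path dispatch through it pays for a call/ret pair plus
the speculation trap instead of a single indirect call.)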

As an added benefit, we won't need to apply further Spectre
mitigations to the fast path.  The current fast path Spectre
mitigations may have a hole: if the syscall nr is out of bounds, it
is plausible that the CPU would mispredict the bounds check, load a
bogus function pointer, and speculatively execute it right through
the retpoline.  If this is indeed a problem, we need to fix it in
the slow paths anyway, but with this patch applied, we can at least
leave the fast path alone.
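
(The shape in question, paraphrasing the fast path rather than quoting
it:

	cmpq	$__NR_syscall_max, %rax		/* bounds check on the syscall nr */
	ja	1f				/* out of range: return -ENOSYS */
	call	*sys_call_table(, %rax, 8)	/* if the ja is mispredicted as
						 * not-taken, this load and call can
						 * execute speculatively with an
						 * out-of-range %rax, picking up
						 * attacker-influenced bytes past the
						 * end of sys_call_table as the target */

If it does turn out to need fixing, whatever fix ends up being used
(index clamping or otherwise) would then only be needed in the slow
paths.)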

Cleans-up: 2641f08bb7fc ("x86/retpoline/entry: Convert entry assembler indirect jumps")
Signed-off-by: Andy Lutomirski <>
---
 arch/x86/entry/entry_64.S | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 4f8e1d35a97c..b915bad58754 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -245,6 +245,9 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
 	 * If we need to do entry work or if we guess we'll need to do
 	 * exit work, go straight to the slow path.
 	 */
+	ALTERNATIVE "", "jmp entry_SYSCALL64_slow_path", X86_FEATURE_RETPOLINE
 	movq	PER_CPU_VAR(current_task), %r11
 	testl	$_TIF_WORK_SYSCALL_ENTRY|_TIF_ALLWORK_MASK, TASK_TI_flags(%r11)
 	jnz	entry_SYSCALL64_slow_path
@@ -270,13 +273,11 @@ entry_SYSCALL_64_fastpath:
 	 * This call instruction is handled specially in stub_ptregs_64.
 	 * It might end up jumping to the slow path.  If it jumps, RAX
 	 * and all argument registers are clobbered.
+	 *
+	 * NB: no retpoline needed -- we don't execute this code with
+	 * retpolines enabled.
 	 */
-	movq	sys_call_table(, %rax, 8), %rax
-	call	__x86_indirect_thunk_rax
 	call	*sys_call_table(, %rax, 8)
 	movq	%rax, RAX(%rsp)
@@ -431,6 +432,9 @@ ENTRY(stub_ptregs_64)
 	 * which we achieve by trying again on the slow path.  If we are on
 	 * the slow path, the extra regs are already saved.
+	 * This code is unreachable (even via mispredicted conditional branches)
+	 * if we're using retpolines.
+	 *
 	 * RAX stores a pointer to the C function implementing the syscall.
 	 * IRQs are on.
 	 */
@@ -448,12 +452,19 @@ ENTRY(stub_ptregs_64)
 	jmp	entry_SYSCALL64_slow_path
-	JMP_NOSPEC %rax				/* Called from C */
+	jmp	*%rax				/* Called from C */
 .macro ptregs_stub func
+	/*
+	 * If retpolines are enabled, we don't use the syscall fast path,
+	 * so just jump straight to the syscall body.
+	 */
+	ALTERNATIVE "", __stringify(jmp \func), X86_FEATURE_RETPOLINE
 	leaq	\func(%rip), %rax
 	jmp	stub_ptregs_64
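
P.S. For anyone who doesn't have the alternatives machinery paged in,
the new

	ALTERNATIVE "", "jmp entry_SYSCALL64_slow_path", X86_FEATURE_RETPOLINE

site behaves roughly like this (a sketch, not the exact bytes):

	/* X86_FEATURE_RETPOLINE clear: the empty "old" instruction,
	 * padded with NOPs to the size of the replacement, stays put: */
	.byte 0x0f, 0x1f, 0x44, 0x00, 0x00	/* 5-byte NOP; falls through to the fast path */

	/* X86_FEATURE_RETPOLINE set: apply_alternatives() copies in the
	 * replacement at boot: */
	jmp	entry_SYSCALL64_slow_path	/* fast path is never reached */

so non-retpoline kernels pay a handful of NOP bytes here and retpoline
kernels skip the fast path entirely.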
