Date: Tue, 30 Apr 2019 04:17:18 -0700
From: tip-bot for Nadav Amit <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: bp@...en8.de, tglx@...utronix.de, hpa@...or.com,
        rick.p.edgecombe@...el.com, peterz@...radead.org, mingo@...nel.org,
        akpm@...ux-foundation.org, will.deacon@....com, luto@...nel.org,
        ard.biesheuvel@...aro.org, deneen.t.dock@...el.com,
        linux-kernel@...r.kernel.org, namit@...are.com, riel@...riel.com,
        kristen@...ux.intel.com, linux_dti@...oud.com,
        torvalds@...ux-foundation.org, dave.hansen@...ux.intel.com,
        kernel-hardening@...ts.openwall.com
Subject: [tip:x86/mm] x86/mm: Save debug registers when loading a temporary
 mm

Commit-ID:  d97080ebed7811a53c931032a284166ee46b9565
Gitweb:     https://git.kernel.org/tip/d97080ebed7811a53c931032a284166ee46b9565
Author:     Nadav Amit <namit@...are.com>
AuthorDate: Thu, 25 Apr 2019 17:11:24 -0700
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Tue, 30 Apr 2019 12:37:50 +0200

x86/mm: Save debug registers when loading a temporary mm

Prevent user watchpoints from mistakenly firing while the temporary mm
is in use. Since the addresses of the temporary mm might overlap those
of the user process, this is necessary to prevent spurious signals, or
worse, from being delivered.
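
For context, the call pattern that this change hardens looks roughly as
follows. This is an illustrative sketch only, modeled on the text-poking
code elsewhere in this series; "poking_mm" and the body of the critical
section are assumptions, not part of this patch:

	temp_mm_state_t prev;
	unsigned long flags;

	local_irq_save(flags);			/* IRQs must be off; see the
						   lockdep assertion below */
	prev = use_temporary_mm(poking_mm);	/* debug regs saved/cleared */

	/* ... access addresses that may collide with user watchpoints ... */

	unuse_temporary_mm(prev);		/* debug regs restored */
	local_irq_restore(flags);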

Signed-off-by: Nadav Amit <namit@...are.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: <akpm@...ux-foundation.org>
Cc: <ard.biesheuvel@...aro.org>
Cc: <deneen.t.dock@...el.com>
Cc: <kernel-hardening@...ts.openwall.com>
Cc: <kristen@...ux.intel.com>
Cc: <linux_dti@...oud.com>
Cc: <will.deacon@....com>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Rik van Riel <riel@...riel.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: https://lkml.kernel.org/r/20190426001143.4983-5-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/x86/include/asm/mmu_context.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 24dc3b810970..93dff1963337 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -13,6 +13,7 @@
 #include <asm/tlbflush.h>
 #include <asm/paravirt.h>
 #include <asm/mpx.h>
+#include <asm/debugreg.h>
 
 extern atomic64_t last_mm_ctx_id;
 
@@ -380,6 +381,21 @@ static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
 	lockdep_assert_irqs_disabled();
 	temp_state.mm = this_cpu_read(cpu_tlbstate.loaded_mm);
 	switch_mm_irqs_off(NULL, mm, current);
+
+	/*
+	 * If breakpoints are enabled, disable them while the temporary mm is
+	 * used. Userspace might set up watchpoints on addresses that are used
+	 * in the temporary mm, which would lead to wrong signals being sent or
+	 * crashes.
+	 *
+	 * Note that breakpoints are not disabled selectively, which also causes
+	 * kernel breakpoints (e.g., perf's) to be disabled. This might be
+	 * undesirable, but still seems reasonable as the code that runs in the
+	 * temporary mm should be short.
+	 */
+	if (hw_breakpoint_active())
+		hw_breakpoint_disable();
+
 	return temp_state;
 }
 
@@ -387,6 +403,13 @@ static inline void unuse_temporary_mm(temp_mm_state_t prev_state)
 {
 	lockdep_assert_irqs_disabled();
 	switch_mm_irqs_off(NULL, prev_state.mm, current);
+
+	/*
+	 * Restore the breakpoints if they were disabled before the temporary mm
+	 * was loaded.
+	 */
+	if (hw_breakpoint_active())
+		hw_breakpoint_restore();
 }
 
 #endif /* _ASM_X86_MMU_CONTEXT_H */
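
For reference, the breakpoint helpers used above already exist in the
x86 tree; approximately (paraphrased from <asm/debugreg.h> and
arch/x86/kernel/hw_breakpoint.c of this era, so consult the actual
source for exact signatures):

	static inline bool hw_breakpoint_active(void)
	{
		/* Any global-enable bit set in the cached DR7? */
		return __this_cpu_read(cpu_dr7) & DR_GLOBAL_ENABLE_MASK;
	}

	static inline void hw_breakpoint_disable(void)
	{
		/* Clear the control register first, then the addresses */
		set_debugreg(0UL, 7);
		set_debugreg(0UL, 0);
		set_debugreg(0UL, 1);
		set_debugreg(0UL, 2);
		set_debugreg(0UL, 3);
	}

hw_breakpoint_restore() reloads DR0-DR3, DR6 and DR7 from the per-CPU
and per-thread copies, which is why unuse_temporary_mm() can bring back
the exact pre-switch state without saving anything itself.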
