Date: Wed, 31 Jul 2019 17:43:13 +0800
From: Jason Yan <yanaijie@...wei.com>
To: <mpe@...erman.id.au>, <linuxppc-dev@...ts.ozlabs.org>,
	<diana.craciun@....com>, <christophe.leroy@....fr>,
	<benh@...nel.crashing.org>, <paulus@...ba.org>, <npiggin@...il.com>,
	<keescook@...omium.org>, <kernel-hardening@...ts.openwall.com>
CC: <linux-kernel@...r.kernel.org>, <wangkefeng.wang@...wei.com>,
	<yebin10@...wei.com>, <thunder.leizhen@...wei.com>,
	<jingxiangfeng@...wei.com>, <fanchengyang@...wei.com>,
	<zhaohongjiang@...wei.com>, Jason Yan <yanaijie@...wei.com>
Subject: [PATCH v3 05/10] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper

Add a new helper reloc_kernel_entry() to jump back to the start of the
new kernel. After we have put the new kernel at a randomized location,
we can use this helper to enter the kernel and begin relocation again.
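
As a rough illustration of how the helper is meant to be used, here is a
sketch of a hypothetical early-boot caller that picks a random offset,
copies the kernel image there, and re-enters it through
reloc_kernel_entry(). The kaslr_early_init() and kaslr_choose_location()
names, and the KERNELBASE + offset destination, are assumptions for the
sake of the example and are not part of this patch:

	/* Declared in arch/powerpc/mm/mmu_decl.h by this patch. */
	void reloc_kernel_entry(void *fdt, int addr);

	/*
	 * Hypothetical caller (illustrative only): choose a random offset,
	 * copy the kernel there, then jump to the relocated entry point.
	 * The helper clears MSR_IS/MSR_DS in SRR1 and does an rfi to the
	 * address passed in r4, so boot restarts from the new location.
	 */
	notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
	{
		unsigned long kernel_sz = (unsigned long)_end - (unsigned long)_stext;
		unsigned long offset;

		offset = kaslr_choose_location(dt_ptr, size, kernel_sz); /* assumed helper */
		if (!offset)
			return;

		/* Copy the kernel image to its new, randomized location ... */
		memcpy((void *)(KERNELBASE + offset), _stext, kernel_sz);

		/* ... and restart from its entry point. */
		reloc_kernel_entry(dt_ptr, KERNELBASE + offset);
	}
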

Signed-off-by: Jason Yan <yanaijie@...wei.com>
Cc: Diana Craciun <diana.craciun@....com>
Cc: Michael Ellerman <mpe@...erman.id.au>
Cc: Christophe Leroy <christophe.leroy@....fr>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Paul Mackerras <paulus@...ba.org>
Cc: Nicholas Piggin <npiggin@...il.com>
Cc: Kees Cook <keescook@...omium.org>
Reviewed-by: Christophe Leroy <christophe.leroy@....fr>
---
 arch/powerpc/kernel/head_fsl_booke.S | 13 +++++++++++++
 arch/powerpc/mm/mmu_decl.h           |  1 +
 2 files changed, 14 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 04d124fee17d..2083382dd662 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1143,6 +1143,19 @@ _GLOBAL(create_tlb_entry)
 	sync
 	blr
 
+/*
+ * Return to the start of the relocated kernel and run again
+ * r3 - virtual address of fdt
+ * r4 - entry of the kernel
+ */
+_GLOBAL(reloc_kernel_entry)
+	mfmsr	r7
+	rlwinm	r7, r7, 0, ~(MSR_IS | MSR_DS)
+
+	mtspr	SPRN_SRR0,r4
+	mtspr	SPRN_SRR1,r7
+	rfi
+
 /*
  * Create a tlb entry with the same effective and physical address as
  * the tlb entry used by the current running code. But set the TS to 1.
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index a09f89d3aa0f..804da298beb3 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -143,6 +143,7 @@ extern void adjust_total_lowmem(void);
 extern int switch_to_as1(void);
 extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
 void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);
+void reloc_kernel_entry(void *fdt, int addr);
 #endif
 extern void loadcam_entry(unsigned int index);
 extern void loadcam_multi(int first_idx, int num, int tmp_idx);
-- 
2.17.2
