Date: Wed, 10 Jan 2018 18:03:01 -0800
From: Kees Cook <>
Cc: Kees Cook <>,
	David Windsor <>,
	Ingo Molnar <>,
	Andrew Morton <>,
	Thomas Gleixner <>,
	Andy Lutomirski <>,
	Linus Torvalds <>,
	Alexander Viro <>,
	Christoph Hellwig <>,
	Christoph Lameter <>,
	"David S. Miller" <>,
	Laura Abbott <>,
	Mark Rutland <>,
	"Martin K. Petersen" <>,
	Paolo Bonzini <>,
	Christian Borntraeger <>,
	Christoffer Dall <>,
	Dave Kleikamp <>,
	Jan Kara <>,
	Luis de Bethencourt <>,
	Marc Zyngier <>,
	Rik van Riel <>,
	Matthew Garrett <>
Subject: [PATCH 29/38] fork: Define usercopy region in mm_struct slab caches

From: David Windsor <>

In support of usercopy hardening, this patch defines a region in the
mm_struct slab caches in which userspace copy operations are allowed.
Only the auxv field is copied to userspace.

cache object allocation:
        #define allocate_mm()     (kmem_cache_alloc(mm_cachep, GFP_KERNEL))

            mm = allocate_mm();

example usage trace:

            elf_info = (elf_addr_t *)current->mm->saved_auxv;
            copy_to_user(..., elf_info, ei_index * sizeof(elf_addr_t))


This region is known as the slab cache's usercopy region. Slab caches
can now check that each dynamically sized copy operation involving
cache-managed memory falls entirely within the slab's usercopy region.
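To make the bounds check concrete, here is a hypothetical user-space sketch of the region test (the names `cache_region` and `usercopy_allowed` are illustrative, not the kernel's actual helpers; the real check lives in the hardened usercopy code and uses the offset/size pair passed to kmem_cache_create_usercopy()):

```c
#include <assert.h>
#include <stddef.h>

/* Mirror of the two extra arguments given to
 * kmem_cache_create_usercopy(): a byte offset and a size that
 * together delimit the only part of each cache object that may
 * be copied to/from userspace. */
struct cache_region {
	size_t useroffset;	/* start of usercopy region in the object */
	size_t usersize;	/* length of usercopy region */
};

/* Return 1 if [offset, offset + len) falls entirely inside the
 * cache's usercopy region, 0 otherwise -- the test a hardened
 * copy_to_user()/copy_from_user() would apply to cache-managed
 * memory before allowing the copy. */
static int usercopy_allowed(const struct cache_region *r,
			    size_t offset, size_t len)
{
	return offset >= r->useroffset &&
	       len <= r->usersize &&
	       offset - r->useroffset <= r->usersize - len;
}
```

For mm_struct the region is exactly the saved_auxv array, so a copy of any other field (or one spanning past saved_auxv) would be rejected.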

This patch is modified from Brad Spengler/PaX Team's PAX_USERCOPY
whitelisting code in the last public patch of grsecurity/PaX based on my
understanding of the code. Changes or omissions from the original code are
mine and don't reflect the original grsecurity/PaX code.

Signed-off-by: David Windsor <>
[kees: adjust commit log, split patch, provide usage trace]
Cc: Ingo Molnar <>
Cc: Andrew Morton <>
Cc: Thomas Gleixner <>
Cc: Andy Lutomirski <>
Signed-off-by: Kees Cook <>
Acked-by: Rik van Riel <>
---
 kernel/fork.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/fork.c b/kernel/fork.c
index 432eadf6b58c..82f2a0441d3b 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2225,9 +2225,11 @@ void __init proc_caches_init(void)
 	 * maximum number of CPU's we can ever have.  The cpumask_allocation
 	 * is at the end of the structure, exactly for that reason.
 	 */
-	mm_cachep = kmem_cache_create("mm_struct",
+	mm_cachep = kmem_cache_create_usercopy("mm_struct",
 			sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
+			offsetof(struct mm_struct, saved_auxv),
+			sizeof_field(struct mm_struct, saved_auxv),
 			NULL);
 	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
