Date: Tue,  3 May 2016 12:31:48 -0700
From: Thomas Garnier <>
To: "H . Peter Anvin" <>,
	Thomas Gleixner <>,
	Ingo Molnar <>,
	Borislav Petkov <>,
	Andy Lutomirski <>,
	Thomas Garnier <>,
	Dmitry Vyukov <>,
	Paolo Bonzini <>,
	Dan Williams <>,
	Kees Cook <>,
	Stephen Smalley <>,
	Kefeng Wang <>,
	Jonathan Corbet <>,
	Matt Fleming <>,
	Toshi Kani <>,
	Alexander Kuleshov <>,
	Alexander Popov <>,
	Joerg Roedel <>,
	Dave Young <>,
	Baoquan He <>,
	Dave Hansen <>,
	Mark Salter <>,
	Boris Ostrovsky <>
Subject: [PATCH v3 0/4] x86, boot: KASLR memory randomization

This is v3 of the KASLR memory randomization implementation for x86_64.

Recent changes:
    Add performance information on commit.
    Add details on PUD alignment.
    Add information on testing against the KASLR bypass exploit.
    Rebase on next-20160502.

The current implementation of KASLR randomizes only the base address of
the kernel and its modules. Research was published showing that static
memory can be overwritten to elevate privileges while bypassing KASLR.

In more detail:

   The physical memory mapping holds most allocations from the boot and
   heap allocators. Knowing the base address and physical memory size, an
   attacker can deduce the PDE virtual address for the vDSO memory page.
   This attack was demonstrated at CanSecWest 2016, in the "Getting
   Physical: Extreme Abuse of Intel Based Paging Systems" talk (see the
   second part of the presentation). The exploits used against Linux
   worked successfully against 4.6+ but fail with KASLR memory enabled.
   Similar research was done at Google, leading to this patch proposal.
   Variants exist that overwrite /proc or /sys object ACLs, leading to
   elevation of privileges. These variants were tested against 4.6+.

This set of patches randomizes the base address and padding of three
major memory sections (physical memory mapping, vmalloc & vmemmap).
It mitigates exploits relying on predictable kernel addresses. This
feature can be enabled with the CONFIG_RANDOMIZE_MEMORY option.

Padding for memory hotplug support is managed by a dedicated
configuration option (see Patch 04).
The patches were tested on qemu & physical machines. Xen compatibility was
also verified. Multiple reboots were used to verify entropy for each
memory section.

***Problems that needed solving:
 - The three target memory sections are never at the same place between
   boots.
 - The physical memory mapping can use a virtual address not aligned on
   the PGD page table.
 - Have good entropy early at boot before get_random_bytes is available.
 - Add optional padding for memory hotplug compatibility.

***Parts:
 - The first part prepares for the KASLR memory randomization by
   refactoring the entropy functions used by the current implementation
   and supporting PUD-level virtual addresses for the physical mapping.
   (Patches 01-02)
 - The second part implements the KASLR memory randomization for all
   sections mentioned.
   (Patch 03)
 - The third part adds support for memory hotplug by adding an option to
   define the padding used between the physical memory mapping section
   and the others.
   (Patch 04)

Performance data:

Kernbench shows almost no difference (less than 1% either way):


Before:
Average Optimal load -j 12 Run (std deviation):
Elapsed Time 102.63 (1.2695)
User Time 1034.89 (1.18115)
System Time 87.056 (0.456416)
Percent CPU 1092.9 (13.892)
Context Switches 199805 (3455.33)
Sleeps 97907.8 (900.636)


After:
Average Optimal load -j 12 Run (std deviation):
Elapsed Time 102.489 (1.10636)
User Time 1034.86 (1.36053)
System Time 87.764 (0.49345)
Percent CPU 1095 (12.7715)
Context Switches 199036 (4298.1)
Sleeps 97681.6 (1031.11)

Hackbench shows 0% difference on average (hackbench 90
repeated 10 times).


