Date: Tue, 2 Aug 2016 22:47:27 +0200
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Thomas Garnier <thgarnie@...gle.com>
Cc: "Rafael J. Wysocki" <rjw@...ysocki.net>, Thomas Gleixner <tglx@...utronix.de>, 
	Ingo Molnar <mingo@...hat.com>, "H. Peter Anvin" <hpa@...or.com>, Kees Cook <keescook@...omium.org>, 
	Yinghai Lu <yinghai@...nel.org>, Pavel Machek <pavel@....cz>, 
	"the arch/x86 maintainers" <x86@...nel.org>, LKML <linux-kernel@...r.kernel.org>, 
	Linux PM list <linux-pm@...r.kernel.org>, kernel-hardening@...ts.openwall.com
Subject: Re: [PATCH v1 2/2] x86/power/64: Fix __PAGE_OFFSET usage on restore

On Tue, Aug 2, 2016 at 4:34 PM, Thomas Garnier <thgarnie@...gle.com> wrote:
> On Mon, Aug 1, 2016 at 5:38 PM, Rafael J. Wysocki <rjw@...ysocki.net> wrote:
>> On Monday, August 01, 2016 10:08:00 AM Thomas Garnier wrote:
>>> When KASLR memory randomization is used, __PAGE_OFFSET is a global
>>> variable changed during boot. The assembly code was using the variable
>>> as an immediate value to calculate the cr3 physical address. The
>>> physical address was incorrect, resulting in a GP fault.
>>>
>>> Signed-off-by: Thomas Garnier <thgarnie@...gle.com>
>>> ---
>>>  arch/x86/power/hibernate_asm_64.S | 12 +++++++++++-
>>>  1 file changed, 11 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/power/hibernate_asm_64.S b/arch/x86/power/hibernate_asm_64.S
>>> index 8eee0e9..8db4905 100644
>>> --- a/arch/x86/power/hibernate_asm_64.S
>>> +++ b/arch/x86/power/hibernate_asm_64.S
>>> @@ -23,6 +23,16 @@
>>>  #include <asm/processor-flags.h>
>>>  #include <asm/frame.h>
>>>
>>> +/*
>>> + * A global variable holds the page_offset when KASLR memory randomization
>>> + * is enabled.
>>> + */
>>> +#ifdef CONFIG_RANDOMIZE_MEMORY
>>> +#define __PAGE_OFFSET_REF __PAGE_OFFSET
>>> +#else
>>> +#define __PAGE_OFFSET_REF $__PAGE_OFFSET
>>> +#endif
>>> +
>>>  ENTRY(swsusp_arch_suspend)
>>>       movq    $saved_context, %rax
>>>       movq    %rsp, pt_regs_sp(%rax)
>>> @@ -72,7 +82,7 @@ ENTRY(restore_image)
>>>       /* code below has been relocated to a safe page */
>>>  ENTRY(core_restore_code)
>>>       /* switch to temporary page tables */
>>> -     movq    $__PAGE_OFFSET, %rcx
>>> +     movq    __PAGE_OFFSET_REF, %rcx
>>>       subq    %rcx, %rax
>>>       movq    %rax, %cr3
>>>       /* flush TLB */
>>>
>>
>> I don't particularly like the #ifdefs, and they won't really be
>> necessary if the subtraction is carried out by the C code IMO.
>>
>> What about the patch below instead?
>>
>
> Yes, I think that's a good idea. I will test it and send PATCH v2.

No need to send this patch again.  Please just let me know if it works
for you. :-)

Thanks,
Rafael
