Date: Fri, 19 Jan 2018 09:42:59 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Dan Williams <dan.j.williams@...el.com>, linux-kernel@...r.kernel.org
Cc: linux-arch@...r.kernel.org, kernel-hardening@...ts.openwall.com,
 Andrew Honig <ahonig@...gle.com>, stable@...r.kernel.org,
 gregkh@...uxfoundation.org, tglx@...utronix.de, alan@...ux.intel.com,
 torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
 Jim Mattson <jmattson@...gle.com>
Subject: Re: [PATCH v4 09/10] kvm, x86: fix spectre-v1 mitigation

On 19/01/2018 01:02, Dan Williams wrote:
> Commit 75f139aaf896 "KVM: x86: Add memory barrier on vmcs field lookup"
> added a raw 'asm("lfence");' to prevent a bounds check bypass of
> 'vmcs_field_to_offset_table'. This does not work for some AMD CPUs
> (see the 'ifence' helper),

The code never runs on AMD CPUs (it's for Intel virtualization
extensions), so it'd be nice if you could fix up the commit message.

Apart from this, obviously

Acked-by: Paolo Bonzini <pbonzini@...hat.com>

Thanks!

Paolo

> and it otherwise does not use the common
> 'array_ptr' helper designed for these types of fixes. Convert this to
> use 'array_ptr'.
> 
> Cc: Andrew Honig <ahonig@...gle.com>
> Cc: Jim Mattson <jmattson@...gle.com>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: <stable@...r.kernel.org>
> Signed-off-by: Dan Williams <dan.j.williams@...el.com>
> ---
>  arch/x86/kvm/vmx.c |   19 +++++++------------
>  1 file changed, 7 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index c829d89e2e63..20b9b0b5e336 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -34,6 +34,7 @@
>  #include <linux/tboot.h>
>  #include <linux/hrtimer.h>
>  #include <linux/frame.h>
> +#include <linux/nospec.h>
>  #include "kvm_cache_regs.h"
>  #include "x86.h"
>  
> @@ -898,21 +899,15 @@ static const unsigned short vmcs_field_to_offset_table[] = {
>  
>  static inline short vmcs_field_to_offset(unsigned long field)
>  {
> -	BUILD_BUG_ON(ARRAY_SIZE(vmcs_field_to_offset_table) > SHRT_MAX);
> -
> -	if (field >= ARRAY_SIZE(vmcs_field_to_offset_table))
> -		return -ENOENT;
> +	const unsigned short *offset;
>  
> -	/*
> -	 * FIXME: Mitigation for CVE-2017-5753.  To be replaced with a
> -	 * generic mechanism.
> -	 */
> -	asm("lfence");
> +	BUILD_BUG_ON(ARRAY_SIZE(vmcs_field_to_offset_table) > SHRT_MAX);
>  
> -	if (vmcs_field_to_offset_table[field] == 0)
> +	offset = array_ptr(vmcs_field_to_offset_table, field,
> +			ARRAY_SIZE(vmcs_field_to_offset_table));
> +	if (!offset || *offset == 0)
>  		return -ENOENT;
> -
> -	return vmcs_field_to_offset_table[field];
> +	return *offset;
>  }
>  
>  static inline struct vmcs12 *get_vmcs12(struct kvm_vcpu *vcpu)
> 
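
[Editor's note: the 'array_ptr' helper referenced above is defined elsewhere
in this patch series and is not reproduced in this mail, so here is a rough,
self-contained sketch of the masking idea it relies on (the same idea later
landed upstream as array_index_nospec()). This is illustrative only, not the
kernel's actual implementation, and the names 'bounds_mask' and 'table_ptr'
are invented for the example. The point is that the bounds check becomes a
data dependency rather than a branch: a mask that is all-ones for an in-bounds
index and all-zeros otherwise clamps both the index and the returned pointer,
so a mis-speculated out-of-bounds access cannot reach attacker-controlled
offsets.]

#include <stdio.h>
#include <stdint.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(long) * CHAR_BIT)

/*
 * ~0UL when idx < size, 0UL otherwise, computed without a branch so the
 * bounds check is a data dependency the CPU cannot speculate past.
 * Assumes size is nonzero and far below LONG_MAX, and relies on signed
 * right shift being an arithmetic shift (as GCC and Clang implement it).
 */
static unsigned long bounds_mask(unsigned long idx, unsigned long size)
{
	return ~(unsigned long)((long)(idx | (size - 1 - idx)) >> (BITS_PER_LONG - 1));
}

/*
 * Return a pointer to table[idx], or NULL when idx is out of bounds; a
 * mis-speculated out-of-bounds idx is clamped to index 0 and the pointer
 * itself is zeroed.
 */
static const unsigned short *table_ptr(const unsigned short *table,
				       unsigned long idx, unsigned long size)
{
	unsigned long mask = bounds_mask(idx, size);
	const unsigned short *p = &table[idx & mask];

	return (const unsigned short *)((uintptr_t)p & mask);
}

int main(void)
{
	static const unsigned short offsets[] = { 0, 16, 32, 48 };
	const unsigned short *ok  = table_ptr(offsets, 2, 4);	/* &offsets[2] */
	const unsigned short *bad = table_ptr(offsets, 9, 4);	/* NULL */

	printf("in bounds: %u, out of bounds: %p\n", ok ? *ok : 0, (const void *)bad);
	return 0;
}

With the mask in hand the lookup path needs no 'lfence'; the out-of-bounds
case simply cannot produce a usable pointer, speculatively or otherwise.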
