Date: Fri, 12 Oct 2018 01:43:21 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Kristen C Accardi <kristen@...ux.intel.com>
cc: Kees Cook <keescook@...omium.org>, Andy Lutomirski <luto@...nel.org>, 
    Kernel Hardening <kernel-hardening@...ts.openwall.com>, 
    Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, 
    "H. Peter Anvin" <hpa@...or.com>, X86 ML <x86@...nel.org>, 
    LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] x86: entry: flush the cache if syscall error

On Thu, 11 Oct 2018, Kristen C Accardi wrote:
> On Thu, 2018-10-11 at 13:55 -0700, Kees Cook wrote:
> > I think this looks like a good idea. It might be worth adding a
> > comment about the checks to explain why those errors are whitelisted.
> > It's a cheap and effective mitigation for "unknown future problems"
> > that doesn't degrade normal workloads.
> > 
> > > > +                       return;
> > > > +
> > > > +               wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
> > 
> > What about CPUs without FLUSH_L1D? Could it be done manually with a
> > memcpy or something?
> 
> It could - my original implementation (pre-L1D_FLUSH MSR) did, but it
> came with some additional cost: I allocated per-cpu memory to keep a
> 32K buffer around that I could memcpy over. It also sacrificed
> completeness for simplicity by not accounting for cases where the L1
> was not 32K. As far as I know this MSR is pretty widely deployed, even
> on older hardware.
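
[Editor's note: for illustration, the software fallback Kristen
describes amounts to sweeping a buffer the size of the L1D so that the
fill displaces whatever the cache currently holds. The sketch below is
not her original patch: the function name, the explicit buffer/size
parameters, and the 64-byte line stride are all assumptions.]

#include <linux/compiler.h>
#include <linux/types.h>

/*
 * Sketch only: stream over a buffer sized to the L1D so the fill
 * evicts its current contents. The caller supplies the buffer
 * (e.g. the per-cpu 32K allocation described above); the 64-byte
 * cache line size is an assumption.
 */
static void l1d_flush_sw_fallback(u8 *buf, unsigned int size)
{
	unsigned int i;

	/* Touch one byte per assumed cache line to force a fill. */
	for (i = 0; i < size; i += 64)
		WRITE_ONCE(buf[i], (u8)i);
}

[As she notes, this trades completeness for simplicity: if the actual
L1D is larger than the buffer, the sweep only partially evicts it.]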

You don't need that per cpu thing, really. We have both the MSR flush and
the software fallback in KVM's vmx_l1d_flush(). The performance impact of
the two is pretty much the same.
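
[Editor's note: the two-path shape tglx points at could be sketched as
follows. X86_FEATURE_FLUSH_L1D, MSR_IA32_FLUSH_CMD, and L1D_FLUSH are
the real kernel identifiers used in the quoted patch; the fallback
helper and its parameters come from the hypothetical sketch above, and
this is not the actual vmx_l1d_flush() code.]

#include <asm/cpufeature.h>
#include <asm/msr.h>

static void l1d_flush(u8 *fallback_buf, unsigned int l1d_size)
{
	/* Preferred path: one MSR write invalidates the whole L1D. */
	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
		return;
	}

	/* No FLUSH_L1D enumeration: fall back to the buffer sweep. */
	l1d_flush_sw_fallback(fallback_buf, l1d_size);
}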

Vs. deployment of the MSR: I'm not sure how widely it is supported on
older CPUs, as there have been limitations on microcode patch space, but I
can't find the support matrix right now. Thanks Intel for reshuffling the
webpage every other week!

Thanks,

	tglx

