Message-ID: <20190617184536.GB11017@char.us.oracle.com>
Date: Mon, 17 Jun 2019 14:45:36 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Nadav Amit <nadav.amit@...il.com>, Andy Lutomirski <luto@...nel.org>,
        Alexander Graf <graf@...zon.com>, Thomas Gleixner <tglx@...utronix.de>,
        Marius Hillenbrand <mhillenb@...zon.de>,
        kvm list <kvm@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
        Kernel Hardening <kernel-hardening@...ts.openwall.com>,
        Linux-MM <linux-mm@...ck.org>, Alexander Graf <graf@...zon.de>,
        David Woodhouse <dwmw@...zon.co.uk>,
        the arch/x86 maintainers <x86@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>
Subject: Re: [RFC 00/10] Process-local memory allocations for hiding KVM
 secrets

On Mon, Jun 17, 2019 at 11:07:45AM -0700, Dave Hansen wrote:
> On 6/17/19 9:53 AM, Nadav Amit wrote:
> >>> For anyone following along at home, I'm going to go off into crazy
> >>> per-cpu-pgds speculation mode now...  Feel free to stop reading now. :)
> >>>
> >>> But, I was thinking we could get away with not doing this on _every_
> >>> context switch at least.  For instance, couldn't 'struct tlb_context'
> >>> have a PGD pointer (or two with PTI) in addition to the TLB info?  That
> >>> way we only do the copying when we change the context.  Or does that tie
> >>> the implementation up too much with PCIDs?
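
(A minimal sketch of that idea, assuming the current ctx_id/tlb_gen layout
of 'struct tlb_context'; the extra fields and the copy helper below are
hypothetical, not existing kernel code:)

struct tlb_context {
	u64	ctx_id;
	u64	tlb_gen;
	pgd_t	*pgd;		/* PGD last synced for this context */
	pgd_t	*user_pgd;	/* second PGD when PTI is enabled */
};

/* Re-copy the CPU-local PGD entries only when the context changes. */
static void sync_local_pgd(struct tlb_context *ctx, pgd_t *next_pgd)
{
	if (ctx->pgd != next_pgd) {
		copy_cpu_local_pgd_entries(next_pgd);	/* hypothetical helper */
		ctx->pgd = next_pgd;
	}
}
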
> >> Hmm, that seems entirely reasonable.  I think the nasty bit would be
> >> figuring out all the interactions with PV TLB flushing.  PV TLB
> >> flushes already don't play so well with PCID tracking, and this will
> >> make it worse.  We probably need to rewrite all that code regardless.
> > How is PCID (as you implemented it) related to TLB flushing of kernel (not
> > user) PTEs? These kernel PTEs would be global, so they would be invalidated
> > from all the address-spaces using INVLPG, I presume. No?
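
(A rough sketch of that presumption -- INVLPG on a CPU drops the cached
translation for every address space/PCID on that CPU, global entries
included, while other CPUs are reached by IPI; the names below are
illustrative, not the exact kernel implementation:)

static void do_flush_one(void *info)
{
	unsigned long addr = (unsigned long)info;

	/* Drops the translation for all PCIDs on this CPU, global or not. */
	asm volatile("invlpg (%0)" :: "r" (addr) : "memory");
}

static void flush_kernel_pte_everywhere(unsigned long addr)
{
	/* IPI every CPU and run the INVLPG there. */
	on_each_cpu(do_flush_one, (void *)addr, 1);
}
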
> 
> The idea is that you have a per-cpu address space.  Certain kernel
> virtual addresses would map to a different physical address depending on
> which CPU you are running on.  Each of those physical addresses would be
> "owned" by a single CPU and, by convention, a PGD that mapped such an
> address would never be used except on the CPU that "owned" it.
> 
> In that case, you never really invalidate those addresses.
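
(Roughly, and purely as an illustration of that convention -- none of these
symbols exist today:)

/* One PGD per CPU, mapping the CPU-local area to that CPU's own pages. */
static pgd_t *percpu_pgd[NR_CPUS];

static void switch_to_local_pgd(void)
{
	int cpu = smp_processor_id();

	/* By convention, only ever load the PGD "owned" by this CPU. */
	write_cr3(__pa(percpu_pgd[cpu]));
}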

But you would need to invalidate if the process moved to another CPU, correct?
