Date: Thu, 14 Feb 2019 12:57:30 -0700
From: Khalid Aziz <khalid.aziz@...cle.com>
To: Dave Hansen <dave.hansen@...el.com>, juergh@...il.com, tycho@...ho.ws,
        jsteckli@...zon.de, ak@...ux.intel.com, torvalds@...ux-foundation.org,
        liran.alon@...cle.com, keescook@...gle.com, akpm@...ux-foundation.org,
        mhocko@...e.com, catalin.marinas@....com, will.deacon@....com,
        jmorris@...ei.org, konrad.wilk@...cle.com
Cc: deepa.srinivasan@...cle.com, chris.hyser@...cle.com, tyhicks@...onical.com,
        dwmw@...zon.co.uk, andrew.cooper3@...rix.com, jcm@...hat.com,
        boris.ostrovsky@...cle.com, kanth.ghatraju@...cle.com,
        joao.m.martins@...cle.com, jmattson@...gle.com,
        pradeep.vincent@...cle.com, john.haxby@...cle.com, tglx@...utronix.de,
        kirill.shutemov@...ux.intel.com, hch@....de, steven.sistare@...cle.com,
        labbott@...hat.com, luto@...nel.org, peterz@...radead.org,
        kernel-hardening@...ts.openwall.com, linux-mm@...ck.org,
        x86@...nel.org, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v8 13/14] xpfo, mm: Defer TLB flushes for non-current
 CPUs (x86 only)

On 2/14/19 10:42 AM, Dave Hansen wrote:
>>  #endif
>> +
>> +	/* If there is a pending TLB flush for this CPU due to XPFO
>> +	 * flush, do it now.
>> +	 */
> 
> Don't forget CodingStyle in all this, please.

Of course. I will fix that.
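For reference, I will reflow it to the usual kernel multi-line comment
style, i.e. something like:

	/*
	 * If there is a pending TLB flush for this CPU due to XPFO
	 * flush, do it now.
	 */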

> 
>> +	if (cpumask_test_and_clear_cpu(cpu, &pending_xpfo_flush)) {
>> +		count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
>> +		__flush_tlb_all();
>> +	}
> 
> This seems to exist in parallel with all of the cpu_tlbstate
> infrastructure.  Shouldn't it go in there?

That sounds like a good idea. On the other hand, the pending flush only
needs to be tracked within arch/x86/mm/tlb.c, and a variable whose scope
is limited to just that file feels like a lighter-weight implementation.
I could go either way.
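If it did go into cpu_tlbstate, I imagine it would look roughly like the
sketch below. This is only for illustration, not what the posted patch
does: the xpfo_flush_pending field name is made up, and I am glossing
over the memory-ordering/atomicity that the cpumask version gets from
its atomic bit ops.

	/* Hypothetical per-CPU flag hanging off cpu_tlbstate */
	struct tlb_state {
		/* ... existing fields ... */
		bool xpfo_flush_pending;
	};

	/* xpfo_flush_tlb_kernel_range() would mark every other CPU: */
	for_each_online_cpu(cpu)
		if (cpu != smp_processor_id())
			per_cpu(cpu_tlbstate.xpfo_flush_pending, cpu) = true;

	/* and the context switch path would pick it up on that CPU: */
	if (this_cpu_read(cpu_tlbstate.xpfo_flush_pending)) {
		this_cpu_write(cpu_tlbstate.xpfo_flush_pending, false);
		count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
		__flush_tlb_all();
	}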

> 
> Also, if we're doing full flushes like this, it seems a bit wasteful to
> then go and do later things like invalidate_user_asid() when we *know*
> that the asid would have been flushed by this operation.  I'm pretty
> sure this isn't the only __flush_tlb_all() callsite that does this, so
> it's not really criticism of this patch specifically.  It's more of a
> structural issue.
> 
> 

That is a good point. It is not just wasteful; it is bound to have a
performance impact, even if a slight one.

>> +void xpfo_flush_tlb_kernel_range(unsigned long start, unsigned long end)
>> +{
> 
> This is a bit lightly commented.  Please give this some good
> descriptions about the logic behind the implementation and the tradeoffs
> that are in play.
> 
> This is doing a local flush, but deferring the flushes on all other
> processors, right?  Can you explain the logic behind that in a comment
> here, please?  This also has to be called with preemption disabled, right?
> 
>> +	struct cpumask tmp_mask;
>> +
>> +	/* Balance as user space task's flush, a bit conservative */
>> +	if (end == TLB_FLUSH_ALL ||
>> +	    (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
>> +		do_flush_tlb_all(NULL);
>> +	} else {
>> +		struct flush_tlb_info info;
>> +
>> +		info.start = start;
>> +		info.end = end;
>> +		do_kernel_range_flush(&info);
>> +	}
>> +	cpumask_setall(&tmp_mask);
>> +	cpumask_clear_cpu(smp_processor_id(), &tmp_mask);
>> +	cpumask_or(&pending_xpfo_flush, &pending_xpfo_flush, &tmp_mask);
>> +}
> 
> Fun.  cpumask_setall() is non-atomic while cpumask_clear_cpu() and
> cpumask_or() *are* atomic.  The cpumask_clear_cpu() is operating on
> thread-local storage and doesn't need to be atomic.  Please make it
> __cpumask_clear_cpu().
> 

I will fix that. Thanks!
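Since tmp_mask is an on-stack local, only the final cpumask_or() needs
to be atomic, so the end of xpfo_flush_tlb_kernel_range() would become
something like:

	cpumask_setall(&tmp_mask);
	__cpumask_clear_cpu(smp_processor_id(), &tmp_mask);
	cpumask_or(&pending_xpfo_flush, &pending_xpfo_flush, &tmp_mask);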

--
Khalid
