Date: Wed, 21 Nov 2018 18:34:42 +0200
From: Igor Stoppa <>
To: Andy Lutomirski <>, Igor Stoppa <>
Cc: Nadav Amit <>, Kees Cook <>,
 Peter Zijlstra <>, Mimi Zohar
 <>, Matthew Wilcox <>,
 Dave Chinner <>, James Morris <>,
 Michal Hocko <>,
 Kernel Hardening <>,
 linux-integrity <>,
 LSM List <>,
 Dave Hansen <>, Jonathan Corbet <>,
 Laura Abbott <>, Randy Dunlap <>,
 Mike Rapoport <>,
 "open list:DOCUMENTATION" <>,
 LKML <>, Thomas Gleixner <>
Subject: Re: [PATCH 10/17] prmem: documentation


On 13/11/2018 20:36, Andy Lutomirski wrote:
> On Tue, Nov 13, 2018 at 10:33 AM Igor Stoppa <> wrote:
>> I forgot one sentence :-(
>> On 13/11/2018 20:31, Igor Stoppa wrote:
>>> On 13/11/2018 19:47, Andy Lutomirski wrote:
>>>> For general rare-writish stuff, I don't think we want IRQs running
>>>> with them mapped anywhere for write.  For AVC and IMA, I'm less sure.
>>> Why would these be less sensitive?
>>> But I see a big difference between my initial implementation and this one.
>>> In my case, by using a shared mapping, visible to all cores, freezing
>>> the core that is performing the write would have exposed the writable
>>> mapping to a potential attack run from another core.
>>> If the mapping is private to the core performing the write, even if it
>>> is frozen, it's much harder to figure out what it had mapped and where,
>>> from another core.
>>> To access that mapping, the attack should be performed from the ISR, I
>>> think.
>> Unless the secondary mapping is also available to other cores, through
>> the shared mm_struct ?
> I don't think this matters much.  The other cores will only be able to
> use that mapping when they're doing a rare write.

I'm still mulling over this.
There might be other reasons for replicating the mm_struct.

If I understand correctly how text patching works, it happens
sequentially, because of the text_mutex taken by arch_jump_label_transform().
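To illustrate what I mean by "sequentially", here is a userspace analogue of the pattern, with a pthread mutex standing in for text_mutex (the names and the byte-array "text" are mine, not the kernel's):

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical analogue: like text_mutex, one global lock forces every
 * "patching" operation to run sequentially, even when the operations
 * touch unrelated locations. */
static pthread_mutex_t patch_mutex = PTHREAD_MUTEX_INITIALIZER;
static unsigned char text[64];

static void patch_byte(size_t off, unsigned char val)
{
	pthread_mutex_lock(&patch_mutex);	/* global: serializes all callers */
	text[off] = val;			/* the actual "patch" */
	pthread_mutex_unlock(&patch_mutex);
}
```

The point being that two callers patching offsets 3 and 40 still contend on the same mutex.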

Which might be fine for this specific case, but I don't think I should
introduce a global mutex when it comes to data.
Most likely, if two or more cores want to perform a write-rare
operation, there is no correlation between them, so they could proceed
in parallel. And if there really is a correlation, then the user of the
API should introduce its own locking, for that specific case.
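Something like this sketch of the locking split I have in mind (all names hypothetical; plain memcpy stands in for the real write-rare machinery, which would go through a temporary writable alias of the protected page):

```c
#include <pthread.h>
#include <string.h>

/* Hypothetical write-rare primitive: no global lock of its own, so
 * uncorrelated writers on different cores can proceed in parallel. */
static void wr_memcpy(void *dst, const void *src, size_t n)
{
	/* Stand-in for remapping the target writable and copying. */
	memcpy(dst, src, n);
}

/* A user whose two fields must stay consistent supplies its OWN lock;
 * the API itself imposes none. */
struct counters {
	pthread_mutex_t lock;	/* caller-owned, only for correlated updates */
	long a, b;
};

static void update_pair(struct counters *c, long a, long b)
{
	pthread_mutex_lock(&c->lock);	/* correlation -> user's own locking */
	wr_memcpy(&c->a, &a, sizeof(a));
	wr_memcpy(&c->b, &b, sizeof(b));
	pthread_mutex_unlock(&c->lock);
}
```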

A somewhat unrelated question, still about text patching: I see that
each patching operation is validated individually, but wouldn't it be
more robust to first validate all of them, and only once they are all
found to be compliant, proceed with the actual modifications?
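What I mean is a two-pass, all-or-nothing scheme along these lines (illustrative userspace C, with a made-up validation rule; none of these names are the kernel's):

```c
#include <stddef.h>

/* Hypothetical batch of patches to apply atomically as a group. */
struct patch {
	size_t off;
	unsigned char val;
};

#define TEXT_SIZE 64
static unsigned char text[TEXT_SIZE];

static int patch_valid(const struct patch *p)
{
	return p->off < TEXT_SIZE;	/* stand-in compliance check */
}

/* Returns 0 and applies every patch, or -1 and applies none. */
static int apply_batch(const struct patch *ps, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)		/* pass 1: validate everything */
		if (!patch_valid(&ps[i]))
			return -1;
	for (i = 0; i < n; i++)		/* pass 2: modify, knowing all passed */
		text[ps[i].off] = ps[i].val;
	return 0;
}
```

That way one bad entry cannot leave the batch half-applied.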

And about the actual implementation of write-rare for statically
allocated variables: is it expected that I use Nadav's function, or
that I refactor the code?

The name, referring to text, would definitely not be ok for data.
And I would also have to generalize it, to deal with larger amounts of data.

I would find it easier, as a first cut, to replicate its behavior and
refactor only later, once it has stabilized and Nadav's patches have
possibly been acked.

