Date: Sat, 3 Feb 2018 17:29:22 -0500
From: Boris Lukashev <>
To: Igor Stoppa <>
Cc: Christopher Lameter <>, Matthew Wilcox <>, Jann Horn <>, 
	Jerome Glisse <>, Kees Cook <>, 
	Michal Hocko <>, Laura Abbott <>, 
	Christoph Hellwig <>, 
	linux-security-module <>, Linux-MM <>, 
	kernel list <>, 
	Kernel Hardening <>
Subject: Re: [PATCH 4/6] Protectable Memory

On Sat, Feb 3, 2018 at 3:32 PM, Igor Stoppa <> wrote:
> On 03/02/18 22:12, Boris Lukashev wrote:
>> Regarding the notion of validated protected memory, is there a method
>> by which the resulting checksum could be used in a lookup
>> table/function to resolve the location of the protected data?
> What I have in mind is a checksum at page/vmap_area level, so there
> would be no 1:1 mapping between a specific allocation and the checksum.
> An extreme case would be the one where an allocation crosses one or more
> page boundaries, while the checksum refers to a (partially) overlapping
> memory area.
> Code accessing a pool could perform one (relatively expensive)
> validation, but it would still take a more sophisticated attack to
> subvert.
>> Effectively a hash table of protected allocations, with a benefit of
>> dedup since any data matching the same key would be the same data
>> (multiple identical cred structs being pushed around). Should leave
>> the resolver address/csum in recent memory to check against, right?
> I see where you are trying to land, but I do not see how it would work
> without a further intermediate step.
> pmalloc dishes out virtual memory addresses, when called.
> It doesn't know what the user of the allocation will put in it.
> The user, otoh, has the direct address of the memory it got.
> What you are suggesting, if I have understood it correctly, is that
> when the pool is protected, the addresses already given out become
> traps that get resolved through a lookup table built from the
> content of each allocation.
> That seems to generate a lot of overhead, not to mention the fact
> that it might not play very well with the MMU.

That is effectively what I'm suggesting - a form of protection for
consumers against direct reads of data which may have been corrupted,
by whatever means. In the context of pmalloc, it would probably be a
separate type of ro+verified pool which consumers would explicitly
opt into. Say there's a maintenance cycle on a <name some scary thing
controlled by Linux> and it wants to make sure that the instructions
it read in are what they should have been before running them; those
consumers might well take the penalty if it keeps <said scary big
thing> from doing <the thing we're scared of it doing>.
If such a resolver could be implemented in a manner which doesn't
break all the things (including acceptable performance for at least a
significant number of workloads), it might be useful as a general
tool for handing out memory to userspace, even in rw, as it provides
an execution context in which other requirements can be forcibly
resolved, preventing unauthorized access to pages the consumer
shouldn't get in a very generic way. Spectre comes to mind as a
potential class of issues to be addressed this way, since a
speculative load could be prevented if the resolution were to fail.

> If I misunderstood, then I'd need a step by step description of what
> happens, because it's not clear to me how else the data would be
> accessed if not through the address that was obtained when pmalloc was
> invoked.
> --
> igor

Boris Lukashev
Systems Architect
Semper Victus
