Date: Fri, 04 May 2018 07:42:52 +0000
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Matthew Wilcox <willy@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Matthew Wilcox <mawilcox@...rosoft.com>, 
	linux-mm <linux-mm@...ck.org>, Kees Cook <keescook@...omium.org>, 
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>, 
	Kernel Hardening <kernel-hardening@...ts.openwall.com>
Subject: Re: [PATCH 2/2] mm: Add kvmalloc_ab_c and kvzalloc_struct

On Wed, Feb 14, 2018 at 8:27 AM Matthew Wilcox <willy@...radead.org> wrote:
> +static inline __must_check
> +void *kvmalloc_ab_c(size_t n, size_t size, size_t c, gfp_t gfp)
> +{
> +       if (size != 0 && n > (SIZE_MAX - c) / size)
> +               return NULL;
> +
> +       return kvmalloc(n * size + c, gfp);

Ok, so some more bikeshedding:

  - I really don't want to encourage people to use kvmalloc().

In fact, the conversion I saw was buggy. You can *not* convert a GFP_ATOMIC
user of kmalloc() to kvmalloc(): kvmalloc() can fall back to vmalloc(),
which has to sleep (see the sketch below).

  - that divide is really really expensive on many architectures.

Note that these issues kind of go hand in hand.
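
To make that first point concrete, here's a minimal sketch (the
grab_buffer_*() helpers are made-up names for illustration; kmalloc()
and kvmalloc() are the real APIs):

      #include <linux/slab.h>   /* kmalloc() */
      #include <linux/mm.h>     /* kvmalloc() */

      static void *grab_buffer_ok(size_t len)
      {
            /* fine: kmalloc() is usable from atomic context */
            return kmalloc(len, GFP_ATOMIC);
      }

      static void *grab_buffer_buggy(size_t len)
      {
            /*
             * BUGGY: kvmalloc() can fall back to vmalloc(), which
             * has to sleep, so it only supports GFP_KERNEL-compatible
             * flags and cannot honor GFP_ATOMIC.
             */
            return kvmalloc(len, GFP_ATOMIC);
      }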

Normal kernel allocations are *limited*. It's simply not ok to allocate
megabytes (or gigabytes) of memory in general. We have serious limits, and
we *should* have serious limits. If people worry about the multiply
overflowing because a user is controlling the size values, then dammit, such
a user should not be able to do a huge gigabyte vmalloc() that exhausts
memory and then spends time clearing it all!

So the whole notion that "overflows are dangerous, let's get rid of them"
somehow fixes a bug is BULLSHIT. You literally introduced a *new* bug by
removing the normal kmalloc() size limit because you thought that overflows
are the only problem.

So stop doing things like this. We should not do a stupid divide, because
we damn well know that it is NOT VALID to allocate arrays that have
hundreds of thousands of elements, or where the size of one element is
very big.

So get rid of the stupid divide, and make the limits be much stricter. Like
saying "the array element size had better be smaller than one page"
(because honestly, bigger elements are not valid in the kernel), and "the
size of the array cannot be more than 'pick-some-number-out-of-your-ass'".

So just make the divide go the hell away, and check the size for validity.

Something like

      if (size > PAGE_SIZE)
           return NULL;
      if (elem > 65535)
           return NULL;
      if (offset > PAGE_SIZE)
           return NULL;
      return kzalloc(size * elem + offset, gfp);

and now you (a) guarantee it can't overflow and (b) don't make people use
crazy vmalloc() allocations when they damn well shouldn't.
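
Wrapped up as a helper, that's something like the sketch below
(kzalloc_ab_c is a made-up name, 65535 is exactly the
pick-a-number limit, and the n/size/c parameters mirror the patch
quoted above):

      static inline __must_check
      void *kzalloc_ab_c(size_t n, size_t size, size_t c, gfp_t gfp)
      {
            /*
             * Strict limits instead of a divide: element size and
             * offset are capped at one page, the element count at
             * 65535, so n * size + c can't overflow size_t on any
             * sane page size (numbers worked out below).
             */
            if (size > PAGE_SIZE)
                  return NULL;
            if (n > 65535)
                  return NULL;
            if (c > PAGE_SIZE)
                  return NULL;
            return kzalloc(n * size + c, gfp);
      }

A caller allocating a header plus n fixed-size entries would then do
kzalloc_ab_c(n, sizeof(struct entry), sizeof(struct header), GFP_KERNEL),
with both struct names obviously just placeholders.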

And yeah, if somebody has a page size bigger than 64k, then the above can
still overflow. I'm sorry, that architecture is broken shit.
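
(To put numbers on it: with a hypothetical 128k page, size = 131072 and
elem = 65535 already gives 131072 * 65535, which is about 2^33 and does
not fit in a 32-bit size_t. With 4k pages the worst case is
4096 * 65535 + 4096 = 2^28, nowhere close to overflow.)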

Are there cases where vmalloc() is ok? Yes. But they should be rare, and
they should have a good reason for them. And honestly, even then the above
limits really really sound quite reasonable. There is no excuse for
million-entry arrays in the kernel. You are doing something seriously wrong
if you do those.

             Linus
