Date: Thu, 21 Jul 2011 11:21:07 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Solar Designer <solar@...nwall.com>
Cc: NeilBrown <neilb@...e.de>, Stephen Smalley <sds@...ho.nsa.gov>,
        Vasiliy Kulikov <segoon@...nwall.com>,
        kernel-hardening@...ts.openwall.com, James Morris <jmorris@...ei.org>,
        linux-kernel@...r.kernel.org, Greg Kroah-Hartman <gregkh@...e.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "David S. Miller" <davem@...emloft.net>, Jiri Slaby <jslaby@...e.cz>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        linux-fsdevel@...r.kernel.org,
        KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
        Eric Paris <eparis@...hat.com>, Willy Tarreau <w@....eu>,
        Sebastian Krahmer <krahmer@...e.de>
Subject: Re: [PATCH] move RLIMIT_NPROC check from
 set_user() to do_execve_common()

On Thu, Jul 21, 2011 at 5:48 AM, Solar Designer <solar@...nwall.com> wrote:
>
> Maybe, and if so I think that one I proposed above falls in this
> category as well, but it closes more vulnerabilities (and/or does so
> more fully).

I think we could have a pretty simple approach that "works in
practice": retain the check at setuid() time, but make it a higher
limit.

IOW, the logic is that we have two competing pressures:

 (a) we should try to avoid failing on setuid(), because there is a
real risk that the setuid caller doesn't really check the failure case
and opens itself up to a security problem (see the sketch after this
list)

and

 (b) never failing setuid() at all is in itself a security problem,
since it can lead to DoS attacks in the form of excessive resource use
by one user.
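
To make (a) concrete, here is the classic bug pattern, as a minimal
user-space sketch (hypothetical code, not taken from any real program):

    #include <sys/types.h>
    #include <unistd.h>

    /* A privileged program dropping privileges before running
     * something on the user's behalf.  setuid() can fail with
     * EAGAIN when the target uid is already at its RLIMIT_NPROC
     * limit -- and here nobody notices. */
    static void drop_and_run(uid_t uid, const char *prog)
    {
            setuid(uid);                     /* BUG: result ignored */
            execl(prog, prog, (char *)NULL); /* may still run as root */
    }

If an attacker can first push the target uid to its process limit,
the execl() above runs with full privileges.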

But the sane (intelligent) solution to that is to say that we *PREFER*
to fail in execve(), but that at some point a failure in setuid() is
preferable to no failure at all. After all, we have no hard knowledge
that there is any actual setuid() issue. Neither, in general, does the
user (iow, look at this whole discussion, where intelligent people
simply have different intuitions depending on "what could happen").

So it really seems like the natural approach would be to simply fail
*earlier* on execve() and fork(). That will catch most cases, and has
no downsides. But if we notice that we are in a situation where some
privileged user can be tricked into forking a lot and doing setuid(),
then at that point the setuid() path becomes relevant.
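
For reference, the strict check at process-creation time looks roughly
like this (a simplified sketch of the copy_process() logic, with the
capability and INIT_USER exceptions omitted):

    /* in fork: refuse to create the process once the user is at
     * the limit -- fork()/execve() callers are used to handling
     * failure, so this is the safe place to say no */
    if (atomic_read(&p->real_cred->user->processes) >=
                    task_rlimit(p, RLIMIT_NPROC))
            return -EAGAIN;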

IOW, I'd suggest simply making the rule be that "setuid() allows 10%
more processes than the limit technically says". It's not a guarantee,
but it means that in order to hit the problem, you need a setuid
application that *both* allows unconstrained user forking *and*
doesn't check the setuid() return value.

Put another way: a user cannot force the "we're at the edge of the
setuid() limit" situation on their own just by forking - the user will
be stopped 10% before the setuid() failure case can ever trigger.
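
As a sketch of what that softer setuid()-time check could look like
(hypothetical code -- nproc_soft_limit() and new_user are made-up
names, not the actual set_user() implementation):

    /* 10% slack above the configured RLIMIT_NPROC */
    static unsigned long nproc_soft_limit(unsigned long lim)
    {
            return lim + lim / 10;
    }

    /* at setuid() time, when switching to new_user: a user forking
     * on their own is stopped at the hard limit by fork(), so only
     * a privileged parent forking on the target user's behalf can
     * ever get into this extra range */
    if (atomic_read(&new_user->processes) >
                    nproc_soft_limit(rlimit(RLIMIT_NPROC)))
            return -EAGAIN;     /* fail setuid() only past the slack */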

Is this some "guarantee of nothing bad can ever happen"? No. If you
have bad setuid applications, you will have problems. But it's a "you
need to really work harder at it and you need to find more things to
go wrong", which is after all what real security is all about.

No?

                Linus
