Date: Thu, 21 Jul 2011 23:00:31 +0400
From: Solar Designer <solar@...nwall.com>
To: musl@...ts.openwall.com
Subject: Re: some fixes to musl

Rich,

On Thu, Jul 21, 2011 at 02:21:01PM -0400, Rich Felker wrote:
> [...] I like to avoid
> tinfoil-hat programming -- that is, testing for error conditions from
> interfaces that have no standard reason to fail when used in the way
> they're being used, and for which it would be a very poor
> quality-of-implementation issue for a kernel implementation to add new
> implementation-specific failure conditions.

Yet such things will and do happen, and the kernel developers find good
reasons to justify those additional failure conditions.  In many cases,
it is difficult to even figure out whether a certain syscall can fail
and with what specific error codes - those come from far deeper layers,
sometimes even from third-party modules.  Maybe this is wrong or maybe
not, but it's the reality.

So I respectfully disagree with your approach to this.

> I'm aware of course that some interfaces *can* fail for nonstandard
> reasons under Linux (dup2 and set*uid come to mind), and I've tried to
> work around these and shield applications from the brokenness...

In a discussion currently underway on LKML, everyone appears to agree
that it's the applications not checking the return value that are
broken, and that it's unfortunate that the kernel has to include
workarounds for such broken applications - such as deferring setuid()
failure to execve().
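
For illustration only - the sketch below is mine, not part of the
original message.  On Linux, setuid() can fail (for example with
EAGAIN when the target uid would exceed RLIMIT_NPROC), so a careful
program checks the result instead of assuming the privilege drop
succeeded.  The helper name drop_privileges() is made up:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical helper: drop root privileges, aborting on failure */
    static void drop_privileges(uid_t uid)
    {
        if (setuid(uid) != 0) {
            /* e.g. EAGAIN from RLIMIT_NPROC on Linux */
            perror("setuid");
            exit(1);
        }
        /* only past this point are we really running as uid */
    }

The broken pattern the LKML thread complains about is the same call
with the return value simply ignored.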

My current opinion is that _both_ sides are broken: the applications
(and libraries) that do not check the return value of "can't fail"
syscalls, and the kernel for introducing new failure conditions to
syscalls that historically "couldn't fail" where such ignored failures
have security consequences.  When there are no security consequences,
I am OK with the kernel introducing new failure conditions - the
applications (and libraries) need to be improved over time to check
the return value, which they should have been doing all along.

I understand that you want to keep the code small and clean rather than
cluttered with tinfoil-hat return value checks, but I am afraid this
approach may only be acceptable when you make calls to your own
interfaces, where you control the possible failure conditions.  And even
then things may break when you make changes to those interfaces later,
but forget to add the return value checks elsewhere in your code...

Well, maybe you can make exceptions for "can't happen" failures whose
impact, should they happen anyway, is negligible... but then it is
non-obvious where to draw the line.  Is the impact of a failed close()
negligible?  Not necessarily; it could even be a security hole if the
fd gets inherited by an untrusted child process.
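
Again a made-up sketch, not from the original message: secret_fd is
assumed to refer to something the untrusted child must not see.
Since POSIX leaves the descriptor's state unspecified after some
close() failures (EINTR, for instance), a paranoid program would
rather bail out than risk leaking it:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical example: refuse to run the child if we cannot be
       sure the sensitive descriptor is really gone */
    static void spawn_untrusted(int secret_fd, char *const argv[])
    {
        if (close(secret_fd) != 0) {
            perror("close");
            exit(1);
        }
        if (fork() == 0) {
            execv(argv[0], argv);
            _exit(127);
        }
    }

In practice one would also set FD_CLOEXEC on such descriptors, but
the point here is only that the close() return value carries
information worth acting on.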

Personally, I've been using different approaches to this in different
programs of mine.  For musl, I think the "always check" approach may be
the better one.  Yes, the code size increase from those error handling
paths is unfortunate...  Some use of goto can make them smaller and
keep them out of the cache lines occupied by the code that actually
runs.
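
To make that concrete, here is a minimal sketch of mine (again not
from the message) of the goto style I mean: the error handling is
gathered at the bottom of the function, so the straight-line path
stays compact and the failure labels stay away from the hot code.
copy_block() and its behavior are invented for the example:

    #include <fcntl.h>
    #include <unistd.h>

    /* Copy the first block of src into dst, checking every syscall
       and funneling all failures through labels at the bottom */
    static int copy_block(const char *src, const char *dst)
    {
        char buf[4096];
        ssize_t n;
        int in, out;

        in = open(src, O_RDONLY);
        if (in < 0) goto fail;
        out = open(dst, O_WRONLY|O_CREAT|O_TRUNC, 0600);
        if (out < 0) goto fail_in;

        n = read(in, buf, sizeof buf);
        if (n < 0) goto fail_out;
        if (write(out, buf, n) != n) goto fail_out;
        if (close(out)) goto fail_in;  /* even close() is checked */
        if (close(in)) return -1;
        return 0;

    fail_out:
        close(out);
    fail_in:
        close(in);
    fail:
        return -1;
    }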

Just an opinion.

Alexander
