Date: Tue, 18 Sep 2012 09:51:11 +0200
From: Frank Dittrich <>
Subject: Re: Static analysis of John using Coverity

On 09/16/2012 11:23 PM, Alexander Cherepanov wrote:
> On 2012-09-15 21:47, Robert B. Harris wrote:
>> There are other analyzers as well... Coverity is supposed to have a low
>> false positive rate, so I think that might be a good program to start with
> There are several free (free as in freedom) static analyzers available
> so starting with a non-free solution is kinda strange IMHO. There are
> clang analyzer, cppcheck and others:
> .
> (Unfortunately free analyzers are mixed with non-free.) There are free
> dynamic analyzers also.
> But there are some things to do which were already posted to this
> mailing list. Look for example at the thread started at
> And I suspect that
> every format with trivial valid() -- there are ~40-50 of them -- has
> buffer overflows in get_salt and/or similar functions. You don't need a
> code analyzer to find them.

I agree that we should first fix known bugs and look at those functions
we already know to be error-prone. Unless we fix all the existing
valid() and prepare() functions to properly check the input, there's a
higher risk that new formats will not have proper checks either.
Code analyzers also won't solve the underlying problem that adding new
functionality is more fun than fixing bugs and cleaning up existing
code.

> There is also PR-angle in using Coverity. IIUC if you use it then the
> number of bugs in the project will be displayed on their site. If we
> know that the code is bad than IMHO it's better to fix it before
> submitting to Coverity.

I agree with this. Coverity will list the initial number of bugs found
(per lines of code), and also the current number of bugs...
If we just analyze the current code, john (at least the jumbo edition)
will look much worse compared to other software than it should if we fix
known bugs first.
There also might be issues with false positives, which will require
additional effort without actual benefit (except "improving" the
statistics).
And, of course, if we let Coverity analyze the code without fixing known
bugs first, it will look like all these bugs had only been found with
the help of Coverity (and would otherwise have remained undetected),
which is not true.
Even worse would be if nobody were interested in fixing all the bugs
discovered by Coverity (or in dealing with the false positives).

> But if using Coverity will stimulate fixing the code which nobody wants
> to fix now then it's probably a good thing;-)

IMHO we should first try to come up with a list of required changes
(code review for the valid() and prepare() functions, unifying the
0-9A-F(a-f) tests, fixing bugs discovered by free static analyzers,
different compilers, valgrind, etc.).
If we don't manage to fix these issues in a reasonable time, I doubt the
situation will be different when we use Coverity instead.

