Date: Fri, 9 Sep 2011 07:42:47 -0500
From: "JimF" <jfoug@....net>
To: <john-dev@...ts.openwall.com>
Subject: Re: New pkzip format

From: "Solar Designer" <solar@...nwall.com>
> On Thu, Sep 08, 2011 at 04:41:42PM -0500, jfoug wrote:
> ...
>> I know Alex wants to inflate the benchmarks sometimes, ...
>
> What are you referring to here?

What is being benchmarked is a pretty unrealistic 'raw' speed of the crypt 
function. However, that is pretty much how any crypting software benchmarks 
itself.  You hear of '3.6 GB/s throughput' for some crypting function, 
similar to the 27000K/s we see for some formats.  This is simply stating 
the raw speed.  There is no way to take crypt algorithm X and actually push 
10.8 GB of data through it in 3s (3*3.6), even though it is rated at that 
speed.  This is simply due to additional overhead that has to be in there. 
Now, if the algorithm were a black box on the wire, totally unaffected by 
any IO, simply working its wonders on the bytes flying through, then it is 
likely possible that the claimed speed could be met, if the pipe were large 
enough to support this.

However, when comparing password-cracking packages, I think they are all 
doing pretty much the same thing: cutting out any overhead, and simply 
testing the raw encryption speed, spinning over the cryptor.

My use of 'inflate' may have been the wrong word, and I am not quite sure 
what the right one is.  My thought behind that statement was that the 
benchmark number shown is often considerably higher than real-world 
experience actually running john.   At the very least, I would think we 
could provide a VERY fast way to serve up passwords (but not the same 
memory buffer or two over and over again), and call the crypt followed by 
a call to cmp_all, so that it actually is a little closer to real-world 
speeds.  I know you are trying to keep the speeds comparable with both 
prior versions of john and with external competing packages.  If we all of 
a sudden released a version of john that is 10% slower in benchmarks for 
several formats, users would notice that right off, and not be happy about 
using that version, even if the true speed of the format has not changed 
at all.  It is all in perception.

> [alex] from a prior email.
>  (I haven't considered your specific proposed changes yet.)

I have no specific proposals.  What I did was a very quick/dirty hack. I 
was more interested in seeing 'why' pkzip -test was so far below the real 
speed, and then in running john in a blind test to see if there were other 
formats that were also obviously being underreported by the current method 
of bench.   It is NOT a proposed change, just a demonstration.  It may be 
possible to simply keep the current behavior and 'make it work' for the 
formats having problems with correct passwords, by smashing the correct 
passwords owned by the format prior to starting the bench loop.  Done that 
way, there should be no impact at all on the existing formats, unless 
testing bad passwords takes longer; but then we really SHOULD know that, 
to try and correct it, because wrong passwords are the normal thing the 
crypt should get, and correct passwords are actually the anomaly. However, 
the 3 formats which take a MUCH longer time processing correct passwords 
would then not be getting the right passwords, so their true throughput 
values would be shown.

Jim.

