Message-ID: <BLU0-SMTP149ABD34B99E248E5CA7D12FDFD0@phx.gbl>
Date: Thu, 21 Jun 2012 11:11:07 +0200
From: Frank Dittrich <frank_dittrich@...mail.com>
To: john-dev@...ts.openwall.com
Subject: Re: formats interface enhancements

On 06/20/2012 06:33 PM, Solar Designer wrote:
> ability to exclude individual test vectors from benchmarks

Even better would be if we could benchmark different john versions with
various test vectors, not just the hard-coded test vectors that are
compiled into a particular version.

Then it would even be possible to apply such a patch to older john
versions and test the performance of those older versions with new
test vectors.

Maybe, without additional parameters, --test should just use the test
vectors compiled into john.

But

./john --test[=SECONDS] [--format=name] benchmark-test-data

would expect the named file to contain the test data to be used for the
benchmark.

The exact file format needs to be discussed.
It should allow test vectors for different formats to be included in a
single file, with each format just skipping the test vectors intended
for other formats.
Maybe we even need some way to set the ratio of incorrect to correct
candidate passwords to be used in the benchmark.
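
Just to give an idea, such a file might look like this (the syntax is
invented on the spot; the NT hash is the well-known one for "password",
the md5crypt line is a placeholder):

# benchmark-test-data (example only, syntax to be discussed)
[format:nt]
$NT$8846f7eaee8fb117ad06bdd830b7586c:password
[format:md5crypt]
$1$<salt>$<hash>:<password>
# hypothetical global setting: 9 incorrect candidates per correct one
[ratio:incorrect-per-correct=9]

A ':' separator wouldn't work for all formats (some ciphertexts contain
':'), so that detail would need to be discussed as well.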

Each new (jumbo or official) release could provide a default
benchmark-test-data file.
But it would be possible to benchmark a new release with the
benchmark-test-data file of a previous release.

For formats that got / get renamed from one release to another, we
either need to include the mapping of old to new format names somewhere
in the benchmark-test-data file, or in a separate file (maybe
configurable in the [Options] section).
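
For example (section and entry names made up; "md5" vs. "md5crypt" is
just the kind of rename I have in mind):

[BenchmarkFormatNames]
# old label = current label
md5 = md5crypt
des = descrypt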

If we want to make benchmarking easier, we could also add a new
--benchmark option which behaves exactly like --test, except that it
doesn't use the hard-coded test data, but instead requires a
benchmark-test-data file, or defaults to the file specified by a
john.conf [Options] setting.
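
In john.conf that could look like this (the setting name is made up):

[Options]
# hypothetical setting pointing at the default benchmark data
BenchmarkTestData = $JOHN/benchmark-test-data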

I don't know yet how hard it would be to parse the hard-coded test data
in the format-specific source code and convert it into a
benchmark-test-data file.
(First we would need to specify the format of such a file.)
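
For reference, the hard-coded test data is a per-format array of
ciphertext/plaintext pairs (struct fmt_tests, defined in formats.h),
roughly like this (modeled on the NT format; entries vary per format
and version):

static struct fmt_tests tests[] = {
	{"$NT$8846f7eaee8fb117ad06bdd830b7586c", "password"},
	{"$NT$31d6cfe0d16ae931b73c59d7e0c089c0", ""},
	{NULL}
};

A converter would have to cope with #ifdef'ed entries, concatenated
string literals and the like in the format source files.
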
But maybe that wouldn't even be a good idea: for correctness tests,
you'd prefer to include all sorts of corner cases and would often use
non-default iteration counts, while for a benchmark you'd want typical
iteration counts, salt lengths, and passwords.

Frank
