Date: Sat, 13 Jul 2013 19:55:29 -0400
From: cc <cc@...heads.com>
To: john-dev@...ts.openwall.com
Cc: Tavis Ormandy <taviso@...xchg8b.com>
Subject: Re: --fork

On Tue, May 7, 2013 at 4:26 PM, Solar Designer <solar@...nwall.com> wrote:

> Hi Tavis,
>
> On Tue, May 07, 2013 at 08:29:48AM -0700, Tavis Ormandy wrote:
>
> > I have an algorithm to skip to an arbitrary state in constant time
> without
> > having to increment it, would you be interested in a patch against core?
>
> I do remember your work on this, thanks!  However, your algorithm is
> probably not usable for the new incremental mode as-is (with
> per-position character counts growing separately from each other) - it
> will need slight changes.  I actually wanted to look into doing that for
> 1.8, but then decided to leave that for post-1.8 to avoid scope creep
> for 1.8 (it's time to release 1.8 already!)
>

This would be very interesting, and it's required for large numbers of nodes,
where the --node (skip) implementation doesn't work well.  I'd definitely love
to see an updated patch against the current codebase.

I haven't spent any time with JtR in about a year, and I was hoping that
'--node --incremental' could help me divide an exhaustive incremental
search into many units of work.  Last summer I used some of Tavis's
concepts and was relatively successful ('clortho' on the contrib page
resembles this work).
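
For context, the kind of dispatch I have in mind is roughly the following
(a simplified local sketch only; in practice each unit of work would go to a
different machine, and 'wc -l' here just stands in for the real consumer):

$ seq 1 100000 | xargs -P 8 -I{} sh -c \
    './john --incremental=ASCII --node={}/100000 --stdout 2>/dev/null | wc -l'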

On the plus side, --node distribution covers the entire incremental-mode
guess space without duplicates (unlike the markov mode):

(using the 'ALL4' mode here to keep run times short)

$ ./john --incremental=ALL4 --stdout 2>/dev/null | wc -l
82317121
$ for i in $(seq 1 1000) ; do ./john --incremental=ALL4 --node=${i}/1000 \
    --stdout 2>/dev/null | wc -l; done | paste -sd+ | bc
82317121
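
As a sanity check beyond the matching totals: diffing the sorted union of
the per-node streams against the sorted full run should produce no output
if the split really has no duplicates and no gaps.  A sketch (bash process
substitution; I haven't re-run this against the current tree):

$ diff <(for i in $(seq 1 1000) ; do ./john --incremental=ALL4 \
        --node=${i}/1000 --stdout 2>/dev/null; done | sort) \
       <(./john --incremental=ALL4 --stdout 2>/dev/null | sort)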


However, the number of guesses for each node run tends to be quite uneven.
Here are the first 15 per-node counts from that example:

$ for i in $(seq 1 15) ; do ./john --incremental=ALL4 --node=${i}/1000 \
    --stdout 2>/dev/null | wc -l; done | paste -sd+
20496+7056+0+884+0+0+0+18144+107310+15876+7225+8245+15876+138350+7396
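
To quantify the imbalance over all 1000 nodes rather than eyeballing the
first 15, something like this awk one-liner should do it (a sketch, not
re-run against the current tree):

$ for i in $(seq 1 1000) ; do ./john --incremental=ALL4 --node=${i}/1000 \
    --stdout 2>/dev/null | wc -l; done | \
  awk '{ s+=$1; if (NR==1 || $1<min) min=$1; if ($1>max) max=$1 }
       END { printf "min=%d max=%d mean=%.1f\n", min, max, s/NR }'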


If I put in parameters closer to what I'd like to do in the real world:

$ for i in $(seq 1 15) ; do ./john --incremental=ASCII --node=${i}/100000 \
    --stdout 2>/dev/null | wc -l; done | paste -sd+
1+0+0+0+0+0+0+0+0+1+1+0+2+1+4

The 100k units are much more uneven: it seems a great number of nodes will
finish in under 100 ms on an average modern core, while certain nodes end up
taking a very long time to complete.
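
One partial workaround, if the range form of --node works the way I remember
(--node=MIN[-MAX]/TOTAL): hand each worker a contiguous block of node numbers
rather than a single slice, so the per-node variance averages out within each
unit of work.  For example, with 100 workers over the same 100000 nodes,
worker 7 would run:

$ ./john --incremental=ASCII --node=6001-7000/100000 --stdout 2>/dev/null | wc -l

That doesn't fix the underlying skew, but it should make the unit sizes far
less spiky than single 1/100000 slices.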
