Date: Thu, 19 Apr 2012 14:22:08 +0200
From: Frank Dittrich <frank_dittrich@...mail.com>
To: john-users@...ts.openwall.com
Subject: Re: automation equipped working place of hash cracker,
 proposal

On 04/19/2012 01:32 PM, Simon Marechal wrote:
> On 19/04/2012 12:53, Aleksey Cherepanov wrote:
>> I think the most effective compression for candidates is to make john.conf,
>> john.rec, and some kind of count to stop after. So we run john --stdout on
>> the server, write down all the information we need to produce appropriate
>> .rec files, and then distribute the files to the nodes. Or even without
>> --stdout: we just produce the needed .rec files. I do not know exactly what
>> is stored in a .rec file, so I do not know how easy this would be. But it
>> seems feasible, doesn't it?
> 
> .rec stores jtr state when it stopped, so that it can be resumed. I
> believe you might only need this with incremental mode, as single and
> wordlist modes (with a reasonable quantity of rules) are quick enough to
> be considered a single "job", and Markov mode was made to be
> distributed. Wordlist modes could be effectively distributed by just
> sending a few rules to each client.
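To illustrate the idea of sending a few rules to each client, here is a
minimal sketch (not from the original mail; rule names, client count, and
the round-robin scheme are all invented for illustration):

```python
# Hypothetical sketch: split a wordlist-mode rule set among clients so
# each client runs the full wordlist with only its share of the rules.

def partition_rules(rules, n_clients):
    """Assign rules round-robin so each client gets a few rules."""
    shares = [[] for _ in range(n_clients)]
    for i, rule in enumerate(rules):
        shares[i % n_clients].append(rule)
    return shares

rules = ["rule_%d" % i for i in range(10)]  # made-up rule names
for client, share in enumerate(partition_rules(rules, 3)):
    print("client %d -> %s" % (client, share))
```

Each (wordlist, rule subset) pair is then a self-contained job, with no
.rec file needed to describe it.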

For slow hashes and larger word lists, even this might take a lot of time.
So if this task turns out to be less effective than hoped and we have
other possible tasks that are more likely to crack passwords, it might
be worthwhile to interrupt this task, keep the .rec file for later reuse
when the overall success rate has decreased anyway, and let the client
work on something else instead.
In order to let the server decide which tasks to interrupt, it may be
necessary to provide intermediate status reports on the c/s rate and
cracked passwords, say, every 10 minutes.
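As a sketch of that server-side decision (all field names, the report
format, and the comparison rule are my own assumptions, not anything JtR
or the proposal defines):

```python
# Hypothetical sketch: a client reports cracked count and interval length
# every 10 minutes; the server interrupts the task if its recent yield
# falls below the expected yield of the best queued task.

def should_interrupt(report, best_waiting_estimate):
    """report: dict with 'cracked_last_interval' and 'interval_seconds'.
    best_waiting_estimate: expected cracks per second of the best
    task currently waiting in the queue."""
    rate = report["cracked_last_interval"] / report["interval_seconds"]
    return rate < best_waiting_estimate

# 2 cracks in 10 minutes (~0.0033 cracks/s) vs. an estimate of 0.01:
report = {"cracked_last_interval": 2, "interval_seconds": 600}
print(should_interrupt(report, 0.01))
```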

> The problem of generating the right .rec file without resorting to the
> trick you mention (breaking after a certain quantity of passwords and
> keeping the corresponding .rec) is probably not trivial. However going
> with this trick will imply high CPU usage on the server, and finding a
> way to stop the clients after they processed their share of the work.

For the default chr files, this could be done prior to the contest.
The problem is, we'd probably like to create new chr files during the
contest (or during a real-life cracking session).
Another approach could be to analyze the incremental mode algorithm,
find out which sequence of new steps (switching to another password
length, switching to a new character count at a certain position) would
be generated based on a given chr file, and then generate .rec files for
these intermediate steps.
Because immediately after switching to a new "level" the success rate
usually increases for a short time, this strategy could turn out to be
useful.
But we need to implement some way to let the client know that the
cracking attempt should be stopped when a certain new level is reached,
to avoid duplicate effort.
(At the beginning, when each level is processed very fast, a client
should process several levels in a single task. Later on, just a single
level per task will be enough.)
If the incremental mode session gets less productive than other tasks,
it could also be interrupted earlier (and the rec file saved for later
reuse).
The information about when to stop would have to be stored in the .rec
file as well, so we'd also need to adjust the .rec file layout for
incremental mode.
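The grouping of levels into tasks could look roughly like this (a toy
sketch: the cost numbers and the greedy packing are invented, and real
incremental mode derives its length/character-count steps from the .chr
file rather than from a flat cost list):

```python
# Hypothetical sketch: pack incremental-mode "levels" into tasks. Early
# levels are cheap, so several fit into one task; later levels each
# become a task of their own.

def pack_levels(level_costs, budget):
    """Greedily group consecutive levels so each task's total cost stays
    within budget, but never split a single level across tasks."""
    tasks, current, spent = [], [], 0
    for level, cost in enumerate(level_costs):
        if current and spent + cost > budget:
            tasks.append(current)
            current, spent = [], 0
        current.append(level)
        spent += cost
    if current:
        tasks.append(current)
    return tasks

# Toy per-level costs growing roughly geometrically:
costs = [1, 1, 2, 4, 8, 16, 32]
print(pack_levels(costs, 8))
```

Each task would then carry a start .rec state plus the level at which the
client must stop and report back.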

Frank
