Date: Tue, 8 May 2012 12:52:30 +0400
From: Aleksey Cherepanov <aleksey.4erepanov@...il.com>
To: john-users@...ts.openwall.com
Subject: Re: automation equipped working place of hash cracker,
 proposal

On Thu, Apr 19, 2012 at 02:22:08PM +0200, Frank Dittrich wrote:
> On 04/19/2012 01:32 PM, Simon Marechal wrote:
> > On 19/04/2012 12:53, Aleksey Cherepanov wrote:
> >> I think the most effective compression for candidates is to make john.conf,
> >> john.rec and some kind of count to stop after. So we run john
> >> --stdout on the server, write down all information we need to produce
> >> appropriate .rec files, and then distribute the files to the nodes. Or even
> >> without --stdout: we just produce the needed .rec files. I do not know
> >> exactly what is stored in a .rec file, so I do not know how easy this
> >> would be. But it seems feasible, doesn't it?
> > 
> > .rec stores jtr state when it stopped, so that it can be resumed. I
> > believe you might only need this with incremental mode, as single and
> > wordlist modes (with a reasonable quantity of rules) are quick enough to
> > be considered a single "job", and Markov mode was made to be
> > distributed. Wordlist modes could be effectively distributed by just
> > sending a few rules to each client.
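Simon's idea of distributing wordlist mode by sending a few rules to each client could be sketched roughly as follows. This is only an illustration of the partitioning step; the rule names and node count are made-up examples, not part of any JtR interface:

```python
# Sketch: partition a list of rule names round-robin among cracking nodes,
# so each node runs the full wordlist with only its share of the rules.
# Rule names and node count below are hypothetical examples.

def assign_rules(rule_names, num_nodes):
    """Return a dict mapping node index -> list of rule names for that node."""
    shares = {n: [] for n in range(num_nodes)}
    for i, rule in enumerate(rule_names):
        shares[i % num_nodes].append(rule)
    return shares

if __name__ == "__main__":
    rules = ["rule%02d" % i for i in range(10)]
    for node, share in assign_rules(rules, 3).items():
        print(node, share)
```

Each node would then run its wordlist job with only its assigned rules, so no two nodes test the same candidate.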
> 
> For slow hashes and larger word lists, even this might take a lot of time.
> So if this task turns out to be less effective than hoped and we have
> other possible tasks which are more likely to crack passwords, it might
> be worthwhile to interrupt this task, keep the .rec file for later reuse
> when the overall success rate has decreased anyway, and let the client
> work on something else instead.
> In order to let the server decide which tasks to interrupt, maybe it is
> necessary to provide intermediate status reports on the c/s rate and
> cracked passwords, say, every 10 minutes.
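Frank's periodic status reports could be as simple as a small timestamped record with the cracked count and c/s rate. A minimal sketch of what a client might send; the field names and JSON format here are invented for illustration, not an existing protocol:

```python
import json
import time

def make_status_report(session, cracked, cps):
    """Build a small status record a client could send to the server
    every N minutes. Field names are hypothetical, not any JtR format."""
    return json.dumps({
        "session": session,       # client-side session name
        "cracked": cracked,       # passwords cracked so far
        "cps": cps,               # candidates tested per second
        "timestamp": int(time.time()),
    }, sort_keys=True)
```

The server could compare recent reports across clients and interrupt the tasks with the lowest yield.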
> 
> > The problem of generating the right .rec file without resorting to the
> > trick you mention (breaking after a certain quantity of passwords and
> keeping the corresponding .rec) is probably not trivial. However, going
> with this trick would imply high CPU usage on the server, and would
> require finding a way to stop the clients after they have processed
> their share of the work.
> 
> For the default chr files, this could be done prior to the contest.
> The problem is, we'd probably like to create new chr files during the
> contest (or during a real-life cracking session).
> Another approach could be to analyze the incremental mode algorithm,
> find out which sequence of new steps (switching to another password
> length, switching to a new character count at a certain position) would
> be generated based on a given chr file, and then generate .rec files for
> these intermediate steps.
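The "trick" discussed above, breaking the candidate stream after a fixed count and keeping a checkpoint for each chunk, could be sketched generically like this, assuming the server consumes the candidate stream once (e.g. from --stdout). The chunking logic is only an illustration; real .rec files store deliberately undocumented internal state and cannot be built this way:

```python
def checkpoint_stream(candidates, chunk_size):
    """Walk an ordered candidate stream and record, for each chunk,
    its starting offset and first candidate. This only illustrates
    fixed-size splitting of the stream; it does not produce .rec files,
    whose format is undocumented and subject to change."""
    checkpoints = []
    for i, cand in enumerate(candidates):
        if i % chunk_size == 0:
            checkpoints.append((i, cand))
    return checkpoints
```

Each checkpoint marks where one node's share of the work would begin, which is exactly the per-node state the thread is trying to capture, and also why the server-side cost is high: the whole stream must be generated once centrally.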

Alexander suggested not wasting our time on generating .rec files
because it does not seem promising:
"I don't mean to say that this can't work.  It can.  But it is a poor
approach to the problem, in my opinion - at least given what the .rec
files currently are, and their deliberately undocumented and
subject-to-change-without-notice nature."

Regards,
Aleksey Cherepanov
