Date: Tue, 30 May 2006 02:28:08 +0200
From: "Otheus (aka Timothy J. Shelling)" <>
Subject: Re: Parallelizing John the Ripper

> can share them with you now.  I don't think this 2004 hack is any good.
> According to the data in their report, it did not perform too well -
> with a single node doing 300k c/s (no MPI), 13 nodes were doing only
> 1.5M c/s, not the expected 4M c/s.

Yeah, if you could email me that patch, it would help a lot, I think. At
least I would have something to compare mine to.

> >     o  Recovery: Each task runs with its own session; the whole thing can
> > be re-started in recovery mode
>
> OK, that can work - but does it mean that there's a .rec file for each
> task?

Yes.  With more work, I could set it up so the root task keeps track of one
record file, but I'd need to understand your restore format/logic very
well first.

I still have some bugs to hammer out here. I just restored a job that I
interrupted last night.  Apparently, at least two of the MPI-john tasks are
using the same keyspace. (For most of the keys cracked during the restore,
there are two lines of output.) ...

> > It simply relies on a filter to split the keyspace.
>
> So you're having each task skip whatever candidate passwords are
> supposed to be tried by other tasks.  Now, what happens if the nodes are
> not of equal performance?  You deviate from John's semi-optimal order in
> which candidate passwords are normally tried, likely trying them in
> not-so-optimal order.
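
The skip-by-rank scheme under discussion can be sketched outside of John. This is a hypothetical Python stand-in for the external filter (`keeps`, `rank`, `size`, and the toy candidate stream are all illustrative names, not John's actual code); `rank` and `size` play the role of the MPI job's rank and task count:

```python
# Hypothetical stand-in for the external filter: each task keeps only
# every size-th candidate, offset by its own MPI rank, and skips the rest.
def keeps(index, rank, size):
    return index % size == rank

# Toy candidate stream standing in for John's key generator.
candidates = ["key%d" % i for i in range(12)]

size = 3  # number of MPI tasks (assumed)
partitions = [
    [k for i, k in enumerate(candidates) if keeps(i, rank, size)]
    for rank in range(size)
]
```

The partitions are pairwise disjoint and together cover every candidate, which is what guarantees no duplicated and no skipped work when all tasks run.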

Well, cluster programs generally assume equal-performing nodes; it's rare
that they don't. At any rate, if you have a 10-CPU cluster, and half the
CPUs are, let's say, only 66% the speed of the other half, then you would
assign particular tasks to particular node names (using a carefully
constructed machines file) and modify the external filter so that the
faster CPUs/tasks get proportionally more keys (roughly 50% more) than the
others. The filter logic would be a tad more complex, but would only add a
few cycles per key per node.
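
The speed-weighted variant could look something like this (a sketch only, with assumed relative speeds; `speeds`, `owner`, and `keeps` are hypothetical names, not part of John or the patch):

```python
# Hypothetical speed-weighted split: each task keeps a share of the
# candidates proportional to its benchmarked speed, instead of the
# plain 1-in-N modulo test.
speeds = [100, 100, 66, 66]   # assumed relative c/s per task
total = sum(speeds)

# Precompute, over a window of `total` consecutive key indices, which
# task owns each index; the filter then needs only one table lookup
# (a few cycles) per key.
owner = []
for task, s in enumerate(speeds):
    owner.extend([task] * s)

def keeps(index, rank):
    return owner[index % total] == rank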

> What happens if a node fails?

If MPI is working properly, the whole job fails, and you would restart the
job in recovery mode. If the MPI implementation is buggy (like most are), you
could implement some kind of MPI_Barrier in the SIGALRM handler so that if
a node fails, everything stops.
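
A minimal, MPI-free sketch of the handler idea (Python standing in for C; the real version would call MPI_Barrier inside the handler, and the `checkpoints` list is just for illustration):

```python
import signal

checkpoints = []

def alarm_handler(signum, frame):
    # In the MPI version, each task would enter a barrier here; if a
    # node has died, the barrier never completes and the whole job
    # stops instead of silently losing part of the keyspace.
    checkpoints.append("barrier-point")
    # A real periodic setup would re-arm the timer here, e.g.
    # signal.alarm(interval).

signal.signal(signal.SIGALRM, alarm_handler)

# Deliver the signal synchronously for demonstration; in practice a
# periodic signal.alarm(interval) would drive the handler.
signal.raise_signal(signal.SIGALRM)
```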

> What if you want to add/remove nodes?

Removing nodes: I'm not sure ... I'd want to make sure that the removed
nodes weren't working on keys that no longer get generated after a restore.

Adding nodes:  I'd have to look at your restore file layout/logic to see if
restore files could be cloned as dummies for the new nodes. A restore, but
with more nodes, would otherwise work fine, since the external filter uses
the current MPI job's rank and size.
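
Why a restore with a different node count still works can be illustrated with a toy partition function (assumed names; this only models the rank/size modulo split, not John's restore logic):

```python
def partition(indices, size):
    # Split a stream of key indices by rank under a given task count,
    # the way the external filter derives it from the MPI rank/size.
    return [[i for i in indices if i % size == r] for r in range(size)]

remaining = list(range(1000, 1100))  # toy stand-in for keys left after an interrupt

old = partition(remaining, 4)   # original run with 4 tasks
new = partition(remaining, 6)   # restored run with 6 tasks
```

Under either task count, every remaining key is owned by exactly one task, since the split is re-derived from the current job's size rather than stored per node.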

> >     o  Scalability:  using the external mode to split the keyspace has a
> > minor performance impact, but it seems to be very, very minor. I will do
> > some performance analysis on it.
>
> The impact is minor with slow hashes and multiple salts.  The impact is
> major with fast saltless hashes.

What's an example of a fast, saltless hash? I'll benchmark it.

