Date: Tue, 09 Mar 2010 23:17:48 +0100
From: "Magnum, P.I." <>
Subject: Extended MPI support

I just uploaded an extended MPI patch to the wiki at

The patch should be applied on top of 1.7.5-jumbo2. It's still just a 
toy, but it's fun to play with if you happen to have a multicore host 
or even a multi-host cluster.

Basically this is just the good old mpi10 patch with further hacks. As 
before, the MPI interface is not really used for much more than letting 
every process know its own ID and how many other processes there are.

This is just experimental code. Most importantly, do not blame Solar 
Designer if you encounter problems. Do not blame me either, instead just 
fix it and submit your patch ;)

===Support for all cracking modes===
In Markov mode, this means auto-splitting the range across all nodes, 
just as you would do manually as described in the wiki. The other 
modes are more or less silly hacks (leapfrog rules if any are used, 
otherwise leapfrog words), and there are lots of possible 
optimizations yet to be made.

===Support for stopping/resuming MPI jobs===
I assumed Incremental mode already supported resuming in mpi10, so I 
haven't double-checked that (can anyone confirm?). The other modes are 
fairly well tested, though, and they should resume correctly without 
losing sync.

===Other minor changes===
These are various little things I personally wanted, take it or leave it:

* The rec file indexing is now placed before the .rec extension, so 
files will be called job.n.rec instead of job.rec.n
* If running without mpiexec (or on just one node), the .rec file will 
not get the .0 suffix, so when using this MPI-enabled john like a 
"normal" one, you won't notice any difference.
* --status will show an extra line with the total number of guesses as 
well as the aggregated c/s rate.
* Many error messages are only displayed by node #0, to avoid silly 
repetitions of output.
* If you try weird things like running -stdin or -show on several nodes 
in parallel, john will tell you it's not a good idea and bail out.

===Still missing features===
There still is no inter-process communication of cracked hashes. This 
means that if one node cracks a hash, all other nodes will continue to 
waste time on it. The same problem exists when running several instances 
of "normal" john manually; the only workaround is to periodically abort 
and resume the jobs.

There really was no rocket science involved. Like the mpi10 patch, my 
extensions are licensed "under the same terms as John the Ripper itself" 
as stated in doc/LICENSE.mpi. In case that does not make sense, my work 
is hereby placed in the public domain.

