Date: Thu, 26 May 2011 08:21:45 +0200
From: Frank Dittrich <frank_dittrich@...mail.com>
To: john-dev@...ts.openwall.com
Subject: Re: BSDI poor in OMP4/OMP7 when doing real work

On 26.05.2011 05:40, JFoug wrote:
> Would things be sped up, by placing all candidates of a specific salt
> into a single CPU/thread?  Thus if there are 4000 salts, and this is
> spread over 4 CPUs, then each CPU would be independently working on 100
> salts.

s/100/1000/

It should, especially for slow hash algorithms, since computing a hash
is much slower than just comparing hashes.
The only problem is:
If you crack a lot of passwords, one core might run out of uncracked
hashes faster than another.
Even if no core runs out of uncracked hashes, the cores with fewer
remaining salts will process their password candidates faster than
the others.

So it gets more complex very fast.

You might need to redistribute the password candidates
from time to time, to make sure the number of different salts
is evenly distributed among the cores.

Even if you manage to evenly distribute the number of salts,
a core with very few remaining hashes will nevertheless process
the password candidates faster than those with many hashes.
The faster the hash computation is (compared to the time it
needs to compare 2 hashes), the more the processing speed of
a single password candidate will differ.
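To make that concrete, here is a rough back-of-the-envelope model (a sketch
with made-up timings, not measurements from john): the time a core spends per
candidate is roughly n_salts * t_hash + n_hashes * t_cmp, so when hashing
dominates, the number of remaining hashes barely matters, but for a fast hash
it does.

```python
# Rough cost model for the time one core spends per password candidate.
# All timings below are illustrative assumptions, not john measurements.

def time_per_candidate(n_salts, n_hashes, t_hash, t_cmp):
    # Each candidate is hashed once per salt, and each resulting hash
    # is compared against the uncracked hashes belonging to that salt.
    return n_salts * t_hash + n_hashes * t_cmp

# Slow hash: hashing dominates, so a core with many remaining hashes
# is barely slower per candidate than one with few.
slow_many = time_per_candidate(100, 1000, t_hash=1e-3, t_cmp=1e-8)
slow_few  = time_per_candidate(100,   10, t_hash=1e-3, t_cmp=1e-8)

# Fast hash: comparisons are a much larger share of the total cost,
# so per-candidate speed differs far more between the two cores.
fast_many = time_per_candidate(100, 1000, t_hash=1e-7, t_cmp=1e-8)
fast_few  = time_per_candidate(100,   10, t_hash=1e-7, t_cmp=1e-8)

print(slow_many / slow_few)  # very close to 1
print(fast_many / fast_few)  # noticeably larger
```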

But in most cases, you won't even be able to distribute the hashes
evenly according to the number of salts.

Say you have 1000 cores, but 1600 (or 750) different salts.

For 1600 salts, you could place each of the 400 salts with the largest
number of uncracked hashes on its own core, and distribute the remaining
1200 salts across the other 600 cores, two per core.
(Just make sure you don't put two salts with only one uncracked hash each
on the same core. You could pair one salt with "many" hashes and one
salt with few hashes.)

For 750 salts, you could split each of the 250 salts with the largest
number of uncracked hashes across 2 cores, and let the remaining 500
cores process one salt each.

There is still a chance that some cores run out of work sooner than others,
because the remaining passwords got cracked.

Now you either have to store the progress per individual salt in your
.rec file, or you have to make sure that just the progress of the
"slowest" core is saved in the .rec file.
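The second option could look like this (a hedged sketch, not john's actual
.rec handling; the per-core counts are invented): each core tracks how many
candidates it has fully processed, and the checkpoint records only the
minimum, so a restart never skips candidates that some core hasn't tried yet.

```python
# Sketch of the "save only the slowest core's progress" idea.
# The per-core candidate counts are invented for illustration.

def checkpoint_progress(per_core_progress):
    # On restart we must not skip any candidate that some core hasn't
    # tried, so only the minimum progress is safe to record.
    return min(per_core_progress)

per_core_progress = [12000, 11850, 12010, 11990]
print(checkpoint_progress(per_core_progress))  # 11850
```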

If you interrupt the cracking session, you could try to redistribute
the work among cores and process all those password candidates
that have so far been tried only for a few salts / on a few cores.

But I don't think it's such a good idea to do this every 10 minutes
(or whatever crash recovery file saving delay you specified in john.conf).

I admit I didn't study the code. Maybe the OMP support already requires
much more complexity than I imagine, and adjusting the logic of
distributing salts among the cores isn't that much of a problem.

Frank
