Openwall GNU/*/Linux - a small security-enhanced Linux distro for servers
Date: Tue, 6 Jun 2006 16:23:58 +0400
From: Solar Designer <>
Subject: Re: to MPI or not MPI john?

I wrote:
> >Of course, this "word checksumming and skipping" approach is far from
> >perfect.  It won't scale well, except with slow salted hashes (and
> >multiple salts actually loaded).

On Tue, Jun 06, 2006 at 01:03:12PM +0200, Otheus (aka Timothy J. Shelling) wrote:
> This I have trouble grasping.  Is the problem that with unsalted or
> fast hashes, the word-checksumming-and-skipping approach spends a
> disproportionate amount of time generating words that are never
> hashed?

Exactly.  And things become worse as the number of nodes grows.
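The scheme under discussion can be sketched roughly as follows.  The checksum function and data here are hypothetical, chosen only for illustration (John's real filter() is written in its external-mode language, not Python); the point is that every node enumerates the full candidate stream and keeps only its own share:

```python
# Sketch of word-checksumming-and-skipping: every node generates the
# full candidate stream, but hashes only the words whose checksum maps
# to its own node id.  (Illustrative checksum, not John's actual code.)

def checksum(word: str) -> int:
    # Any cheap, reasonably well-distributed function works; a byte
    # sum is the simplest illustration.
    return sum(word.encode())

def keep_for_node(word: str, node_id: int, num_nodes: int) -> bool:
    return checksum(word) % num_nodes == node_id

candidates = ["password", "letmein", "123456", "qwerty", "dragon", "monkey"]
num_nodes = 3

# Each node filters the same stream; the union covers every candidate
# exactly once, but every node still pays to *generate* all of them.
partitions = {n: [w for w in candidates if keep_for_node(w, n, num_nodes)]
              for n in range(num_nodes)}
```

Note that the generation and checksumming work on each node is the same regardless of num_nodes, which is exactly the scalability problem described above.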

> If that's the problem, then I should point out that with unsalted hashes,
> it's clearly optimal to distribute the target patterns (the hashed
> passwords) amongst the nodes.

I re-read the above several times, but I still don't understand you.  If
I parse the above literally, you're suggesting that with multiple
unsalted hashes it would be optimal to have different nodes compare
against different hashes - but that is clearly not true.  In fact, the
opposite is true - having different nodes crack different hashes is only
reasonable with multiple salts (then you group the hashes by salt and
distribute the salts to your multiple nodes).  ("Reasonable" is the
right word to use here - it's not perfect, it's just reasonable.)
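Grouping by salt and dealing the salt groups out to nodes can be sketched like this; the hash records and field names are made up for illustration:

```python
# Sketch of salt-based distribution: group the loaded hashes by salt,
# then deal the salt groups out to the nodes round-robin.  Each node
# then computes every candidate password only under its own salts.
# (Hypothetical hash records; field names are illustrative.)

from collections import defaultdict

hashes = [
    {"salt": "ab", "hash": "abXXXX"},
    {"salt": "cd", "hash": "cdYYYY"},
    {"salt": "ab", "hash": "abZZZZ"},
    {"salt": "ef", "hash": "efWWWW"},
]

by_salt = defaultdict(list)
for h in hashes:
    by_salt[h["salt"]].append(h)

num_nodes = 2
assignment = defaultdict(list)
for i, (salt, group) in enumerate(sorted(by_salt.items())):
    assignment[i % num_nodes].append(salt)
```

This is "reasonable" rather than perfect in the sense above: salts may crack at different rates, so the load across nodes is only approximately balanced.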

> Are there "fast and salted" hashes I should be concerned about?

Yes - but only when the number of different salts is low.  The
traditional DES-based crypt(3) would be an example.  John may compute
around 1 million of such hashes per second, which is comparable to the
rate at which candidate passwords might be generated.  With 10 nodes,
you'd filter() out 90% of those candidate passwords on each node - and
the checksum function and/or filter() itself are costly, too - so you
might make things several times slower for each node when cracking a
single salt.  With 100 nodes, you might see no speedup over 10 nodes at
all.

Of course, with 1000 hashes and 900 different salts loaded for cracking,
things won't be that bad even with 100 nodes.
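A back-of-envelope model makes the saturation effect concrete.  The cost numbers below are illustrative, not measurements: take generation-plus-checksum cost per candidate as gen_cost and hashing cost as hash_cost, in the same arbitrary units.

```python
# Why checksum-and-skip stops scaling with a fast hash: every node
# still generates and checksums the full stream, and only the hashing
# work is divided by the node count.  (Illustrative costs, not
# measured numbers.)

def speedup(num_nodes, gen_cost, hash_cost):
    # One node: every candidate is generated and hashed.
    single = gen_cost + hash_cost
    # N nodes: each node generates everything, hashes ~1/N of it.
    per_node = gen_cost + hash_cost / num_nodes
    return single / per_node

# Fast hash: generation costs about as much as hashing, so the
# speedup saturates at 2x no matter how many nodes are added.
fast = [round(speedup(n, 1.0, 1.0), 2) for n in (1, 10, 100, 1000)]

# Slow salted hash (or many salts loaded): hashing dominates, so the
# speedup stays near-linear over the same range of node counts.
slow = [round(speedup(n, 1.0, 1000.0), 2) for n in (1, 10, 100, 1000)]
```

In the fast-hash case the model gives roughly 1.8x at 10 nodes and barely more at 100 or 1000, matching the "no speedup over 10 nodes" observation above; in the slow case it stays close to the node count until quite large N.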

> If so, what in your opinion would be the best way to parallelize it?

There's no single "best way".  Simplicity and generality are advantages
of your approach - and at least one of those would be lost as we attempt
to solve the scalability problem.

For "incremental" mode, I'd use the approach that I had briefly
described on this list before.  That approach is specific to
"incremental" mode (no generality), and it requires communication
between the master and its slave nodes (no simplicity).

For wordlist and external modes, I think I would simply have the master
feed candidate passwords to a reasonably small number of slaves (yes,
that won't scale to large numbers of nodes since the master system and
network bandwidth would become the bottlenecks).  This is only needed
when the rate at which candidate passwords are tried is not too high,
and this trivial approach works fine precisely in those cases.  With
fast saltless hashes, wordlist-based cracking is really fast anyway, so
there's no need to use a lot of nodes while in wordlist mode.
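The master-feeds-slaves idea for these modes can be sketched as below.  A real implementation would push words over sockets or MPI; plain lists stand in for the per-slave send queues here, purely for illustration:

```python
# Sketch of the master-feeds-slaves approach for wordlist/external
# modes: the master reads candidates and deals them out to a small,
# fixed set of slaves.  (Lists stand in for network send queues.)

def distribute(wordlist, num_slaves):
    queues = [[] for _ in range(num_slaves)]
    for i, word in enumerate(wordlist):
        # Round-robin keeps slaves evenly loaded.  The master and the
        # network become the bottleneck as num_slaves grows, which is
        # why this only suits a reasonably small number of slaves.
        queues[i % num_slaves].append(word)
    return queues

queues = distribute(["alpha", "beta", "gamma", "delta", "epsilon"], 2)
```

Unlike checksum-and-skip, no work is discarded here: each candidate is generated once and hashed once, at the price of central coordination.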

I don't recommend that you go for this full-blown distributed processing
implementation now.  I think that it'd be more productive for you to
develop a reliable patch implementing your current approach first.

Alexander Peslyak <solar at>
GPG key ID: B35D3598  fp: 6429 0D7E F130 C13E C929  6447 73C3 A290 B35D 3598
- bringing security into open computing environments
