Date: Wed, 31 Aug 2005 00:31:53 +0400
From: Solar Designer <>
Subject: Re: trivial parallel processing (4 CPUs)

On Tue, Aug 30, 2005 at 10:27:04AM -0600, Stephen Cartwright wrote:
> Breaking the password file apart randomly causes duplicate salts to be used 
> across the password files, but what if you broke the password file apart 
> based upon salt?
> Then for every salt, there would only be 1 set of passwords still. right?

Yes, and this works fine for most salted hash types, but with bigcrypt
there is the difficulty so nicely explained by Frank.

Also, whenever you split your password files rather than your keyspace,
you increase the key generation and setup "overhead".  I'll try to
explain this.

John obtains (or generates) its candidate passwords to try using certain
algorithms (cracking modes).  While those algorithms are pretty
efficient, they nevertheless do consume quite a few CPU cycles to
generate each candidate password.  With the more advanced (slower)
password hash types, this "overhead" is negligible, while with the
weaker/quicker ones (e.g., LM hashes) it has to be taken into account.
The traditional DES-based crypt(3) hashes are a marginal case, with the
password generation "overhead" being very low but still measurable.

More importantly, John is smart enough to separate the "key setup"
from the actual hashing (the details of how this may or may not be done
are specific to each hash type).  The key setup only needs to be done
once per candidate password, whereas the hashing has to be done for each
{candidate password, salt} combination.  For traditional DES-based
crypt(3) hashes, the cost of key setup is roughly 10% of the cost of
actual hashing.
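To illustrate the structure described above, here is a minimal sketch (all names are invented for illustration, not John's actual internals) showing why key setup is amortized across salts while hashing is not:

```python
def try_candidates(candidates, salts, key_setup, hash_fn):
    """Count how often each stage runs: key setup happens once per
    candidate password, hashing once per {candidate, salt} pair."""
    setups = hashes = 0
    for password in candidates:
        state = key_setup(password)       # once per candidate
        setups += 1
        for salt in salts:
            hash_fn(state, salt)          # once per {candidate, salt}
            hashes += 1
    return setups, hashes

# 1000 candidates against 10 salts: 1000 setups but 10000 hash
# computations, so the setup cost is spread over all 10 salts.
s, h = try_candidates(range(1000), range(10),
                      key_setup=lambda p: p,
                      hash_fn=lambda state, salt: (state, salt))
```

The point is that if you split by salt instead of by keyspace, each instance repeats the key setup (and candidate generation) for the same passwords, duplicating that per-candidate work.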

So we've got two tasks which are done per-candidate rather than
per-salt.  When running multiple instances of John with each trying the
same candidate passwords (even against different salts), you have each
instance do some duplicate work (the same that other instances do).

If the number of salts per John instance is not very low, this overhead
will be acceptably low (e.g., around 1% for traditional crypt(3) with 10
salts per John instance).  However, if you split just 4 different salts
across 4 CPUs in this way, leaving each instance only 1 salt, the
overhead could be around 10%.
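These percentages follow from the 10% key-setup figure given above. As a rough model (my own back-of-the-envelope estimate, not a measurement), the duplicated fraction of work per instance is the per-candidate setup cost divided by the total per-candidate work:

```python
def duplicated_overhead(salts_per_instance, setup_cost=0.1):
    """Fraction of an instance's work that is duplicated key setup,
    assuming setup costs 10% of one hash computation (the figure
    quoted for traditional DES-based crypt(3))."""
    total_work = setup_cost + salts_per_instance  # per candidate
    return setup_cost / total_work

# 10 salts per instance: 0.1 / 10.1, roughly 1% overhead.
# 1 salt per instance:   0.1 / 1.1,  roughly 9-10% overhead.
```

This ignores the candidate generation cost itself, which would push the numbers slightly higher, but it matches the 1% and 10% figures mentioned.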

Of course, if splitting password files rather than the keyspace happens
to help achieve a more optimal order of candidate passwords, that may
make up for the overhead.  So it is not obvious which approach is better
after all.

Alexander Peslyak <solar at>
GPG key ID: B35D3598  fp: 6429 0D7E F130 C13E C929  6447 73C3 A290 B35D 3598
- bringing security into open computing environments
