Date: Wed, 23 Nov 2005 02:55:49 +0300
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: Speed up John

On Tue, Nov 22, 2005 at 11:36:58PM +0100, Frank Dittrich wrote:
> Assuming the password can contain up to 8 characters, and a
> character set of 95 different characters, you'll get about 6.7e15
> different passwords.

That's correct.
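(For reference, the 6.7e15 figure is the sum of the candidate counts for each length from 1 to 8; a quick arithmetic sketch in Python:)

```python
# Candidate passwords of length 1..8 over a 95-character printable set
charset = 95
total = sum(charset ** length for length in range(1, 9))
print(f"{total:.2e}")  # about 6.70e15
```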

> The master could just once precompute all password candidates,

Unfortunately, it would take one of the fastest CPUs available today
almost 30 years to do that with the code currently in John.  (Of course,
that code can be optimized somewhat and that would need to be done
if/once I add support for FPGA-based coprocessor cards.  But the time
to pre-generate all password candidates on one CPU will remain on
the order of years.)
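(To put a rough number on that: at a generation rate of a few million candidates per second -- the ~7M c/s figure below is an assumption for illustration, not a measured rate -- the full keyspace does indeed take decades:)

```python
# Rough estimate: time to enumerate the full keyspace on one CPU.
# The ~7M candidates/second rate is an assumed figure for illustration.
total = sum(95 ** length for length in range(1, 9))  # ~6.7e15 candidates
rate = 7e6                                           # candidates/second (assumed)
seconds_per_year = 365.25 * 24 * 3600
years = total / rate / seconds_per_year
print(f"{years:.0f} years")  # roughly 30 years
```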

Obviously, you do not really have to wait for the master to complete
this task before you can start assigning .rec files to clients as you
propose:

> and save the corresponding .rec files after 100 million new passwords
> have been created.
> Then, just store the status information of these .rec files in a DB.
> Re-generate .rec files for the clients by adding the --format option ...
> and let each client ask for a new .rec file after 100 million passwords
> have been tried.

100 million candidate passwords is quite a lot for slow hash types
and/or large numbers of different salts.  Even with the traditional
crypt(3) and fast CPUs capable of 1M c/s, it would be taking each node
almost 5 days to complete the 100M passwords for all 4096 salts.  This
means that for the first 4 days, all but one node would be trying
somewhat unlikely candidates, while a lot of more likely passwords
would be "reserved" to be tried by the first node later.
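(The 5-day figure follows directly from the chunk size, salt count, and hash rate:)

```python
# Time for one node to finish a 100M-password chunk against 4096 salts
# at 1M crypt(3) computations per second.
chunk = 100e6        # candidate passwords per chunk
salts = 4096         # distinct traditional crypt(3) salts
rate = 1e6           # hash computations per second
days = chunk * salts / rate / 86400
print(f"{days:.1f} days")  # about 4.7 days
```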

With slower hash types, things get a lot worse.

Obviously, smaller chunks could be used, but then the .rec files would
not fit on a modern hard drive.  So you'd have to give up on your
one-time precomputation idea (or only do that for a portion of the
keyspace).
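(How much .rec storage the precomputation would need depends on the chunk size; assuming on the order of 1 KB per .rec file -- an assumed size for illustration -- the numbers look like this:)

```python
# Number of .rec files (and rough storage) to cover the whole keyspace,
# assuming ~1 KB per .rec file (an assumed size for illustration).
keyspace = sum(95 ** length for length in range(1, 9))  # ~6.7e15
rec_size = 1024                                         # bytes, assumed
for chunk in (100e6, 1e6):
    files = keyspace / chunk
    gb = files * rec_size / 1e9
    print(f"chunk {chunk:.0e}: {files:.1e} files, {gb:.0f} GB")
```

At 100M-password chunks that is already tens of millions of files; at 1M-password chunks the storage grows into the terabytes.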

The good thing is that the .rec files can be generated without having to
actually generate all candidate passwords -- but that would require
messing with John internals (and I do consider the undocumented .rec
file format to be part of the "internals").  It is my understanding that
you wanted this proposal to not require that (since a lot better
approaches are possible otherwise and I had described one of those).

> I did not check whether john creates a .rec file with --stdout.

It does, although that doesn't make much sense since with many uses of
--stdout the candidate passwords are getting buffered by external
software before they're being tried.  This means that some candidate
passwords already output by --stdout are actually getting lost whenever
the session is interrupted.  So John would need to keep the state it
saves into the .rec file a few hundred candidates behind.  Maybe I
should actually add an option to specify that .rec update delay.

Of course, this problem I've pointed out does not affect your proposed
abuse of --stdout. :-)

In case you're curious, while with the password hashes supported by
John itself there is _also_ a lot of internal buffering going on, John
knows just when its candidate password generator is in sync with actual
passwords tried and it will only update the .rec file at those times.
So there's no artificial/arbitrary update delay needed.

-- 
Alexander Peslyak <solar at openwall.com>
GPG key ID: B35D3598  fp: 6429 0D7E F130 C13E C929  6447 73C3 A290 B35D 3598
http://www.openwall.com - bringing security into open computing environments

Was I helpful?  Please give your feedback here: http://rate.affero.net/solar
