Date: Sat, 26 Apr 2008 09:07:22 +0200
From: Simon Marechal <>
Subject: Re: working with large files

I'm often working with 500k/1M saltless passwords. A typical task is to
run john on LM hashes for a few minutes to prune the file before doing
the whole rainbowcrack stuff.

Solar Designer wrote:
> once.  I will likely deal with that by introducing two additional hash
> table sizes - with 64 Ki and 1 Mi entries - or maybe by introducing
> arbitrary-size hash tables (although the hash functions for those would
> be slower).

Here, for a 1M pwd file the 64K-entry hash table is 5 times faster
than the standard one (when few passwords are found). This is definitely

> negligible for sessions that run for weeks or months.  In practice, all

My issue here is that I do not want to run john for months before
running rainbowcrack on the hard hashes. I want to spend a few minutes
removing the low-hanging fruit.
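The speed difference discussed above largely comes down to the bucket
computation: the power-of-two table sizes mentioned earlier (64 Ki, 1 Mi)
let the index be derived with a single bit mask, while an arbitrary table
size forces an integer modulo, which is much slower on most CPUs. A
minimal illustrative sketch (hypothetical helper names, not John's actual
code):

```c
#include <stdint.h>

#define POW2_SIZE (64 * 1024)  /* 64 Ki entries, a power of two */

/* Power-of-two size: bucket index is one AND instruction. */
static unsigned int bucket_pow2(uint32_t h)
{
    return h & (POW2_SIZE - 1);
}

/* Arbitrary size: needs a modulo, i.e. an integer division. */
static unsigned int bucket_arbitrary(uint32_t h, unsigned int size)
{
    return h % size;
}
```

Both map a 32-bit hash value onto a bucket index; the masked version is
why fixed power-of-two table sizes keep the per-candidate lookup cheap.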

> This is what I meant by "updating the existing per-salt hash table"
> above.  There should be no side-effects if you do it right (and if the
> hardware is not faulty).  Also, don't forget to update the "count".

I will try to do that properly next week.
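For what "updating the existing per-salt hash table" and the "count"
could look like, here is a minimal sketch under assumed structures (these
are illustrative, not John's exact internals): when a hash is cracked,
unlink its entry from the bucket chain for that salt and decrement the
salt's count, so later candidates are no longer compared against it.

```c
#include <stddef.h>

struct pw_entry {
    struct pw_entry *next;   /* next entry in the same hash bucket */
    const void *binary;      /* binary hash to compare against */
};

struct salt_entry {
    struct pw_entry **hash;  /* bucket array for this salt */
    int count;               /* number of hashes left to crack */
};

/* Unlink a cracked entry from its bucket chain and update the count. */
static void remove_cracked(struct salt_entry *salt, unsigned int bucket,
                           struct pw_entry *cracked)
{
    struct pw_entry **link = &salt->hash[bucket];

    while (*link) {
        if (*link == cracked) {
            *link = cracked->next;   /* splice the entry out */
            salt->count--;           /* keep the count consistent */
            return;
        }
        link = &(*link)->next;
    }
}
```

The pointer-to-pointer walk keeps the head-of-bucket and mid-chain cases
identical, which is the part that is easy to get subtly wrong.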


