Date: Sun, 17 Jul 2016 17:46:07 +0300
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: Loading a large password hash file

On Sun, Jul 17, 2016 at 02:37:04PM +0200, Albert Veli wrote:
> Yes, I ran unique. 93 million hashes in total after unique. But I had
> NoLoaderDupeCheck = N in john.conf. That is probably why it took so
> long to load.

Like I said, this only makes for a 2x (or so) difference in loading time -
significant, but hardly enough to make it optimal to split at as few as
10 million hashes.

You never mentioned how long it actually took to load your 93 million
hashes.  I'd expect under 3 minutes with dupe check still enabled, under
1.5 minutes without.

> Another thing I noticed is that running --show=left and saving that output
> to a file, after that it loaded faster. The format had changed. Now the
> hashes are stored in a format like this:
> 
> ?:{SHA}Abi8FO/Nqwx9rAnk+2BgQmIIbeY=
> 
> Which I guess is base64-encoded binary data, instead of the ASCII hex
> format that the original file had. It looks like the base64-encoded
> hashes load faster.

That, or it could be that you have significantly fewer hashes left to
crack, so you get time savings from not needing to load and eliminate
already cracked hashes.
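As a side note on the format change: the {SHA} prefix is the LDAP-style
scheme, where the raw 20-byte SHA-1 digest is base64-encoded rather than
written as 40 hex characters. The two encodings carry exactly the same
digest, so by itself the encoding should not matter much for load speed.
A minimal sketch (in Python, using the example hash from the message
above) showing the equivalence:

```python
import base64
import hashlib

# LDAP-style {SHA} value from the message above (prefix stripped):
# base64 of the raw 20-byte SHA-1 digest.
b64_hash = "Abi8FO/Nqwx9rAnk+2BgQmIIbeY="

digest = base64.b64decode(b64_hash)
print(len(digest))    # 20 bytes - a SHA-1 digest
print(digest.hex())   # the same hash in the original 40-char hex notation

# Going the other way: hashing a candidate password and re-encoding it
# in the {SHA} form for comparison.
candidate = hashlib.sha1(b"example-password").digest()
print("{SHA}" + base64.b64encode(candidate).decode())
```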

Alexander
