Message-ID: <CAEo4CePhvxJbF-njJA23k+JxkTkcZGYQC+o_w+TOevdFSogNfg@mail.gmail.com>
Date: Sun, 17 Jul 2016 14:37:04 +0200
From: Albert Veli <albert.veli@...il.com>
To: john-users@...ts.openwall.com
Subject: Re: Loading a large password hash file

On Wed, Jul 13, 2016 at 4:07 PM, Solar Designer <solar@...nwall.com> wrote:
> On Wed, Jul 13, 2016 at 09:11:24AM +0200, Albert Veli wrote:
> > For the record, on my computer it's faster to split the hash file and
> > loop than to wait for the whole file to load. About 10 million lines
> > per hash file seems to be a good value for my machine:
> >
> > split -l 10000000 huge_hashlist.txt
> >
>
> Splitting at 10 million lines is unreasonably low for the current
> code. What version of JtR are you using?
I use the jumbo version from git.
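
For reference, the split-and-loop looks roughly like this (a minimal
sketch; the chunk prefix, wordlist and format names are placeholders,
not necessarily what I ran):

# split into 10-million-line chunks named part_aa, part_ab, ...
split -l 10000000 huge_hashlist.txt part_

# then crack each chunk in turn
for f in part_*; do
    ./john --wordlist=words.txt --format=raw-sha1 "$f"
done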
> You might want to set "NoLoaderDupeCheck = Y" in john.conf, especially
> if your hash file is already unique'd.
Yes, I ran unique first; 93 million hashes in total after unique. But I
had NoLoaderDupeCheck = N in john.conf, which is probably why it took so
long to load.
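
In case it is useful to someone else, the two pieces look roughly like
this (file names are placeholders):

# JtR's unique reads stdin and writes de-duplicated lines
# to the named output file
./unique unique_hashes.txt < huge_hashlist.txt

and then in john.conf:

[Options]
# the file is already unique'd, so skip the loader's own check
NoLoaderDupeCheck = Y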
Another thing I noticed: after running --show=left and saving that
output to a file, the file loaded faster. The format had changed; the
hashes are now stored in a form like this:

?:{SHA}Abi8FO/Nqwx9rAnk+2BgQmIIbeY=

which I guess is base64-encoded binary data instead of the ASCII hex
format of the original file. It looks like the base64-encoded hashes
load faster.
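
The sequence was roughly this (a sketch; file names are placeholders,
and the format is whatever JtR auto-detected for me):

# write the uncracked hashes, in JtR's canonical form, to a new file
./john --show=left huge_hashlist.txt > left.txt

# later runs load left.txt instead of the original file
./john --wordlist=words.txt left.txt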