Date: Thu, 19 Apr 2012 17:34:44 -0500
From: "jfoug" <>
To: <>
Subject: 'New' functionality added to JTR (finding salted passwords, without salts)

I have been working on some code: first 'outside' of john, and later
incorporating the logic into john properly (or hacked'ly).


This all started from that 143 million hash dump from KoreLogic.  I knew a
lot of them were dirty 'salted' values; I have run into this in the past.  A
lot of these showed up in the forums of InsidePro, and I think many of these
hashes were scooped up from there and put into this hash dump.


But the quandary is HOW to find passwords if the hash is salted but you
only have the hash, and not the salt.


Well, for a 'FEW' hash types, we can find passwords by brute forcing the
salts.  Here is how I started doing it (but did not carry on fully, due to
too much overhead, mostly file IO).


1.       Start with a VERY good password DB (JtR's password.lst is a very
good starting point).

2.       Build a script that will take each word from there and build a set
of input strings which can be tested using raw MD5 (in the 2 cases I have
done, they both use MD5).

3.       Run the hashes, using this large generated file as the dictionary.

4.       Carve off the salt part from the found hashes, and re-build a
proper dynamic hash using the hash and the 'found' salt.  This can simply
be re-written as a 'proper' john.pot line.
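The generation in step 2 can be sketched roughly like this (a Python sketch of
the idea only; the function names are mine, not part of john):

```python
import hashlib
from itertools import product

# Salt characters run from space (0x20) through tilde (0x7e): 95 of them.
SALT_CHARS = [chr(c) for c in range(0x20, 0x7f)]

def phps_candidates(password):
    # For md5(md5($p).$s), the raw-MD5 "plaintext" is hex(md5(p)) + salt,
    # so one password expands to 95**3 = 857375 dictionary lines.
    inner = hashlib.md5(password.encode()).hexdigest()
    for salt in product(SALT_CHARS, repeat=3):
        yield inner + ''.join(salt)

def osc_candidates(password):
    # For md5($s.$p), the raw-MD5 "plaintext" is salt + password:
    # 95**2 = 9025 lines per password.
    for salt in product(SALT_CHARS, repeat=2):
        yield ''.join(salt) + password
```

Each yielded line is something a raw-MD5 format can test directly; the file IO
cost of actually writing them all out is exactly the overhead described above.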


Well, the 2 formats I have been working on are:  PHPS (dynamic_6), which is
md5(md5($p).$s) with a 3-byte salt, and osCommerce (dynamic_4), md5($s.$p)
with a 2-byte salt.  Both of these formats have salt values from a space
char up to 0x7e (the '~' tilde char).  From some initial testing, I found a
lot of the PHPS (I have about 800k of them), and there are also a lot of
osCommerce (I am just starting to collect them).
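Concretely, the two hash expressions work out to this (Python sketch; the
helper names are mine, not john's):

```python
import hashlib

def md5hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def phps_hash(password: str, salt: str) -> str:
    # PHPS / dynamic_6: md5(md5($p).$s), with a 3-byte salt
    assert len(salt) == 3
    return md5hex((md5hex(password.encode()) + salt).encode())

def osc_hash(password: str, salt: str) -> str:
    # osCommerce / dynamic_4: md5($s.$p), with a 2-byte salt
    assert len(salt) == 2
    return md5hex((salt + password).encode())
```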


However, using 'external' modes to search for this is VERY expensive.  For
the PHPS type, there are 95^3 salts, so for EACH password you are going to
try, you have to md5 that password and write 857375 lines that are that md5
hash with 3 characters appended to it.  So building this 'dictionary' file,
and then later reading it back in on the john run, takes a HELL of a lot of
time and overhead.  The osCommerce is 'better' (but still very wasteful):
it requires writing 9025 lines, each 2 bytes of salt with the password
appended, for each password.
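The salt-space sizes quoted above fall straight out of the character range:

```python
# Valid salt characters: 0x20 (space) through 0x7e ('~'), inclusive.
n_chars = 0x7e - 0x20 + 1

print(n_chars, n_chars ** 2, n_chars ** 3)  # -> 95 9025 857375
```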


So, I have made some changes to john.


1.       I added a command line param (--regen-lost-salts=#, where # right
now can be 1 for PHPS or 2 for osCommerce).

2.       I added a new 'thin' format, OSC.  I did not want to hack these
changes into dynamic, for dynamic_6 and dynamic_4, so it was added into the
2 thin formats, phps_fmt and the new osc_fmt.

3.       When --regen-lost-salts=1 is used and the type is -form=phps,
the phps format will load 'raw' 32 byte hashes, without salts, and it
will add a dummy salt to each one (the exact same salt).

4.       If --regen-lost-salts=2 and -form=osc was used, the exact same
logic (loading raw hashes, adding a dummy salt) is done within that format.

5.       Both of these formats use dynamic (dynamic_6 or dynamic_4).

6.       When john loads the file, IF --regen-lost-salts was used, then john
will walk the salt array and find the last salt.  This salt record will
contain ALL of the just-loaded 'raw' hashes, with 'fake' salts.

7.       We then allocate space for 9025 salt records, or 857375 salt
records (depending upon 2 byte or 3 byte salt).

8.       Then each of these salt records is loaded.  ALL of the data from the
salt that was loaded by john is assigned to each of these salt records.

9.       Each salt record will get the 'proper' salt.  So we start with '  '
then ' !' then ' "', etc., up to '~~' (3 byte salts would be built the same
way).

10.   Then we simply let john run the way it wants to.  We have actually
inserted an extra 9025 (or 857375) salts into the runtime, and we have
pointed EVERY single hash into each of these salts, so that john will be
able to compare ALL hashes for each salt tested.

11.   If a hash gets cracked, then the salt FROM the cracked value is shoved
into the data BEFORE the full test is done.  This is because we do not yet
have the 'right' salt in our likely hash.

12.   The hash gets written to the .pot file as a $dynamic_4$ or a
$dynamic_6$ hash (default behavior for thin formats already).  These output
.pot lines will have the PROPER salt added to them.

13.   Within the loader and .pot removal logic, I also had to add logic (in
the --regen-lost-salts mode) so that hashes found in the .pot file are
removed.  This was done because at that time (loading), we still do NOT know
the correct salt.  Thus if the hash matches, we have to 'ignore' the salt.
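In effect, the steps above make john do the equivalent of the following
(a much-simplified Python sketch of the osCommerce case; none of these names
exist in john itself):

```python
import hashlib
from itertools import product

SALT_CHARS = [chr(c) for c in range(0x20, 0x7f)]  # ' ' through '~'

def regen_lost_salts(raw_hashes, wordlist, salt_len=2):
    # Every raw (salt-less) hash is compared under every possible salt;
    # a crack recovers the password AND the lost salt together.
    targets = set(raw_hashes)
    cracked = {}
    for chars in product(SALT_CHARS, repeat=salt_len):
        salt = ''.join(chars)
        for pw in wordlist:
            h = hashlib.md5((salt + pw).encode()).hexdigest()
            if h in targets:
                cracked[h] = (salt, pw)
    return cracked
```

Once an entry is cracked, the salt is known, so the result can be rewritten
as a normal salted .pot line.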


This change also incorporates some fixes to the logic.  These hashes can
contain 'bad' characters, so I was not loading data properly within the .pot
loading.  I have included these changes (which really ARE a separate issue,
but were found here).


1.       At .pot writing, IF the format is a dynamic format, then we call a
new function which will return a possibly 'fixed' hash string.

2.       If the salt contains any ':' chars, or any '$' chars, then the salt
gets converted into $HEX$ form.  This is ONLY done at .pot writing.

3.       Within the dynamic 'prepare' function, we reverse this logic (and
remove all $HEX$ data).

4.       The only time that data (the $HEX$) is left intact is if there are
NULL bytes within the HEX.  If that is the case, we leave it alone and
handle that HEX decoding at salt loading time.

5.       This 'fix' in the prepare gets the line into proper format, to do
proper comparisons at .pot loading / .pot removal.
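The .pot-side fix-up amounts to something like this (a sketch only; the real
code lives in john's C sources, and these function names are illustrative):

```python
def pot_encode_salt(salt: str) -> str:
    # At .pot writing time: a salt containing ':' or '$' would break the
    # field layout, so it is emitted in $HEX$ form instead.
    if ':' in salt or '$' in salt:
        return '$HEX$' + salt.encode('latin-1').hex()
    return salt

def pot_decode_salt(field: str) -> str:
    # In prepare(): undo the encoding -- unless the bytes contain a NUL,
    # in which case the $HEX$ form is kept and decoded at salt-loading time.
    if field.startswith('$HEX$'):
        raw = bytes.fromhex(field[len('$HEX$'):])
        if b'\x00' not in raw:
            return raw.decode('latin-1')
    return field
```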


I am not quite ready to release the patch(es) just yet.  I am 90% sure that
the --regen-lost-salts logic is fully working and should NOT impact normal
running of john.  However, I am not 100% sure that the $HEX$-to-pot and
un-$HEX$-at-pot-load logic is fully working and bug free.  Also, I have not
documented these changes at all.


But I will try to get this done this weekend at the latest.


As a note, the PHPS is very slow.  It will only do a couple 'passwords' a
second, so you really have to have a GOOD password list and lots of
possible candidates.  The OSC is much faster (95x faster).  It might be
fast enough to do 'some' rules logic on VERY good password DBs, and
possibly a very limited amount of -inc work.  The OSC is likely slower than
MSCASH2.  The PHPS is much slower.





