Date: Fri, 9 Dec 2011 08:24:55 -0600
From: "jfoug" <jfoug@....net>
To: <john-dev@...ts.openwall.com>
Subject: RE: cracking RADIUS shared secrets with john the ripper

>From: Solar Designer [mailto:solar@...nwall.com]
>
>On Tue, Dec 06, 2011 at 12:49:48AM +0100, Didier Arenzana wrote:
>> I have added a patch, and a zip file containing a perl script to the
>> wiki, with a page with brief instructions on how to use both of them
>> to crack RADIUS shared secrets :
>>
>
>magnum, JimF - can you get this into your trees and (one of you) upload
>it as a numbered patch such that it is given some testing (on your
>systems) and is not missed when I work on a new -jumbo?  I think this
>patch should be adding the Perl script as well.

I have taken a different approach to this.  If we want to get this out ASAP
into jumbo, then we should probably place Didier's cut into whatever jumbo
we create.

The direction I am going is to do some rework and enhancement of the salt
'system' in dynamic.  Within that format, the salt logic is pretty tricky,
and is actually a relatively large part of the format.  This is due to the
format being generic: it can have a salt, or 2 salts, or use the user name,
which is placed into the salt (at runtime, the salt is the ONLY field that
can provide us this type of data).  Also, the format allows any of the 10
'fields' from the input file to be available.  Thus, by being as generic as
possible, we have a pretty complex system.
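
To illustrate what "placed into the salt" means, the extra values end up
packed into one salt string, flagged by the $$2 / $$U / $$Fx signatures
mentioned below, and the format has to split them back apart.  Roughly
like this (a sketch only; the struct and function names are made up, not
the real dynamic code):

#include <string.h>

struct salt_parts {
    const char *salt;
    const char *salt2;
    const char *user;
};

/* Split a packed salt such as "mysalt$$2salt2$$Ujoe" into its parts. */
static void split_salt_fields(char *packed, struct salt_parts *out)
{
    char *p;

    memset(out, 0, sizeof(*out));
    out->salt = packed;
    while ((p = strstr(packed, "$$")) != NULL && p[2]) {
        *p = 0;                   /* terminate the part before the marker */
        switch (p[2]) {
        case '2': out->salt2 = p + 3; break;   /* second salt */
        case 'U': out->user  = p + 3; break;   /* user name   */
        /* the "$$F0".."$$F9" input-file fields would go here as well */
        }
        packed = p + 3;
    }
}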

I also noticed that where Didier put the code caused processing to be done
each time a salt was set.  But when I started looking at the code, there was
other processing also being done.  The string was scanned for $$2/$$U/$$Fx
signatures, and those items were pulled out.  Also, because the format does
not really know just what 'should' be in a salt, it had to make many
assumptions about the size of the salt, so john was storing a very large
amount of salt data.  Also, at the very least, each salt was being memcpy'd
from the salt string to a temp variable (or a set of them, if there were $$2
or other salt parts).  This was because the salt was a string, but possibly
not a directly usable one, since it could be packed together with the other
salt parts.  One other issue which the new HEX$ type brought up is that the
same salt can be entered in multiple ways, which the current code would
treat as different salts.

$123 and $HEX$313233 are both exactly the same, but would be treated as
different salts by john.  Also, $salt$$2salt2$$Uusername and
$salt$$Uusername$$2salt2 are exactly the same salt, but treated as 2
different ones.
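
To make those collapse into one salt, the normalization has to decode the
HEX$ form back into raw bytes and emit the $$2/$$U/$$Fx parts in one fixed
order.  A sketch of the HEX$ half of that (made-up names, and the real code
handles more cases; the leading '$' in the examples above is just the field
separator, so the parts being normalized are "123" and "HEX$313233"):

#include <stdio.h>
#include <string.h>

static int hex_nibble(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Decode an optional "HEX$" prefix in place; return the byte length. */
static size_t normalize_salt_part(char *s)
{
    char *in, *out;

    if (strncmp(s, "HEX$", 4))
        return strlen(s);
    in = s + 4; out = s;
    while (in[0] && in[1]) {
        *out++ = (char)((hex_nibble(in[0]) << 4) | hex_nibble(in[1]));
        in += 2;
    }
    *out = 0;
    return (size_t)(out - s);
}

int main(void)
{
    char a[] = "123", b[] = "HEX$313233";
    size_t la = normalize_salt_part(a), lb = normalize_salt_part(b);

    printf("%d\n", la == lb && !memcmp(a, b, la));  /* prints 1 */
    return 0;
}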

So, I have set out to make some changes in dynamic's salt methods.

I have several goals (in priority order):
1. Normalize the salts, so that the above duplicate salts show up as a
single salt.
2. Reduce memory usage, where I can.
3. Increase performance rate.
4. Reduce load time.
5. Reduce code complexity (trying to eliminate errors).

I am now on my 3rd rewrite.  The first one I scrapped.  The 2nd used way
too much memory.  It did get load times 10x faster (I have a 500k hash file
generator, listed later in this email).  Load time went from about 75s to 7s
or so.  Performance on that version was just a tiny bit slower (about .5%)
than the original.

So, I went back to the drawing board, started from the original salt code,
and put some of the logic from my 2nd try into it, leaving the original code
somewhat alone and simply adding the normalization and hashing.  The salt
duplicate removal is done WITHIN the dynamic format, and the salt data is
'owned' by dynamic; only a pointer is returned to john.  This is so we can
have a small memory footprint.  The format can allocate only what is needed,
which can be different for each candidate, while john only has to store 4 or
8 bytes, which is a pointer to the prepared salt object.
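
Roughly, this is the usual 'intern' pattern (a sketch only, with made-up
struct and function names; the real dynamic code differs): salt() normalizes
the string, looks it up in a hash table owned by the format, allocates a
record of exactly the needed size only if it is new, and hands back the
pointer.

#include <stdlib.h>
#include <string.h>

#define SALT_HASH_BUCKETS 4096

struct salt_rec {
    struct salt_rec *next;      /* hash bucket chain                     */
    unsigned len;
    char data[1];               /* 'len' bytes of normalized salt follow */
};

static struct salt_rec *salt_buckets[SALT_HASH_BUCKETS];

static unsigned salt_bucket(const char *s, unsigned len)
{
    unsigned h = 0;
    while (len--)
        h = h * 31 + (unsigned char)*s++;
    return h % SALT_HASH_BUCKETS;
}

/* Return the single shared copy of this normalized salt (find or add). */
static void *intern_salt(const char *norm, unsigned len)
{
    unsigned idx = salt_bucket(norm, len);
    struct salt_rec *r;

    for (r = salt_buckets[idx]; r; r = r->next)
        if (r->len == len && !memcmp(r->data, norm, len))
            return r;             /* duplicate salt: reuse existing copy */

    r = malloc(sizeof(*r) + len); /* allocate only what this salt needs  */
    r->next = salt_buckets[idx];
    r->len = len;
    memcpy(r->data, norm, len);
    salt_buckets[idx] = r;
    return r;                     /* john stores only this pointer       */
}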

So with this new version, memory usage has gone down.  The 500k-salt
dynamic_1 test file went from 79MB to 72MB on my test run.  The load time
is 'almost' as good as version 2: it went from 75s down to about 9s.  The
speed is about 3% faster for the 'many salts' case, which is what I was
hoping for.  The salts have been 'normalized', and when I get some bugs
worked out, all duplicate salts will be properly removed.

This code also has Didier's HEX$ code in it (or code that works the same).
This code is run for all salts, and for all salt2, user name, and F[0] to
F[9] data.  The only time we really 'need' to use the HEX$ packing is if
there are NULL bytes, ':' chars, or \r\n chars in the word.  Since most of
john uses C file and string functions, and uses ':' as the separator in its
CSV file format, those chars are off limits.  This new HEX$ change could
also allow us to deprecate the --field-separator-char=X option, and all of
the code there.  We may need to keep that option for a while, but the reason
I added it in the first place was salts which contained ':' chars.
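
In other words, the encoding decision is nothing more than a scan for those
few characters.  A sketch (the function name is made up):

/* A salt (or salt2 / user name / F[x] field) only has to be written as
 * HEX$<hex digits> when it contains a byte that would break john's
 * colon-separated, C-string based file handling. */
static int needs_hex_encoding(const char *s, unsigned len)
{
    unsigned i;

    for (i = 0; i < len; i++) {
        unsigned char c = (unsigned char)s[i];
        if (c == 0 || c == ':' || c == '\r' || c == '\n')
            return 1;
    }
    return 0;
}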

Also, within dynamic, there is bulletproofing code that has a
'salt-max-size' value.  Since strlen(HEX$salt_in_hex_digits) is 2*x+4 where
x is strlen(salt), I have made code changes to the valid() function so that
it detects the HEX$, properly computes the real length of the salt, and only
scraps the candidate if the real salt length is over the max length.  This
was the problem Didier found in dynamic_1 which forced him to use
dynamic_1008 (which does not properly length check the salts).  With this
change, dynamic_1 works just fine.  The salts Didier has are only 16 bytes,
and the max for dynamic_1 is 32 bytes.
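
The length check in valid() then just has to undo the HEX$ inflation before
comparing, something like this (a sketch, not the real valid() code):

#include <string.h>

/* A salt written as "HEX$<hex digits>" is 2*x+4 characters long for a
 * real salt of x bytes, so recover x before checking the limit
 * (32 bytes for dynamic_1). */
static int salt_length_ok(const char *salt_field, unsigned max_salt_len)
{
    size_t len = strlen(salt_field);

    if (!strncmp(salt_field, "HEX$", 4))
        len = (len - 4) / 2;      /* real byte length of the salt */

    return len <= max_salt_len;   /* scrap only if truly too long */
}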

I still have a bug, and duplicates are not being removed.  I will get that
fixed shortly.  I will then make sure it passes the full test suite for all
builds, and also build on Sparc64 (and Linux x64) and validate those builds
as well.  It is a pretty decent sized change.

The only one of the goals not achieved was #5, reducing code complexity.  I
have pretty much kept all of the original code (with some minor changes),
and added 2 steps to it.  I now have a 'normalization' step, and then a hash
list search to either find this salt, or to allocate memory and add it.
Either way, we return just a pointer to the salt data.  The salt_hash() and
set_salt() functions HAVE been reduced in complexity, and performance has
been improved.  But the complexity added to salt() greatly outweighs the
complexity reductions in those two.  However, the actual work being done IS
now packed into the salt() function, where it should be, and set_salt() has
been made to run faster.
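
With the salt fully prepared and deduplicated in salt(), set_salt() collapses
to little more than storing the pointer, e.g. (sketch with made-up names):

static void *cur_salt;            /* hypothetical current-salt pointer */

/* All parsing/normalization already happened in salt(); set_salt()
 * only has to remember which prepared salt object is active. */
static void set_salt_ptr(void *salt)
{
    cur_salt = salt;
}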

Once I get this version debugged and fully tested, I will get it uploaded
to the wiki.

Jim.


