Date: Fri, 1 Feb 2013 07:18:33 -0600
From: "jfoug" <jfoug@....net>
To: <john-dev@...ts.openwall.com>
Subject: RE: Speeding up WPAPSK, by leveraging salt shortcomings

Here is the patch.  I have kept my debugging code in it (for now).  The
debugging is mostly extra data in the structure, plus using a fixed array
instead of a dynamically allocated one.  But the debug code is wrapped in a
#define, so unless that macro is defined, the extra crap will not be
compiled in.

The very first function in the patch file (salt_compare) is the one that
would be placed into the format structure.  That function can do whatever
it wants with a pair of const salt values to determine which one sorts
before the other (or whether they are equal in sorting value).  The actual
function pointer passed into qsort() is the next function, and it stays in
loader.c.  That function understands the structure used to build the array
of linked-list salt items and knows how to dig the salt value out of a
db_salt structure.  This causes two layers of indirection per qsort compare
call, but the extra overhead is tiny: some pointer math and the cost of one
extra function call.  It should not even be noticeable.
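
Roughly, the shape is something like this (a simplified sketch, not the
actual patch code; db_salt_stub and salt_item are just stand-ins for what
the real loader.h structures provide):

#include <string.h>

struct db_salt_stub {           /* stand-in for loader.h's db_salt */
    void *salt;                 /* the format's binary salt value */
    /* ... other members omitted ... */
};

/* Format-specific compare: gets two const salt values and decides which
 * sorts first.  This is the function that would later move into the
 * format structure (for wpapsk the salt is essentially the ESSID). */
static int salt_compare(const void *x, const void *y)
{
    return strcmp((const char *)x, (const char *)y);
}

/* The qsort() callback that stays in loader.c: it knows each array
 * element points at a node of the salt linked list, digs the raw salt
 * value out of the db_salt, and forwards to salt_compare().  That is
 * the two levels of indirection: pointer math plus one extra call. */
struct salt_item {              /* element of the temporary sort array */
    struct db_salt_stub *node;
};

static int ldr_salt_cmp(const void *x, const void *y)
{
    const struct salt_item *a = x, *b = y;

    return salt_compare(a->node->salt, b->node->salt);
}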

The ldr_sort_salts function is static and simply checks whether this format
should be sorted (right now, a strncmp against "wpapsk").  It also bails
out quickly if there are fewer than two salts.  I believe this one-salt
check makes any unsalted format bail out quickly as well.
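
The driver logic is tiny; a minimal sketch (again with simplified stand-ins
for the real db_main and fmt_main structures) would be:

#include <string.h>

struct fmt_params_stub { const char *label; };
struct fmt_main_stub   { struct fmt_params_stub params; };

struct db_main_stub {
    struct fmt_main_stub *format;
    int salt_count;
    /* ... linked list of salts, etc. ... */
};

static void ldr_sort_salts(struct db_main_stub *db)
{
    /* Fewer than two salts means nothing to sort; unsalted formats
     * end up with a single salt entry, so they bail out here too. */
    if (db->salt_count < 2)
        return;

    /* Hard-coded to wpapsk for now. */
    if (strncmp(db->format->params.label, "wpapsk", 6))
        return;

    /* ... build the salt_item array, qsort() it with ldr_salt_cmp
     *     from the sketch above, then relink the salt list ... */
}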

This patch is against unstable.  There should be no side effects except for
runs against wpapsk hashes.  For later integration into bleeding, we could
add a salt_sort method to the format structure, build a
fmt_default_salt_sort, and set all formats to use that.  Then the salt_sort
function would move from loader.c into wpapsk*_fmt.c.  The ldr_salt_cmp
would call the format's salt_sort, and the check within ldr_sort_salts
would change from the strncmp of "wpapsk" to a test of whether the format's
salt_sort method points to fmt_default_salt_sort or not.
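
As a sketch of that bleeding-side plan (salt_sort, fmt_default_salt_sort
and ldr_salt_cmp are the names used in this mail; the stub structures and
the helper ldr_format_wants_salt_sort are only for illustration):

/* formats.h side: a new method slot plus a do-nothing default */
typedef int (*salt_sort_fn)(const void *x, const void *y);

struct fmt_methods_stub {
    /* ... existing format methods ... */
    salt_sort_fn salt_sort;     /* new: per-format salt ordering */
};

int fmt_default_salt_sort(const void *x, const void *y)
{
    (void)x; (void)y;           /* default: leave the order alone */
    return 0;
}

/* loader.c side: this test replaces the strncmp of "wpapsk" */
static int ldr_format_wants_salt_sort(const struct fmt_methods_stub *m)
{
    return m->salt_sort != fmt_default_salt_sort;
}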

But I have not done the bleeding work yet.  The code as it stands should
also work in bleeding without change; it will simply keep its hard-coding
to wpapsk.

Jim.  

-----Original Message-----
From: magnum [mailto:john.magnum@...hmail.com] 
Sent: Friday, February 01, 2013 3:18
To: john-dev@...ts.openwall.com
Subject: Re: [john-dev] Speeding up WPAPSK, by leveraging salt shortcomings

On 1 Feb, 2013, at 10:06 , magnum <john.magnum@...hmail.com> wrote:
> On 1 Feb, 2013, at 2:33 , <jfoug@....net> wrote:
>> But all in all, this was a pretty easy change.  I just had to step through
>> the code for a while, looking for where to put it, and then dump some
>> structures, looking for how it was all put together.  I'm not quite sure
>> where to go with this code right now.  It needs a little cleaning, but
>> might not be bad for J8.  Right now it would only execute for wpapsk
>> formats that have more than one salt, so it 'should' be safe.  But I will
>> wait for instructions from others.
> 
> I see no problems with adding this new format method, except I don't want
> to diverge from core (or next core) if I can avoid it at all. If Solar
> agrees to add it for 1.8, I'd like to add it to bleeding right now.
> 
> Otherwise perhaps we could try my idea... Will it slow the loading of
> 10,000 crypt-md5 salts? Probably not much. Would it hurt any format? Not
> likely. It will be less flexible for future formats, though.

We have a third option of course: Apply the patch as-is, with "if wpapsk" in
loader.c. I do not mind that at all for now. In fact we should definitely
add it as-is to unstable/Jumbo-8 regardless of what we do to bleeding. So
please post the patch!

magnum

Download attachment "sort_salt.patch" of type "application/octet-stream" (6726 bytes)
