Date: Tue, 29 Jan 2013 10:55:31 -0600
From: "jfoug" <>
To: <>
Subject: Speeding up WPAPSK, by leveraging salt shortcomings

I have mentioned this to magnum (off-list) and will now bring it to the
full user base, to see if someone has a way within the existing JtR to do
this.  Also, since Alexander has been pretty active recently, and knows the
internals of JtR like no other, he may have a good/easy way to do it.


What I think we need to do is somehow treat WPAPSK as a two-part salt.  I
am not sure this is easily workable in JtR.  I can think of a way to do
it, but it is far from elegant.


Here is the issue.


The format has two salts, and two corresponding steps.  One step is very
time consuming, but its salt is VERY weak.  The other is quick, but its
salt is VERY strong.  However, the full salt IS needed (I believe), to be
able to track back to the original input hash line.


The first 'salt' uses only the password and the SSID.  The password is
fully known (during our run), and the SSID is almost always text, set by
the user.  It is usually VERY weak, and not very random.
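For reference, that first, expensive stage is just PBKDF2-HMAC-SHA1 over the
passphrase with the SSID as the salt (4096 iterations, 32-byte result).  A
minimal Python sketch, standard library only:

```python
import hashlib

def wpa_pmk(passphrase: str, ssid: str) -> bytes:
    """The slow first stage: 4096-iteration PBKDF2-HMAC-SHA1 producing
    the 32-byte PMK, keyed only by the passphrase and the SSID."""
    return hashlib.pbkdf2_hmac('sha1', passphrase.encode(),
                               ssid.encode(), 4096, 32)

# The PMK depends on nothing capture-specific: two hash lines that share
# an SSID yield the same PMK for the same candidate password.
assert wpa_pmk('candidate1', 'linksys') == wpa_pmk('candidate1', 'linksys')
```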


These 2 weak salt items (SSID and password) are the only inputs to the
PBKDF2 part of the hash.  Often, the same SSID will be used by many
different physical devices (such as netgear, NETGEAR, linksys,
homenetworking, the-house, etc).  Also, if a user captures multiple hashes
for a single AP, say for instance while war driving, then there will be
many different hash lines within the JtR input file, but ALL of them will
have the same SSID (first salt).  Now, we cannot say for sure that we can
simply drop all but one of these hashes.  It is possible that more than
one password is represented among them; no one can tell until they are all
broken, or until one of them is broken while the others remain unbroken.


Now, for example, let's say we have 10 hashes from the same AP, with the
SSID 'Upstairs_Router', but we do not want to drop any hashes, on the
chance that we have more than one password (a valid assumption).  What the
current JtR does is take 10x as long to test each candidate password,
since JtR fully processes a PBKDF2 for each salt (10 of them), for each
password.  However, we could simply compute the PBKDF2 one time for each
password, then do the tail 'post' processing 10 different ways.
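To illustrate the cost asymmetry, here is a simplified Python sketch.  Note
that cheap_tail() is only a stand-in for the real per-capture post
processing (PTK derivation and MIC verification), not the actual WPA tail;
the point is just its cost relative to the PBKDF2:

```python
import hashlib, hmac

def expensive_pmk(password: bytes, ssid: bytes) -> bytes:
    # The slow part: identical for every capture sharing this SSID.
    return hashlib.pbkdf2_hmac('sha1', password, ssid, 4096, 32)

def cheap_tail(pmk: bytes, capture_data: bytes) -> bytes:
    # Simplified stand-in for the cheap per-capture post processing.
    return hmac.new(pmk, capture_data, hashlib.sha1).digest()

captures = [b'nonce%d' % i for i in range(10)]  # hypothetical per-capture salts
pmk = expensive_pmk(b'candidate', b'Upstairs_Router')  # one PBKDF2 ...
tails = [cheap_tail(pmk, c) for c in captures]         # ... ten cheap tails
```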


So, in reality, we do have 10 unique salts.  However, during our call to
crypt_all, we should only perform the 2-block PBKDF2(4096) one time.
However, the way JtR 'works', it calls something like:

while (more_salts) {
   set_salt(cur_salt++);
   crypt_all();   /* full PBKDF2 run for every salt */
   cmp_all();
}

So, we are presented with each salt independently.


What we need is something like this:

while (more_outer_salts) {
   set_salt(cur_outer_salt++);
   crypt_all();   /* the expensive PBKDF2, once per outer salt (SSID) */

   while (more_inner_salts) {
      set_inner_salt(cur_inner_salt++);
      cmp_all();  /* cmp_all/cmp_one/cmp_exact do the post processing,
                     i.e. the 2nd part of the salt */
   }
}

Now, I have thought of doing this within the existing JtR, like so:

0. Build a structure to hold the results of a PBKDF2 run, keyed by the
SSID.

1. During loading, compute how many unique SSIDs we have.

2. If there is only a single instance of each SSID, then behave just like
it does today.

3. If there are multiples (even if only for some SSIDs), then build a hash
table to hold the result structures.

4. At crypt_all, before we do the PBKDF2, check the hash table.  If we
find data for our SSID, use it, saving ourselves that long computation.

5. If the PBKDF2 for our SSID was not found in the hash table, compute it
and store it into the hash table.

6. At clear_keys, empty the hash table.
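The steps above can be sketched as a memoizing wrapper.  This is Python and
illustrative only; the names (cached_pmk, clear_keys) are hypothetical, and
the real format code batches many candidate keys per crypt_all call:

```python
import hashlib

pmk_cache = {}    # (password, SSID) -> PBKDF2 result; emptied by clear_keys()
pbkdf2_calls = 0  # instrumentation for this sketch only

def cached_pmk(password: bytes, ssid: bytes) -> bytes:
    # Steps 4/5: consult the hash table before paying for the PBKDF2.
    global pbkdf2_calls
    key = (password, ssid)
    if key not in pmk_cache:
        pbkdf2_calls += 1
        pmk_cache[key] = hashlib.pbkdf2_hmac('sha1', password, ssid, 4096, 32)
    return pmk_cache[key]

def clear_keys():
    # Step 6: cached results are only valid for the current candidate keys.
    pmk_cache.clear()

# 10 hash lines that all carry the SSID 'Upstairs_Router':
salts = [(b'Upstairs_Router', b'nonce%d' % i) for i in range(10)]
for ssid, _capture in salts:
    cached_pmk(b'candidate', ssid)   # only the first call runs the PBKDF2
```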


I am pretty sure this algorithm would work, but it is far from elegant.
Also, I am not sure how well it would scale to the GPU code (I have not
even looked at that).  However, the more parallel cracks being done, and
the more unique SSIDs in the input file, the more memory it would require
to hold the temp results.  This could be reduced somewhat, by making it
smart enough not to add any hash-table records for SSIDs that are unique
in the input file.  Now, if you had an input file of 200 hashes, where one
SSID appeared on 2 hash inputs, one SSID on 10 hash inputs, and one SSID
on 100 hash inputs, and the other 88 hashes had unique SSIDs, then it
would only need 3 records in the PBKDF2 'hash' table to store the temp
results.  And in this situation, we have gone from 200 PBKDF2s per
password, down to 91 PBKDF2s.
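The bookkeeping for that 200-hash example can be checked with a few lines
of Python (the SSID names here are made up, only the counts matter):

```python
from collections import Counter

# Hypothetical input file of 200 hashes, matching the example above:
# one SSID on 2 lines, one on 10, one on 100, and 88 unique SSIDs.
ssids = (['abc'] * 2 + ['netgear'] * 10 + ['linksys'] * 100
         + ['unique-%d' % i for i in range(88)])

# One PBKDF2 per candidate password for each *distinct* SSID:
pbkdf2_per_password = len(set(ssids))

# Cache records are only needed for SSIDs that repeat:
cache_records = sum(1 for n in Counter(ssids).values() if n > 1)
```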

