Date: Sat, 25 Jul 2009 14:37:28 -0500
From: "JimF" <>
To: <>
Subject: Work (optimization) in progress, and some ideas

Right now, I am finishing up some changes to the Markov generation code. I 
have gotten it running about 5x faster than before.  The order of the 
generated words is different, but overall, where a word is generated in the 
old code vs. the new code is still very close (just the exact order is 
different).  I do have code that generates 100% the same words, but the new 
code is not complete yet.  It cannot restart from a known starting place, other
than the beginning, at this time.  Thus, resume does not work, and you cannot 
properly have john split the work over several systems.  However, that code 
will be done soon (possibly this weekend), and I will get patches in and 
document the changes.  In real-world speed testing, I am seeing about a 15% 
improvement in speed for the raw-MD5 search I am doing (10 million/s 
base algorithm speed, unsalted, over 30k uncracked candidates).
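To make the resume/work-splitting idea concrete, here is a toy sketch (not JtR's actual Markov code; the alphabet, lengths, and the naive skip-ahead are all illustrative): as long as candidates come out in a fixed, repeatable order, a run can be restarted from a known index, and that same index lets two systems divide the keyspace between them.

```python
from itertools import product

# Toy stand-in for resumable candidate generation (NOT JtR's Markov code):
# a tiny alphabet and max length, chosen only for illustration.
ALPHABET = "abc"
MAX_LEN = 3

def candidates():
    """Yield every candidate word up to MAX_LEN in a fixed, repeatable order."""
    for length in range(1, MAX_LEN + 1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

def resume(start_index):
    """Restart generation from a known point by skipping earlier words.
    (A real implementation would reconstruct the generator state directly
    rather than re-walking, but the observable output is the same.)"""
    for i, word in enumerate(candidates()):
        if i >= start_index:
            yield word

# Splitting work over two systems: system A takes indices 0..19,
# system B resumes at index 20 and takes the rest.
all_words = list(candidates())
part_b = list(resume(20))
assert part_b == all_words[20:]
```

The key property is the deterministic ordering: without it, neither resume nor splitting the keyspace across machines can be done correctly.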

When that is done, I have a few other loose ends to finish up in some 
other code that another member sent me.  Then I will start looking for 
any possible optimizations in the rule-processing code.

I think the code that does prepending is one of the slower cases, and I am sure 
I can speed it up, but by how much, I can't say yet.
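A rough illustration of why prepending tends to be slower than appending (hypothetical buffers, not JtR's rule engine): appending writes one byte at the end, while prepending has to shift every existing byte over first.

```python
# Illustrative only: why prepend costs more than append on a flat buffer.

def append_char(buf: bytearray, c: int) -> None:
    buf.append(c)          # O(1) amortized: write one byte at the end

def prepend_char(buf: bytearray, c: int) -> None:
    buf[:0] = bytes([c])   # O(n): every existing byte must move over

word = bytearray(b"password")
append_char(word, ord("1"))
assert word == bytearray(b"password1")
prepend_char(word, ord("1"))
assert word == bytearray(b"1password1")
```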

It would be nice if we could enhance john's performance to where it runs 
close to the maximal speed of a given algorithm, even when using 
super-fancy word generation techniques.  These fancy word techniques are the 
core of the greatness of John.  For a raw-MD5 search, I can use 2 runs 
of john that 'bench' at about 10 million/s each, but right now the best I can 
get is about 4.5M/s each (against a 30k uncracked list).  I could also run some
specialized (non-john) software that uses both cores and the GPU and gets
about 250 million cracks/s, but john will greatly outperform it, due to testing
all 30,000 candidates each time it hashes a word, AND due to john being able
to handle arbitrary passwords and having VERY powerful generation
techniques, where the other tool has to do a brute-force 'dumb' increment,
so that password #2 looks very close to password #1, etc., etc.  
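The "test all 30,000 candidates at once" point can be sketched as follows (toy target list, not the real data from this post): each generated word is hashed a single time, and the digest is checked against the whole uncracked set with one constant-time lookup, so 30,000 targets cost barely more than one.

```python
import hashlib

# Toy uncracked list: three known plaintexts stand in for 30k real hashes.
targets = {hashlib.md5(w.encode()).hexdigest()
           for w in ("letmein", "hunter2", "qwerty")}

def try_word(word: str) -> bool:
    """Hash the candidate once, then test it against ALL targets at once."""
    digest = hashlib.md5(word.encode()).hexdigest()  # one MD5 per candidate
    return digest in targets                         # O(1) set membership

assert try_word("hunter2")
assert not try_word("dragon")
```

This is why a raw 250M/s brute forcer can still lose: its effective rate must be divided by the number of targets it checks serially, while here one hash covers the entire list.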

But all in all, the closer we can get john to its --test speed in real-world 
cracking, the better.  Once that overhead is minimal, a 150% 
speedup in the format processing will net you 150% faster cracking.  Right now, 
if you are doing complex stuff, you get a lot less of that improvement when the 
algorithm (which SHOULD be the bottleneck in speed) is improved.
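A back-of-the-envelope model of this point (illustrative per-candidate times, not measurements): if each candidate costs algorithm time plus generation/bookkeeping overhead, then speeding up only the algorithm while the overhead stays fixed gives well under the expected gain.

```python
def rate(algo_time_s, overhead_time_s):
    """Candidates/s when each candidate costs algorithm time plus
    word-generation/bookkeeping overhead (hypothetical numbers)."""
    return 1.0 / (algo_time_s + overhead_time_s)

base = rate(1e-7, 1.2e-7)     # ~4.5M/s: a 10M/s algorithm plus heavy overhead
fast = rate(0.5e-7, 1.2e-7)   # algorithm made 2x faster...
assert fast / base < 1.5      # ...yet the overall gain is well under 2x

no_oh = rate(1e-7, 0.0)       # with no overhead, ~10M/s (the --test speed)
assert abs(rate(0.5e-7, 0.0) / no_oh - 2.0) < 1e-9  # now 2x algo = 2x total
```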

Back to my hole, for more coding :)  Jim. 
